arXiv:2307.05065
Saira Khan
Published: 2023-07-11
http://arxiv.org/abs/2307.05065v2
# Metatickles and Death in Damascus

###### Abstract

The prescriptions of our two most prominent strands of decision theory, evidential and causal, differ in a general class of problems known as Newcomb problems. In these, evidential decision theory prescribes choosing a dominated act. Attempts have been made at reconciling the two theories by relying on additional requirements such as ratification ([14]) or "tickles" ([4]). It has been argued that such attempts have failed ([19]; [24]). More recently, Huttegger ([12]) has developed a version of deliberative decision theory that reconciles the prescriptions of the evidentialist and causalist. In this paper, I extend this framework to problems characterised by decision instability, and show that it cannot deliver a resolute answer under a plausible specification of the tickle. I prove that there exists a robust method of determining whether the specification of the tickle matters for all two-state, two-act problems whose payoff tables exhibit some basic mathematical relationships. One upshot is that we have a principled way of knowing _ex-ante_ whether a reconciliation of evidential and causal decision theory is plausible for a wide range of decision problems under this framework. Another upshot is that the tickle approach needs further work to achieve full reconciliation.

## 1 Introduction

Decision theory offers a normative framework for determining rational choice. Its primary components are a set of beliefs (probabilities) over states of the world and a set of valuations (utilities) over the different outcomes of acts in these states of the world. Two prominent forms of decision theory are the causalist and the evidentialist approaches. Causal decision theory determines rational action by evaluating what an agent can expect to bring about by her action. Evidential decision theory determines rational action by evaluating what evidence an agent's action provides her with.
The theories prescribe different acts as rational under a class of problems known as Newcomb problems. It is frequently held that the causalist prescription is the correct one ([27]; [7]; [23]; [19]).1 The characteristic feature of Newcomb problems is that there is a correlation between state and act such that the choosing of the act is understood to be good evidence for a state of the world. The result is that evidentialism prescribes choosing an act which is strictly worse in both states of the world. The evidentialist recognises that the agent cannot causally bring about a different state of the world, but denies that causality is important for practical rationality ([2]). Rather, the rational act should be based on its "news value". That is, an agent ought to prefer one proposition to another just in case she would rather learn that proposition than the other. In light of criticism of this position, attempts have been made - notably by Jeffrey ([14]) and Eells ([4]) - to amend evidential decision theory to better accord with causalist prescriptions.

Footnote 1: Though some, such as [2], [10] and [11], support the evidentialist conclusion.

In this paper, I focus on a version of reconciliation developed by Huttegger ([12]) and show that it cannot reconcile evidential and causal decision theory without further, questionable assumptions. Huttegger uses an idea due to Eells called the "tickle" defence: that the evidentialist becomes increasingly confident that the state of the world is not causally dependent on her act as a result of knowledge of her beliefs and desires. However, Huttegger employs the deliberative apparatus developed by Skyrms ([23]) and thus overcomes some objections to the original Eellsian approach.2 Section 2 of this paper expounds the technical differences between causal and evidential decision theory and briefly outlines two decision problems: the Newcomb problem and Death in Damascus.
Section 3 discusses Eells' approach to resolving the difference between the evidentialist and causalist prescriptions and details Huttegger's proposed amendment using deliberative dynamics. Huttegger's approach delivers the (commonly considered) correct answer for the evidentialist in the Newcomb problem.

Footnote 2: In particular, the assumption that the agent has access to a proposition which fully describes her beliefs and desires. Under Huttegger's approach, this is not assumed but rather reached through a process of deliberation.

Section 4 considers the same framework applied to a class of problems characterised by decision instability. These are problems where, as soon as the agent leans toward performing one action, the other looks preferable. In more technical terms: there is no dominant act (no act which is preferred regardless of the state of the world) and every act is in principle causally unratifiable (after we have chosen the act, we would prefer to have chosen otherwise). In particular, I consider a decision problem known as Death in Damascus ([6]). When the payoff table is symmetric, the received view is that both naive evidentialism and naive causalism (without any deliberative dynamics) remain silent on which is the correct act to perform. When it is asymmetric, the evidentialist is decisive whereas the causalist is trapped in a state of indecision. A more sophisticated (deliberative) causalist may settle upon choosing an act with probability slightly less than 0.5. In this paper, we see that Huttegger's framework, when applied to this problem, cannot straightforwardly reconcile the evidentialist prescription with the prescription of the causalist (both sophisticated and naive).
In Section 5, I offer an original analysis of the deliberative framework to explicate why it is irresolute in the Death in Damascus problem, and prove some general facts about its irresoluteness given a plausible version of the dynamical process, which I call the _shortest-path independence dynamics_. I identify the existence of what I call the _plane of indifference_ in all two-act, two-state decision problems which exhibit the basic mathematical structure of either Newcomb or Death in Damascus problems. The key insight is that whether the specification of the tickle matters depends on the positioning of this plane of indifference. In particular, regardless of the precise operation of the tickle during deliberation - shortest-path or not - the positioning of the plane in the Newcomb problem ensures that deliberation will always lead us to the same conclusion. This is not so in Death in Damascus, and reconciliation of evidential and causal decision theory here requires more questionable assumptions. Section 6 discusses the status of reconciliation and the importance of the proof of the indifference plane for future work in deliberative decision theory. Section 7 concludes and offers a view on the status of the Eellsian project.

## 2 The decision problems

The canonical form of evidential decision theory is attributable to Jeffrey ([13]). Under his framework, states of the world, acts and outcomes are all propositions of the same kind, forming a Boolean algebra. Probabilities and desirabilities may be applied to any of these propositions. Call the Boolean closure of the set of acts, states and outcomes the _decision-relevant_ propositions. The agent's conditional expected utility of an act is calculated from her probabilities and desirabilities for maximally specific decision-relevant propositions.
Formally, the evidential decision theorist prescribes performing the act, \(A\), that maximises the following conditional expected utility formula, where \(D\) denotes desirability, \(P\) denotes probability, and \(S\), the state of the world.

\[EU_{evid}(A)=\sum_{i}D(S_{i}\&A)P(S_{i}|A)\]

There are multiple versions of causal decision theory.3 For simplicity, I present Lewis' ([19]) account. Like the traditional decision-theoretic framework of Savage ([22]), states, acts and outcomes are not propositions of the same Boolean algebra but are separate entities. Probabilities attach to states of the world, and desirabilities or utilities, to outcomes. Lewis builds on the Savage framework but introduces _dependency hypotheses_, which determine the appropriate partition of the state space. A dependency hypothesis is defined as the maximally specific proposition about how outcomes do, and do not, causally depend on the agent's present acts. Formally, the causal decision theorist prescribes performing the act, \(A\), that maximises the following expected utility formula relative to the partition given by the dependency hypothesis.4

Footnote 3: Most notably, the subjunctive accounts of Stalnaker ([27]) and Gibbard and Harper ([7]), as well as the non-subjunctive accounts of Skyrms ([23]) and Lewis ([19]).

Footnote 4: The merit of the evidential approach is that it is partition invariant and much less sensitive to the formal specification of the decision problem. Indeed, it is a more general framework that can be reduced to Savage's decision theory under correct specification of the state space. In comparison, in many causal decision theories, the decision problem must be specified in such a way that each state-act pair is guaranteed to lead to a unique outcome; there is state-act independence; and the desirabilities of the outcomes are not influenced by the state-act pair which eventuated them. None of these restrictions are required in the evidential framework. See Eells ([4]) for discussion.

\[EU_{caus}(A)=\sum_{i}D(S_{i}\&A)P(S_{i})\]

I now present two decision problems: one which has caused particular worry for the evidentialist, and one which has caused worry for both theories, though it is more frequently levied against the causalist ([6]). The first, Newcomb's problem, can be described as follows ([20]). Tomas is in a room with two boxes, one of which is opaque and one of which is transparent. Under the transparent box lies $1,000. Under the opaque box, there is either nothing or $1,000,000, and Tomas does not know which. He is offered the option to take either only the opaque box, or both the transparent one and the opaque one. The catch is that there is a predictor who, if she predicts that Tomas chooses only the opaque box, puts $1,000,000 under it and, if she predicts he chooses both boxes, puts nothing under it. Tomas believes the predictor is reliable. The payoff table is illustrated in Table 1. In this decision problem, the causalist recommends taking both boxes, as this act strictly dominates taking only the opaque box: it has a higher payoff under both states of the world. The naive evidentialist, however, recommends taking only the opaque box, as choosing only the opaque box is good evidence that the predictor put $1,000,000 there. In this decision problem, the evidentialist seems to prescribe the wrong answer and Tomas loses out on a guaranteed $1,000.

The Death in Damascus problem is as follows ([7]). Death works from an appointment book which specifies a time and a place. If and only if Tereza happens to be in the time and place when Death is there, she dies. Suppose Tereza is in Damascus and she accidentally bumps into Death.
| | Box empty | Box not empty |
| --- | --- | --- |
| Take opaque box | 0 | 1,000,000 |
| Take both boxes | 1,000 | 1,001,000 |

Table 1: Newcomb's Problem

He tells her that he is coming for her tomorrow. Her options are either to stay where she is or to flee to Aleppo. The catch is that Death is a reliable predictor of where she will be, so as soon as Tereza believes it is better for her to flee, this constitutes good evidence that Death's appointment for her is in Aleppo, and it seems as though she should stay. Analogously, however, if she decides to stay, this constitutes good evidence that Death knows that she stays, and so she would be better off fleeing. The problem is therefore one of decision instability. The moment Tereza becomes confident in one option, the other appears more attractive. Here, I consider an asymmetric problem where the cost of fleeing is 1 util. The payoff table is given in Table 2, where we assign 10 utils to Tereza's survival.5

Footnote 5: While only the asymmetric case is presented in this paper, for completeness, the symmetric case was also analysed. This exhibits multiple lines of equilibria on the faces of the dynamical cube and therefore constitutes greater instability on the boundary than the asymmetric case. However, some would deny that indecision in such a circumstance constitutes a flaw in the theory. See, for example, [8].

In this decision problem, the naive evidentialist believes that, as Tereza's act is good evidence of the state of the world no matter what she chooses, she ought to stay in Damascus, since she should not pay the extra 1 util to flee to Aleppo. The causalist, however, believes that staying is irrational as it will put the agent in a position from which fleeing looks superior. She is therefore in a state of decision instability. Gibbard and Harper ([7]) argue that this is the correct answer, as neither choice is ratifiable.
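A quick way to see the divergence in Newcomb's problem is to compute both expected-utility formulas on Table 1. The sketch below is my own illustration, not from the paper: the 0.99 predictor reliability and the 0.5 unconditional probability are assumed numbers chosen only to make the contrast vivid.

```python
# Naive evidential vs causal expected utilities for Table 1 (illustrative
# sketch; the 0.99 and 0.5 probabilities are assumptions, not the paper's).
payoffs = {("one-box", "empty"): 0,     ("one-box", "full"): 1_000_000,
           ("two-box", "empty"): 1_000, ("two-box", "full"): 1_001_000}

p_full_given = {"one-box": 0.99, "two-box": 0.01}  # P(box full | act)
p_full = 0.5                                       # unconditional P(box full)

def eu_evid(act):
    """Evidential EU: weight outcomes by P(state | act)."""
    p = p_full_given[act]
    return payoffs[(act, "empty")] * (1 - p) + payoffs[(act, "full")] * p

def eu_caus(act):
    """Causal EU: weight outcomes by the unconditional P(state)."""
    return payoffs[(act, "empty")] * (1 - p_full) + payoffs[(act, "full")] * p_full

assert eu_evid("one-box") > eu_evid("two-box")            # ~990,000 vs ~11,000
assert eu_caus("two-box") - eu_caus("one-box") == 1_000   # dominance margin
```

The causal margin of $1,000 is the same whatever value `p_full` takes, which is just the dominance argument in numerical form.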
Other forms of causal decision theory, for example, the deliberative frameworks of Joyce ([16]) and Arntzenius ([3]), prescribe the mixed act of fleeing with probability 0.474.6 In Skyrms' and Huttegger's deliberative dynamics, the agent only has access to pure acts and is therefore in a state of indecision when deliberation assigns an act a probability of less than 1. In Joyce's framework, the mixed act is a choice for the agent should she have access to a random chance device she may use to pick her final, pure act. That is, a chance device which will determine that she flees with probability 0.474. One might ask whether the evidentialist should be reconciled with the naive causalist or the deliberative causalist. If we sought instability similar to the naive causalist's, it will be clear from the analysis which follows that this will not be achieved: in many cases the deliberative evidentialist is decided. So I ask whether reconciliation with the Joycean causalist is possible on Huttegger's model - whether evidential decision theory can prescribe the mixed act of fleeing with probability 0.474. First, we must explicate the framework.

Footnote 6: This is derived using Joyce's ([16]) framework for Murder Lesion applied to Death in Damascus, assuming conditional probabilities \(P(S2|A2)=P(S1|A1)=0.99\). Under this framework, one's unconditional probabilities are revised in light of the expected utility calculation of an act in conjunction with the probabilistic correlation between state and act. More precisely, let \(\alpha\) be a real number; \(P_{t+1}(S2)=P_{t}(S2|EU_{t}(A2)=\alpha)\neq P_{t}(S2)\) when \(\alpha\neq 0\), so the probability of a state of the world is updated based on its probability conditional upon the expected utility of an act. Further, let \(x\) and \(y\) be real numbers; if \(P_{t}(A2)<1\) and \(x>y\), then \(P_{t}(A2|EU_{t}(A2)=x\ \&\ EU_{t}(\sim A2)=y)>P_{t}(A2)\), so the choice probability of an act is updated based on its expected utility. The iterative process of updating one's choice probability will continue in this fashion until \(P_{t}(A2)=P_{t+1}(A2)=P_{t}(A2|EU_{t}(A2))\), so that information about its expected utility does not change its choice probability. As in Skyrms' ([24]) deliberational framework, this occurs when the expected utilities of the two acts are equal.

| | Death in Damascus | Death in Aleppo |
| --- | --- | --- |
| Stay in Damascus | 0 | 10 |
| Flee to Aleppo | 9 | -1 |

Table 2: Asymmetric Death in Damascus Problem

## 3 A brief history of the metatickle approach and Huttegger's dynamics

A prominent evidentialist attempt to prescribe the causalist action in Newcomb problems is attributable to Eells ([4]; [5]). This has been referred to as the "tickle" or "metatickle" defence ([19]; [24]).7 Eells argues that the mistake being made by the naive evidentialist in the Newcomb problem is the inference from some underlying common cause of both state and act to a dependence _of_ the state _on_ the act. Eells argues that the only way in which the underlying cause could affect an agent's act is through the agent's beliefs and desires since, under our decision theories, these are the entities that determine action.8 This implies that if the agent had full knowledge of his beliefs and desires, knowledge of the presence or absence of the common cause would be irrelevant to his act.

Footnote 7: It is so named for the following thought experiment. Suppose the agent feels a tickle in his left pinkie just in case the predictor has put $1,000,000 in the opaque box.
Then, even though the presence of the money depends probabilistically on the agent's act, the tickle is sufficient to screen off the relevance of that act to the state of the world - the tickle tells the agent all he needs to know. A tickle may not always be available but, according to Eells, a "metatickle" is. This is a proposition which describes the agent's beliefs and desires.

Footnote 8: Eells suggests the common cause could not affect an agent's act by changing his decision rule. In particular, "the agent believes that the causal influence of the common cause is sufficiently insignificant as to be irrelevant to the eventual determination of which act is correct in light of his beliefs and desires... This is because he believes that the causal influence of whatever is causally responsible for his rationality - his training, genetic make-up, and so on - will be overwhelming" ([4, 188]).

The intuition is clear with a simple example. Consider a decision problem with the same structure as the Newcomb problem, but which instead concerns whether or not to smoke cigarettes. Suppose that there is a genetic cause, \(C\), that results in both lung cancer and a proclivity to enjoy cigarettes, but smoking does not itself result in lung cancer. Smoking is correlated with lung cancer but there is no causal state-act dependence. Causal decision theory recognises this independence and thus prescribes smoking insofar as it is enjoyable to the agent. The naive evidentialist prescribes abstaining, as smoking is good evidence for the presence of the gene which determines lung cancer. The Eellsian evidential decision theorist, however, believes that the only way the common cause can affect the agent's acts is through his beliefs and desires. Let the proposition which describes his beliefs and desires be denoted \(T\), for metatickle. We have:

\[P(A|T\&C)=P(A|T\&\sim C)\]

If an agent has full knowledge of her beliefs and desires, \(P(T)=1\).
So, in the presence of the metatickle,

\[P(A|C)=P(A|\sim C)\]

By symmetry of probabilistic independence,

\[P(C|A)=P(C|\sim A)\]

Since the cause is not probabilistically dependent on the act in the presence of the metatickle, neither is the state of the world. This means

\[P(S|A\&T)=P(S|T)\]

Eells believed that the proposition \(T\) was available to an agent ([4]; [5]). Conditional upon \(T\), state and act are independent, and if this is the case, evidential decision theory will make the correct prescription: to smoke. Knowledge of the beliefs and desires of the kind caused by the common cause screens off what was previously thought to be evidence about the state of the world: the act. Analogous reasoning will lead the Eellsian evidential decision theorist to two-box in Newcomb's problem; the act of two-boxing is irrelevant to the $1,000,000 being there or not, and one should therefore choose the strictly dominant act.9 The reasoning behind the metatickle approach is diagrammed in Figure 1.

Footnote 9: See also Reichenbach's principle of screening off ([20]).

For both Eells and Jeffrey ([12]), it is the agent's ability to anticipate her own choices that screens off the evidential import of her acts for states of the world.10 However, unlike Eells, Jeffrey does not make reference to common causes. For Jeffrey, deliberation is what allows the sophisticated evidentialist to screen off the correlation between act and state which caused her to disagree with the causalist. He states, "it is my credences and desirabilities at the end of deliberation that correspond to the preferences in the light of which I act, i.e., it is my final credence and desirability functions [...] not the initial ones [...] that underlie my choice" ([12, 486]). The idea is that the agent should not choose to maximise news value as she now sees it, but as she now expects herself to estimate it after having made the decision. This is known as "ratificationism".
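The screening-off chain above can be checked numerically on the smoking example. The sketch below builds a toy common-cause distribution - all the probability values are my illustrative assumptions - in which the gene \(C\) influences the act \(A\) only through the belief-desire proposition \(T\):

```python
# Toy common-cause model for Eells' screening-off argument (all numbers are
# illustrative assumptions). C = gene, T = metatickle (pro-smoking
# belief/desire profile), A = smoking; A depends on C only through T.
from itertools import product

p_C = 0.3
p_T_given = {True: 0.9, False: 0.1}    # P(T | C) and P(T | ~C)
p_A_given_T = {True: 0.8, False: 0.2}  # P(A | T): act depends only on T

def joint(c, t, a):
    pc = p_C if c else 1 - p_C
    pt = p_T_given[c] if t else 1 - p_T_given[c]
    pa = p_A_given_T[t] if a else 1 - p_A_given_T[t]
    return pc * pt * pa

def cond(event, given):
    """P(event | given) by brute-force summation over the 8 atoms."""
    num = sum(joint(c, t, a) for c, t, a in product([True, False], repeat=3)
              if event(c, t, a) and given(c, t, a))
    den = sum(joint(c, t, a) for c, t, a in product([True, False], repeat=3)
              if given(c, t, a))
    return num / den

# Unconditionally, the act is evidence for the gene: P(C|A) != P(C|~A) ...
p_c_a = cond(lambda c, t, a: c, lambda c, t, a: a)
p_c_nota = cond(lambda c, t, a: c, lambda c, t, a: not a)
assert abs(p_c_a - p_c_nota) > 0.1

# ... but conditional on the metatickle, act and cause are independent:
p_c_at = cond(lambda c, t, a: c, lambda c, t, a: a and t)
p_c_t = cond(lambda c, t, a: c, lambda c, t, a: t)
assert abs(p_c_at - p_c_t) < 1e-9
```

The first pair of assertions is the naive evidentialist's worry; the second is the screening-off identity \(P(C|A\&T)=P(C|T)\), which holds by construction whenever the act depends on the cause only through \(T\).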
However, Huttegger believes that both Eells and Jeffrey did not adequately specify how the agent comes to fully know her beliefs and desires and achieve this screening off ([11]).11 To fill this lacuna, he first turns to the deliberational dynamics of Skyrms ([23]).

Footnote 10: Indeed, Skyrms ([24, 74]) refers to Jeffrey's idea as a "hypothetical version of the metatickle defense".

Skyrms models a deliberational process where, as an agent deliberates about which act to choose, this is incorporated into her up-to-date probabilities and desirabilities. The agent has some information at the start of deliberation upon which she can assess expected utility, but the deliberation process itself generates information that causes her to recalculate her expected utility. Suppose we assign probabilities to acts that represent the agent's belief that she will choose that particular act at the end of deliberation. Since states and acts are correlated, act probabilities provide evidence about states of the world which the agent can use to update her expected utility. Deliberation then pushes the agent in the direction of the act that has the higher expected utility in her current assessment. In particular, the change in her choice probability of choosing both boxes, denoted \(P(A2)\), is proportional to the difference in expected utility, so that we have:

\[\frac{dP(A2)}{dt}\propto EU(A2)-EU(A1)\]

And

\[\frac{dP(A2)}{dt}=\begin{cases}\text{positive if }EU(A2)>EU(A1)\\ \text{negative if }EU(A2)<EU(A1)\\ 0\text{ if }EU(A2)=EU(A1)\end{cases}\]

We will refer to this as the "adaptive dynamics".12 It is assumed, in both Skyrms' and Huttegger's frameworks, that the adaptive dynamics operates continuously, though others, such as Eells ([5]), have developed discontinuous approaches.

Figure 1: Diagrammatic depiction of Eells' metatickle defence, where the causal connection between act and state is erroneously drawn on the basis of the common cause
Since this paper is engaging with Huttegger's reconciliation project, I will assume a continuous adaptive dynamics. For Skyrms, the updating of one's choice probability continues until such a time as the agent reaches probability 1 of performing a certain act, or the agent reaches a mixed equilibrium where there is no change in her choice probabilities (\(\frac{dP(A2)}{dt}=0\)). The basic intuition capturing the metatickle is that, if Tomas leans toward only taking the one box, the probability of the $1,000,000 being there increases, and so he begins to believe that choosing both boxes is better. Let \(S2\) denote the presence of the $1,000,000. Formally, as \(P(A2)\) approaches 0 or 1, the conditional probabilities \(P(S2|A1)\) and \(P(S2|A2)\) approach 1 and 0, respectively. The value of \(P(A2)\) where the expected utility of \(A2\) and the expected utility of \(A1\) are equal is where deliberation stops, and this is Tomas' final probability of two-boxing. On Skyrms' model, this does not in fact end in a reconciliation of evidential and causal decision theory. Supposing Tomas is an evidentialist and begins on the fence, he ends deliberation most probably one-boxing, but also attaches some positive probability to two-boxing.

Footnote 12: Skyrms also refers to this informally as a dynamical rule which "seeks the good" ([26, 31]). He describes such rules as "qualitatively Bayesian" in the sense that the dynamical rule should reflect the agent's knowledge that she is an expected utility maximiser and the status of her present expected utilities as an expectation of her final utilities. Informally, such rules state that an act's probability should increase if the act has utility greater than the status quo, and that the probability of all acts with utilities greater than the status quo should increase. Frequently used dynamical rules that meet these conditions are the replicator or Nash dynamics, and the dynamics of Brown and von Neumann ([26]). Formally, \(\frac{dP(A)}{dt}=\frac{cov(A)-P(A)\sum_{j}cov(A)_{j}}{k+\sum_{j}cov(A)_{j}}\) and \(\frac{dP(A)}{dt}=cov(A)^{2}-P(A)\sum_{j}cov(A)_{j}^{2}\), respectively, where the constant \(k\) represents how quickly the agent adjusts her act probabilities.

To this, Eells ([5]) introduces a model called "continual conditional expected utility maximization" which embraces Skyrms' idea that deliberation generates information upon which we should update our expected utilities, but also introduces the notion that agents may face an urgency to act. Thus, depending on whether one wants to reach a decision quickly, one might eschew the states of indecision that Skyrms claims the evidentialist is stuck in. Eells believes this reconciles the prescriptions of evidential and causal decision theory on Newcomb's problem, resulting in two-boxing. However, as Huttegger ([12]) rightly points out, this is a large deviation from traditional evidential decision theory. Whether an agent rushes to a decision or procrastinates are features of the agent not well captured by her preferences. Therefore, the proposed solution arguably fails.

Huttegger takes a different approach to reconciliation in light of Skyrms' findings. His amendment to Skyrms' model is a relaxation of the assumption that, as \(P(A2)\) approaches 0 or 1, the conditional probabilities \(P(S2|A1)\) and \(P(S2|A2)\) approach 1 and 0. That is, the conditional probabilities of the states given acts are not functions of our choice probabilities. Indeed, on the original Eellsian account, there is nothing over and above one's informed beliefs and desires upon which the agent's decision is based; convergence towards one or the other act is not required for the appropriate screening off.
Instead, conditional probabilities change by a separate "independence dynamics" as a function of time, or stages, in the deliberational process, moving closer to one another over the course of deliberation.13 The independence dynamics is formally defined as follows.14

\[\frac{dP(S2|A1)}{dt}=\begin{cases}\text{negative if }P(S2|A1)>P(S2|A2)\\ \text{positive if }P(S2|A1)<P(S2|A2)\end{cases}\]

Likewise,

\[\frac{dP(S2|A2)}{dt}=\begin{cases}\text{negative if }P(S2|A2)>P(S2|A1)\\ \text{positive if }P(S2|A2)<P(S2|A1)\end{cases}\]

There are also no reappearances of correlations, so

\[\frac{d[P(S2|A2)-P(S2|A1)]}{dt}=0\text{ if }P(S2|A2)=P(S2|A1)\]

Footnote 14: If \(P(A1)=0\), then \(P(S2|A1)\) is not well defined. Huttegger states this obstacle can be overcome by requiring that the dynamics of \(P(S2|A1)\) is continuous with the dynamics for arbitrarily close states that have \(P(A1)>0\).

Under this dynamical process, evidential deliberation converges to two-boxing, since the choice probability of two-boxing is governed by the adaptive dynamics when state and act are independent. It is precisely the introduction of the independence dynamics that brings us to this reconciliation. If the evidentialist does not believe her act is evidence for a state of the world, she in effect uses the same probabilities the causalist uses. Furthermore, while in Skyrms' work the end point of deliberation is where the choice probability of an act is \(1\) or \(\frac{dP(A2)}{dt}=0\), this is not the case under Huttegger's framework.15 Rather, deliberation, in most cases, will continue until \(\frac{dP(A2)}{dt}=0\) _and_ the agent reaches state-act independence. I say "in most cases" since Huttegger does not assume deliberation always leads to full state-act independence. This is because deliberation can sometimes fail to provide all the information we need, for example, if the agent believes that the predictor in Newcomb's problem knows more about how he makes decisions than he knows about himself.
If this is so, there are hidden factors influencing his choice which he cannot access via deliberation. Nonetheless, Huttegger states that situations where agents' acts are determined solely on the basis of their desires, beliefs and decision rule are the "most natural setting for decision theory" ([11, 22]). As such, I will be considering those cases in which the agent's deliberative process is sufficient to screen off state-act correlations.

Footnote 15: In Skyrms ([23]), that the adaptive dynamics continues until the probability of an act equals \(1\), and does not exceed \(1\), is guaranteed by the fact that this is when deliberation ends. This is not the case under Huttegger's framework; deliberation does not end when the probability of an act reaches \(1\). Therefore, as stated here, it is possible that \(P(A2)\) exceeds \(1\), since the rule that the change in choice probability is proportional to the difference in expected utility does not ensure that \(P(A2)\) remains within the probability simplex. As such, we stipulate that the adaptive dynamical rules which are permissible under this general formulation are those which effectively slow as they reach the boundary, therefore remaining within the probability simplex over the course of deliberation.

In Huttegger's framework, the reason that the independence dynamics can continue after the adaptive dynamics concludes is that the operation of the independence dynamics is independent of the adaptive dynamics: it is not a function of the agent's choice probabilities. It is important to note that, on this interpretation, the relative strength of the independence and adaptive dynamics becomes relevant to where the agent ends deliberation. Huttegger's work finds that the exact specification of the operation of the independence dynamics relative to the adaptive dynamics does not matter for Eells' reconciliation project on Newcomb's problem.
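To see how the two dynamics interact on Newcomb's problem, here is a minimal Euler-style simulation. The functional forms - a replicator-style adaptive rule that slows near the boundary (as stipulated in footnote 15) and a mean-reverting independence rule - together with all step-size constants are my illustrative assumptions, not Huttegger's exact specification:

```python
# Minimal sketch of adaptive + independence dynamics on Newcomb's problem.
# Functional forms and constants are illustrative assumptions. Payoffs are
# in $1,000s: one-box pays 0/1000, two-box pays 1/1001.

def deliberate(p=0.5, q1=0.99, q2=0.01, k_adapt=0.1, k_indep=0.05, steps=2000):
    """p = P(A2) (two-box); q1 = P(S2|A1), q2 = P(S2|A2); S2 = money present."""
    for _ in range(steps):
        eu1 = 1000 * q1                 # EU(one-box)
        eu2 = 1 * (1 - q2) + 1001 * q2  # EU(two-box)
        # Adaptive dynamics: "seeks the good"; the p*(1-p) factor slows the
        # motion near the boundary, keeping p inside the probability simplex.
        diff = max(-1.0, min(1.0, eu2 - eu1))
        p += k_adapt * p * (1 - p) * diff
        # Independence dynamics: the conditionals drift toward each other
        # and stay put once equal (no reappearing correlations).
        mid = (q1 + q2) / 2
        q1 += k_indep * (mid - q1)
        q2 += k_indep * (mid - q2)
    return p, q1, q2

p_final, q1_final, q2_final = deliberate()
```

On these assumptions the run ends with the conditionals (nearly) equal and \(P(A2)\) near 1: once the evolving tickle has screened off the state-act correlation, the $1,000 dominance margin takes over and the deliberating evidentialist two-boxes, in line with the convergence result described above.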
In this paper, I show that it does matter for other decision problems on which evidential and causal decision theory diverge. I will not reconstruct Huttegger's work on Newcomb's problem here, but rather apply his same framework to Death in Damascus. I begin by determining the dynamics on the boundaries and discuss the more complicated interior dynamics in Sections 5 and 6.

## 4 Death in Damascus for the deliberative evidentialist

In the language of metatickles, both Tereza's act of staying or fleeing and Death's appointment in Damascus or Aleppo are effects of a common cause; that is, the cognitive architecture of the agent upon which Death bases his appointment, sometimes referred to as the agent's "type" ([17]). Thus, conditional on the metatickle, \(T\), which fully captures Tereza's beliefs and desires, states and acts are independent, and knowledge of the beliefs and desires of the kind caused by the common cause screens off the evidence that her choice provided for Death's location. Without making reference to common causes, but noting that deliberation can screen off state-act correlations, Huttegger introduces the independence dynamics which, along with the adaptive dynamics, describes the changes in an agent's choice probability over the course of her deliberation.

Under Huttegger's framework, \(P(S2|A2)\) and \(P(S2|A1)\) may vary independently, so the deliberational space is represented in three dimensions: one being \(P(S2|A1)\); another \(P(S2|A2)\); and the final being Tereza's probability of fleeing, \(P(A2)\), all of which change during the deliberative process. The deliberational space is depicted in Figure 2. Note that the cube does not represent a phase diagram, as the magnitude of the movement in any particular direction has not been specified. It should rather be thought of as a qualitative tool by which we may analyse where deliberation leads us. Recall the conditional expected utility formulae of evidential decision theory.
That is, \[EU_{evid}(A1)=D(S1\&A1)P(S1|A1)+D(S2\&A1)P(S2|A1)\] \[EU_{evid}(A2)=D(S1\&A2)P(S1|A2)+D(S2\&A2)P(S2|A2)\] Given these formulae and the logical fact that \(P(S1|A1)+P(S2|A1)=1\) and \(P(S1|A2)+P(S2|A2)=1\) (one or other state of the world must obtain given our act), we may discern the movement of \(P(A2)\) on the faces of the cube by calculating the expected utility of both acts. First, let us address the front face, indicated in green, where \(P(S2|A1)=1\). The top edge is where \(P(S2|A2)=1\). Here we have \(EU(A1)=10\) and \(EU(A2)=-1\). Since \(EU(A2)<EU(A1)\), by the adaptive dynamics, \(P(A2)\) decreases. Similarly, on the bottom edge of the front face, where \(P(S2|A2)=0\), \(EU(A2)<EU(A1)\).

Figure 2: Deliberative evidentialist reasoning under Huttegger's framework

It can be verified that all points in between the edges also lead to a final choice probability of \(P(A2)=0\) on the front face of the cube. This is intuitive as, if \(P(S2|A1)=1\), Tereza can outsmart Death. That is, if the probability of Death being in Aleppo given that Tereza stays in Damascus is 1, she should surely stay in Damascus and not pay the extra 1 util to flee. Now consider the back face, indicated in yellow, where \(P(S2|A1)=0\). The top edge is where \(P(S2|A2)=1\). Here we have \(EU(A1)=0\) and \(EU(A2)=-1\). Again \(P(A2)\) decreases. However, on the bottom edge of the back face, the dynamics look different. Here, \(P(S2|A2)=0\), so \(EU(A1)=0\) and \(EU(A2)=9\). Since \(EU(A2)>EU(A1)\), \(P(A2)\) increases. The exact point at which Tereza prefers fleeing over staying will be explored in the next section using what I call the _plane of indifference_. However, we have not yet considered the operation of the independence dynamics on the left and right faces, indicated in pink.
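The edge calculations above can be checked numerically. The following sketch hard-codes the payoffs used in the text (staying pays 0 or 10; fleeing pays 9 or -1); the function names are my own labels, not part of Huttegger's formalism.

```python
# Evidential expected utilities for the Death in Damascus payoffs in the text:
# stay (A1): 0 if Death is in Damascus (S1), 10 if in Aleppo (S2);
# flee (A2): 9 if Death is in Damascus, -1 if in Aleppo.
def eu_stay(p_s2_given_a1):
    # EU(A1) = 0*P(S1|A1) + 10*P(S2|A1)
    return 10 * p_s2_given_a1

def eu_flee(p_s2_given_a2):
    # EU(A2) = 9*P(S1|A2) - 1*P(S2|A2)
    return 9 * (1 - p_s2_given_a2) - p_s2_given_a2

# Front face (P(S2|A1) = 1): staying wins on both edges, so P(A2) decreases.
assert eu_stay(1.0) == 10
assert eu_flee(1.0) == -1 and eu_flee(0.0) == 9  # both below EU(A1) = 10

# Back face (P(S2|A1) = 0): P(A2) falls on the top edge, rises on the bottom.
assert eu_flee(1.0) < eu_stay(0.0) < eu_flee(0.0)
```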
This leads us to what Huttegger calls the Eells-Jeffrey manifold, represented by the grey diagonal face in the cube, which consists of all points where \(P(S2|A2)=P(S2)=P(S2|A1)\), in other words, where there is state-act independence. Movement toward the Eells-Jeffrey manifold is given by the evolving metatickle which screens off states from acts during an agent's deliberation. If our metatickle is sufficient to reach full state-act independence, we must determine the movement on the manifold itself. All areas above the bold blue line move to \(P(A2)=0\) and all areas below it move to \(P(A2)=1\) by the adaptive dynamics. The bold blue line is where \(P(S2|A2)=P(S2|A1)=0.45\). Here, \(EU(A1)=EU(A2)=4.5\), so there is no movement in \(P(A2)\) as per our specification of the adaptive dynamics. I have not yet discussed the dynamical movement in much of the interior of the cube, which is the subject of the next section, but first it is worth noting the following facts. Here, we have multiple equilibria represented by the bold blue line. All of these choice probabilities of \(P(A2)\) render the expected utility of staying equal to that of fleeing, despite the fact that the unconditional probability of Death being in Aleppo is 0.45.16 However, this is also the case for the deliberative causalist. Though the mixed act of fleeing with probability 0.474 is the end point of deliberation, at this point, all other acts have equal expected utility, so all are equally permissible ([15]). Here, one might inquire what then renders the mixed act the correct answer. The reason is that this is the uniquely ratifiable act (should one have the option to execute it using a chance device that represents this probability distribution).
That is, it is the only act where, upon knowledge that one has chosen it, one would not prefer otherwise.17

Figure 3: Evidentialist reasoning on the Eells-Jeffrey manifold

Footnote 17: This is also supported by consideration of the fact that the mixed act would constitute the Nash equilibrium of a normal form game with Death and Tereza as players. For discussion of the connection between ratifiability in deliberative decision theory and Nash equilibria in game theory, see [26]; [9]; [18]; and [28].

In Sections 5 and 6, I show that the prescription of the mixed act under Huttegger's framework hinges upon two further conditions: (i) the independence dynamics does not take the "shortest path" to state-act independence, and (ii) the relative strength of the adaptive and independence dynamics must be such that they reach the Eells-Jeffrey manifold exactly where \(P(A2)=0.474\). Since these conditions imply that deliberation must proceed via a very specific route to the precise choice probability, it will not deliver reconciliation under many plausible specifications of the deliberative process. First, I consider what happens under one plausible specification of the independence dynamics.

## 5 Shortest-path independence and the plane of indifference

In this section, I offer an original analysis of Huttegger's deliberative framework given a plausible version of the dynamical process, which I call the _shortest-path independence dynamics_. I prove the existence of what I call the _plane of indifference_, which explains why the framework is irresolute in the case of Death in Damascus and not in Newcomb's problem. I then show that, under Huttegger's framework, this plane of indifference exists in all two-act, two-state decision problems which exhibit the basic mathematical structure of either Newcomb or decision instability problems.
The upshot is that whether the precise specification of the independence dynamics matters for reconciliation depends on the positioning of this plane of indifference. This provides a principled way of knowing _ex-ante_ whether a reconciliation of evidential and causal decision theory is plausible for a wide range of decision problems under this framework. Informally, the independence dynamics drives the agent's conditional probabilities toward one another over time, though the exact way in which this occurs is left open in Huttegger's work. One way the independence dynamics could operate is by adjusting one starting conditional probability to match the other. For example, if Tereza's initial value of \(P(S2|A2)\) is 0.99 and her initial value of \(P(S2|A1)\) is 0.01, she adjusts up the value of \(P(S2|A1)\) until it also equals 0.99. However, this does not seem particularly rational. Given the description of the decision problem, both of her initial conditional probabilities reflect Death's reliability in predicting her action, so there appears no reason to count one rather than the other as more viable for informing her unconditional credence in the state of the world. A more plausible version of the independence dynamics would be one that concludes at the average across her two initial conditional probabilities. Since a movement in the direction of the manifold for one conditional probability then implies an equal movement in the direction of the manifold for the other, the independence dynamics decrees - absent its interaction with the adaptive dynamics - that Tereza's conditional probabilities move along the straight line that captures the shortest path to the manifold. This is illustrated in Figure 4, which represents a slice through the dynamical cube; the diagonal line represents the manifold.18 Footnote 18: Since the analysis is qualitative, this may extend to sufficiently similar independence dynamics, though this has not yet been considered.
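As a minimal illustration (my own discretisation, not Huttegger's official specification), the averaging rule just described can be modelled as each conditional probability moving toward the common average, so the pair travels perpendicular to the manifold and concludes at the average of the initial values:

```python
def shortest_path_step(q1, q2, rate=0.25):
    # Move both conditional probabilities toward their average. The
    # displacement (avg - q1, avg - q2) is proportional to (q2 - q1, q1 - q2),
    # i.e. perpendicular to the manifold q1 = q2, and it preserves q1 + q2.
    avg = (q1 + q2) / 2
    return q1 + rate * (avg - q1), q2 + rate * (avg - q2)

q1, q2 = 0.01, 0.99  # Tereza's initial P(S2|A1) and P(S2|A2) from the example
for _ in range(200):
    q1, q2 = shortest_path_step(q1, q2)

# Deliberation concludes at the average of the two initial values.
assert abs(q1 - 0.5) < 1e-9 and abs(q2 - 0.5) < 1e-9
```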
To see what this means for our deliberative process, first we must return to an important feature of the dynamical cube previously overlooked. In our earlier illustration, the line of equilibria on the manifold represented a situation where there was no movement prescribed by the adaptive dynamics; any choice probability of \(P(A2)\) was acceptable since all mixtures of acts had equal expected utility. Moving off the Eells-Jeffrey manifold, we see that this is not only a feature existing at state-act independence but, as I will show, there exists a whole plane on which the adaptive dynamics prescribes no change in \(P(A2)\). This occurs where the two conditional probabilities of state given act, \(P(S2|A1)\) and \(P(S2|A2)\), sum to \(0.9\). The fact that this is a plane of the cube follows from the fact that two axes of the 3-dimensional space represent these conditional probabilities. The fact that the adaptive dynamics decrees no change in choice probability on this plane can be seen from the following. Let \(P(S2|A1)+P(S2|A2)=0.9\) and note it is true by definition that \(P(S1|A1)=1-P(S2|A1)\) and \(P(S1|A2)=1-P(S2|A2)\). Then \[EU(A1)=0P(S1|A1)+10P(S2|A1)=10P(S2|A1)\] And \[EU(A2)=9P(S1|A2)-1P(S2|A2)=9-10P(S2|A2)=9-10(0.9-P(S2|A1))=10P(S2|A1)\] Since the expected utilities of both acts are equal as defined in terms of \(P(S2|A1)\), the adaptive dynamics prescribes no movement on the plane given by \(P(S2|A1)+P(S2|A2)=0.9\). Figure 5 illustrates what I call the _plane of indifference_. The key feature of this plane is that if one begins deliberation on the plane, since \(P(A2)\) does not change, one simply moves by the independence dynamics toward the line of equilibria and ends deliberation with the same choice probability as she began with. Of utmost interest is what happens when we begin deliberation either below or above the plane of indifference.
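A quick numerical check of this plane, using the same payoffs: the expected utilities coincide exactly when the two conditional probabilities sum to 0.9, and the preferred act flips as the sum crosses that value.

```python
def eu_stay(q1):   # q1 = P(S2|A1); staying pays 0 or 10
    return 10 * q1

def eu_flee(q2):   # q2 = P(S2|A2); fleeing pays 9 or -1
    return 9 * (1 - q2) - q2

# On the plane q1 + q2 = 0.9 the adaptive dynamics prescribes no movement ...
for q1 in [0.0, 0.2, 0.45, 0.7, 0.9]:
    assert abs(eu_stay(q1) - eu_flee(0.9 - q1)) < 1e-12

# ... and the preferred act flips as the sum crosses 0.9.
assert eu_stay(0.3) < eu_flee(0.4)   # sum 0.7 < 0.9: fleeing favoured
assert eu_stay(0.6) > eu_flee(0.5)   # sum 1.1 > 0.9: staying favoured
```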
It turns out that if Tereza begins at any point below the plane, where \(P(S2|A1)+P(S2|A2)<0.9\), her deliberation concludes that she should flee to Aleppo with probability 1. If she begins above the plane, where \(P(S2|A1)+P(S2|A2)>0.9\), Tereza concludes she must stay in Damascus, and flee to Aleppo with probability 0. For example, consider \(P(S2|A1)+P(S2|A2)=1\). Here, we have a 2-dimensional plane which sits above the plane of indifference. All initial choice probabilities will lead Tereza to staying. To see this, note that since we have imposed the constraint \(P(S2|A2)+P(S2|A1)=1\), and by logical fact, \(P(S1|A1)+P(S2|A1)=1\) and \(P(S1|A2)+P(S2|A2)=1\), our constraint implies \(P(S1|A2)+P(S1|A1)=1\). Given these formulae, we may calculate our expected utilities.

Figure 4: Shortest path to Eells-Jeffrey manifold

First, consider the top edge of the plane, where \(P(S2|A2)=1\). We see that \(EU(A1)=0\) and \(EU(A2)=-1\). Since \(EU(A2)<EU(A1)\), by the adaptive dynamics, \(P(A2)\) must reduce. Similarly, on the bottom edge of the plane where \(P(S2|A2)=0\), \(EU(A2)<EU(A1)\). Since \(P(S2|A2)+P(S2|A1)=1\), shortest-path independence dynamics drives her unconditional probability \(P(S2)\) to \(0.5\). In the middle of the plane on its intersection with the Eells-Jeffrey manifold, \(EU(A1)=5\) and \(EU(A2)=4\) so, again, \(EU(A2)<EU(A1)\). As a result, deliberation moves Tereza toward staying in Damascus until we reach a stable equilibrium point where \(P(A2)=0\) and \(P(S2|A2)=P(S2)=P(S2|A1)=0.5\). Analogous reasoning applies when we begin on the other side of the plane and \(P(S2|A1)+P(S2|A2)<0.9\). In what follows, I will prove that the adaptive dynamics is governed by whether we are below or above the plane of indifference for a general payoff table representing a wide range of decision instability problems. Let \(a\) denote the utility assigned to survival and \(b\) the utility assigned to death.
Since we consider an asymmetric payoff table, let \(c\) denote the cost of fleeing. Our payoff table represents a general version of a wide range of asymmetric decision instability problems where \(a>b\) and \(c\leq a-b\). Other problems with a similar structure are the Murder Lesion problem and the Psychopath Button ([6]; [3]; [16]).

\begin{table} \begin{tabular}{l|l|l} & S1 & S2 \\ \hline A1 & b & a \\ \hline A2 & a - c & b - c \\ \end{tabular} \end{table} Table 3: Generalised payoff table for asymmetric decision instability problem

Figure 5: The plane of indifference

The plane of indifference can be defined in terms of the utilities in the payoff table. Recall that the adaptive dynamics prescribes no movement in \(P(A2)\) when \(EU(A1)=EU(A2)\). This is when \[bP(S1|A1)+aP(S2|A1)=(a-c)P(S1|A2)+(b-c)P(S2|A2)\] By substitution and rearranging, we get \[P(S2|A1)+P(S2|A2)=\frac{a-b-c}{a-b}\] We must prove that the sum is defined and that it is greater than or equal to \(0\) and less than or equal to \(2\) in order for it to appropriately represent an agent's conditional probabilities. First, by definition of the payoff table, \(a>b\), so the denominator is positive and the expression is defined. Second, \(\frac{a-b-c}{a-b}\geq 0\) requires that the numerator is also nonnegative. Note that since \(a>b\), this will be satisfied as long as \(c\leq a-b\). Of course, this is true from the definition of the asymmetric decision instability problem. If the cost of fleeing were greater than the difference between survival and death, we would not be in a case of asymmetric Death in Damascus, as it would never be preferable to flee. Finally, \(\frac{a-b-c}{a-b}\leq 2\iff a-b-c\leq 2(a-b)\iff -c\leq a-b\). This is satisfied by definition of the payoff table again, as \(c\) is positive and \(a>b\), so the left hand side is negative whilst the right is positive.
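The derivation can be spot-checked with exact arithmetic. The sketch below (my own check, using the text's example values \(a=10\), \(b=0\), \(c=1\)) confirms that \(EU(A1)=EU(A2)\) exactly when the conditional probabilities sum to \(\frac{a-b-c}{a-b}\):

```python
from fractions import Fraction as F

def indifferent(a, b, c, q1, q2):
    # EU(A1) = b*P(S1|A1) + a*P(S2|A1); EU(A2) = (a-c)*P(S1|A2) + (b-c)*P(S2|A2)
    eu1 = b * (1 - q1) + a * q1
    eu2 = (a - c) * (1 - q2) + (b - c) * q2
    return eu1 == eu2

a, b, c = 10, 0, 1
plane = F(a - b - c, a - b)  # (a-b-c)/(a-b) = 9/10

for q1 in [F(0), F(1, 4), F(9, 20), F(7, 10)]:
    assert indifferent(a, b, c, q1, plane - q1)                 # on the plane
    assert not indifferent(a, b, c, q1, plane - q1 + F(1, 20))  # just off it
```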
From this equation for the plane of indifference, we can see that as the cost of fleeing increases, the right hand side of the equation reduces, meaning the plane of indifference will move downwards in the diagonal space of the dynamical cube. This decreases the area of the cube where Tereza's deliberation leads her to flee. In other words, the greater the cost of fleeing, the more sure Tereza must be that Death is in Damascus than that he is in Aleppo in order that rationality decree she purchases the ticket to flee.19 Now that we have proved the existence of an indifference plane, we can demonstrate how the adaptive dynamics will operate either side of it in a general setting. Footnote 19: If \(c=0\), we are in a symmetric decision instability problem where the plane of indifference intersects the Eells-Jeffrey manifold at \(P(S2)=0.5\). Since \(a-b\) is positive (the utility of living exceeds that of dying) we can easily replace our equalities in the above existence proof with inequalities. The direction of the inequality does not change throughout the proof. It follows that: \[P(S2|A1)+P(S2|A2)>\frac{a-b-c}{a-b}\iff EU(A1)>EU(A2)\] \[P(S2|A1)+P(S2|A2)<\frac{a-b-c}{a-b}\iff EU(A1)<EU(A2)\] This means that if the agent begins deliberation above the plane, she will end deliberation with \(P(A2)=0\) and if she begins below it, she will end deliberation with \(P(A2)=1\). Here, one might ask whether her dynamical deliberation could cross over the plane. In principle, it could. However, this would be to violate the plausible stipulation we have made that the ideal deliberator approaches the Eells-Jeffrey manifold via the shortest-path independence dynamics. By definition of how I have specified the shortest-path dynamics, the path toward the manifold is perpendicular to the manifold. This can be seen in Figure 4. We can also prove that the indifference plane is perpendicular to the manifold by showing that the dot product of the normal vectors of both planes is \(0\). 
Since the normal vector of a plane is perpendicular to it, it is sufficient to show that the normal vectors are perpendicular to each other in order to show that the planes are perpendicular. The plane of indifference is given by \(P(S2|A2)+P(S2|A1)=\frac{a-b-c}{a-b}\) and the Eells-Jeffrey manifold is given by \(P(S2|A2)-P(S2|A1)=0\). The normal vectors are therefore \(A=\langle 1,1\rangle\) and \(B=\langle 1,-1\rangle\). The dot product is thus \(A\cdot B=0\). The planes are therefore perpendicular and this will hold for any value of \(\frac{a-b-c}{a-b}\). It is clear, therefore, that the shortest-path dynamics decrees dynamical adjustments of conditional probabilities that run parallel to the plane of indifference and do not cross it. Given this feature, one's initial starting point entirely determines the ending point of deliberation. This is true of more general cases than the one considered here, as long as the payoff table bears the same mathematical relationship to the one presented above, where \(a>b\) and \(c\leq a-b\), and raises important questions for the reconciliation of causal and evidential decision theory for problems of decision instability in Huttegger's deliberative framework. Now let us consider why this problem does not arise in Newcomb's problem. In short, the reason is that the structure of the payoff table renders the plane of indifference _parallel_ to the Eells-Jeffrey manifold. This means that, above or below the plane, shortest-path independence dynamics will necessarily pass through it to the Eells-Jeffrey manifold where adaptive dynamics dictates that Tomas takes both boxes. Consider the following generalised payoff table where \(a>b\) and \(c\leq a-b\). Other problems with a similar structure are the Cholesterol problem, Smoking problem, and Solomon's problem ([22]; [6]; [3]). As above, the plane of indifference is found where \(EU(A1)=EU(A2)\). 
This is when \[bP(S1|A1)+aP(S2|A1)=(b+c)P(S1|A2)+(a+c)P(S2|A2)\] By substitution and rearranging, we get \[P(S2|A1)-P(S2|A2)=\frac{c}{a-b}\] We must prove that the difference is defined and that it lies between -1 and 1 inclusive in order for it to appropriately represent an agent's conditional probabilities. First, by definition of the payoff table \(a>b\), so the denominator is positive and the expression is defined. Second, \(-1\leq\frac{c}{a-b}\iff b-a\leq c\). This is satisfied by definition of the Newcomb payoff table, since if \(c\) were strictly less than \(b-a\), \(c\) would be negative, and there would be no benefit to two-boxing. Finally, \(\frac{c}{a-b}\leq 1\iff c\leq a-b\). This is again satisfied by the definition of Newcomb payoffs, since if \(c\) were strictly greater than \(a-b\), this would mean \(c+b>a\) and it would therefore always be better to two-box. Notice here that the relationship that defines the plane is not a sum but a difference. This means that the plane is parallel to the Eells-Jeffrey manifold. This is easily proved by taking the ratio of the components of their normal vectors and showing that they are the same. Indeed, they are both 1. This will hold for any value of \(\frac{c}{a-b}\). It will be illuminating to rewrite the above condition as \(P(S2|A2)=P(S2|A1)-\frac{c}{a-b}\) so we see the indifference plane sits below the manifold. This is illustrated in Figure 6.

\begin{table} \begin{tabular}{c|l|l} & S1 & S2 \\ \hline A1 & b & a \\ \hline A2 & b + c & a + c \\ \end{tabular} \end{table} Table 4: Generalised payoff table for Newcomb's Problem

Figure 6: The plane of indifference for the Newcomb problem

The movement decreed by the adaptive dynamics on either side of the plane in the Newcomb problem is given by examining the following biconditional statements. As before, the proof proceeds straightforwardly from the existence proof, replacing the equalities with inequalities without any change in direction, as the term \(a-b\) is positive.
\[P(S2|A1)>P(S2|A2)+\frac{c}{a-b}\iff EU(A1)>EU(A2)\] \[P(S2|A1)<P(S2|A2)+\frac{c}{a-b}\iff EU(A1)<EU(A2)\] We can see from Figure 6 that when \(P(S2|A1)>P(S2|A2)\), we are below the Eells-Jeffrey manifold. So when \(P(S2|A1)>P(S2|A2)+\frac{c}{a-b}\), we are below the plane of indifference. Here, the biconditional statements above reveal that the rational act according to our adaptive dynamics is to one-box. By analogous reasoning, all points above the indifference plane end deliberation in two-boxing. As the independence dynamics moves the agent towards the Eells-Jeffrey manifold, and the Eells-Jeffrey manifold lies above the indifference plane, the adaptive dynamics decrees that the agent ought to two-box in Newcomb's problem, corroborating Huttegger's conclusion. As the value of \(c\), the monetary sum under the transparent box, increases, the plane of indifference shifts downward in diagonal space away from the Eells-Jeffrey manifold. As a result, the region of the cube where Tomas should rationally one-box reduces. This is intuitive as, by description of the problem, the agent only receives the value \(c\) when he two-boxes, so the greater the value of \(c\), the greater the incentive to two-box. The denominator \(a-b\) captures the difference between the contents of the opaque box in the two states of the world. If this difference is large, the plane shifts upwards, expanding the region of points which decree one-boxing as rational. This again is intuitive, as the greater the incentive to one-box, the less sure the agent need be that the predictor put \(a\) there in order for him to rationally choose it. Note that when \(c=0\) the plane of indifference is exactly equivalent to the Eells-Jeffrey manifold. It might be tempting to think that if there is nothing under the transparent box, the agent should one-box, but this is not the correct answer.
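These Newcomb-side claims can also be verified with exact arithmetic; the payoff values below (\(a=100\), \(b=0\), \(c=10\)) are illustrative choices of my own, not from the text.

```python
from fractions import Fraction as F

def eu_one_box(a, b, q1):
    # EU(A1) = b*P(S1|A1) + a*P(S2|A1)
    return b * (1 - q1) + a * q1

def eu_two_box(a, b, c, q2):
    # EU(A2) = (b+c)*P(S1|A2) + (a+c)*P(S2|A2)
    return (b + c) * (1 - q2) + (a + c) * q2

a, b, c = 100, 0, 10
gap = F(c, a - b)  # the plane: P(S2|A1) - P(S2|A2) = c/(a-b)

for q2 in [F(0), F(1, 2), F(4, 5)]:
    # Indifference holds exactly on the plane ...
    assert eu_one_box(a, b, q2 + gap) == eu_two_box(a, b, c, q2)
    # ... while on the manifold (q1 = q2) two-boxing is better by exactly c.
    assert eu_two_box(a, b, c, q2) - eu_one_box(a, b, q2) == c

# When c = 0 the plane coincides with the manifold: the agent is indifferent.
assert eu_one_box(a, b, F(1, 2)) == eu_two_box(a, b, 0, F(1, 2))
```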
Recall that when we have reached state-act independence, Tomas does not see his act as evidence about the state of the world, so he is rationally indifferent between one-boxing and two-boxing. The causalist answer is the same, as the payoffs are the same under both states of the world. The preceding discussion has shown that it is the plane of indifference which determines rational action in both decision problems. The crucial difference, however, is that regardless of the exact specification of the independence dynamics, the agent's trajectory of deliberation in Newcomb's problem may pass through the indifference plane to the Eells-Jeffrey manifold, since the two are parallel. This means that where one begins deliberation does not determine where one ends in the same way that it does in the Death in Damascus problem. Here, if we accept the plausibility of the shortest-path independence dynamics, movement toward the manifold never crosses the indifference plane, since the independence path and plane of indifference are parallel to one another. This analysis shows that the relatively straightforward reconciliation of causal and evidential deliberation for Newcomb's problem under Huttegger's deliberative framework is not so straightforwardly achieved in problems of decision instability. Much more would have to be said on the nature of the independence dynamics in order to determine whether we may cross the plane of indifference and end deliberation with a resolute answer. In the next section, I turn to these further requirements.

## 6 On the possibility of reconciliation

Recall that, under Huttegger's framework, deliberation ends when the adaptive dynamics prescribes no further movement and when we reach state-act independence.
In this section, I show that this will only lead to a reconciliation under two very specific conditions: (i) the independence dynamics must be specified such that it does not take the shortest path to the manifold, and (ii) the adaptive dynamics and independence dynamics must have a relative speed such that they reach the Eells-Jeffrey manifold at precisely the point of reconciliation. As we saw from the previous section, if we take the shortest-path independence dynamics to be true, whether Tereza begins above or below the plane of indifference determines where she will end deliberation. The only way, therefore, that she could end deliberation with \(P(A2)=0.474\) is if she begins deliberation with her choice probability at \(P(A2)=0.474\) and her conditional probabilities precisely on the plane of indifference (where they sum to \(0.9\)). In this case, shortest-path independence will move her directly to the line of equilibria without any change in her choice probability. This is a case where there appears to be no deliberation at all driving her conclusion, and it is therefore implausible as a reconciliation of evidential and causal decision theory via deliberation. Of course, there may be viable independence dynamics other than shortest-path independence, so let us relax this assumption. However, even if we allow violation of shortest-path independence, it must be the case that the relative speed of the adaptive and independence dynamics is such that the agent reaches the Eells-Jeffrey manifold precisely at the point where it intersects the plane of indifference at \(P(A2)=0.474\). If Tereza reaches the manifold on the equilibrium line at any point to the left or right of this, \(P(A2)\neq 0.474\) and \(\frac{dP(A2)}{dt}=0\), so we do not achieve reconciliation.
If Tereza reaches the manifold at any other point above or below the equilibrium line, the adaptive dynamics leads her to \(P(A2)=0\) or \(1\), depending on whether this is above or below the plane of indifference. It is only if the two conditions I have specified obtain that we may witness trajectories such as those depicted in Figure 7, but the reconciliation here appears forced.

Figure 7: Diagrammatic portrayal of the deliberative evidentialist reasoning under Huttegger's framework. The red arrows represent possible trajectories to reconciliation. Both trajectories cross the plane of indifference, where the upper red arrow begins above it and the lower red arrow begins below it.

Again, it is important to recognise that this was not an issue in the case of Newcomb's problem. Here, regardless of the specification of the independence dynamics, since the Eells-Jeffrey manifold lies on the side of the indifference plane where two-boxing is rational, as long as deliberation leads us to state-act independence, the framework will always prescribe the correct answer. The relative strength of the independence and adaptive dynamics may lead Tomas to different points on the line of equilibria where the Eells-Jeffrey manifold intersects the right face of the cube, but this does not change Tomas' ultimate action, as \(P(A2)=1\). Where he concludes deliberation only determines his beliefs about his winnings. That is, he believes himself to be more fortunate if he ends deliberation where the probability of the $1,000,000 being there, \(P(S2)\), is high, and less fortunate if he ends deliberation where it is low. The analysis I have offered in this section therefore represents a principled way to delineate when the specification of the independence dynamics matters for the reconciliation of evidential and causal decision theory under Huttegger's framework. In particular, it depends on whether the plane of indifference intersects the Eells-Jeffrey manifold or not.
If it does not, implying it lies entirely to one side of the Eells-Jeffrey manifold, the specification of the independence dynamics does not matter. Any independence dynamics that moves the agent in the direction of state-act independence over time will lead to the same answer. As shown by the generalised proofs, for any problem representing the mathematical structure of the generalised Newcomb's problem, the plane of indifference will not intersect the Eells-Jeffrey manifold. For any problem representing the mathematical structure of the generalised Death in Damascus problem, the plane of indifference will be perpendicular to the Eells-Jeffrey manifold, and the specification of the independence dynamics, as well as its strength relative to that of the adaptive dynamics, matters for where the agent concludes deliberation. We therefore have a robust way of determining _ex-ante_ whether reconciliation of evidential and causal decision theory is plausible for a wide range of two-state, two-act decision problems under this framework. Note that what is important is not whether the plane of indifference is perpendicular or parallel to the Eells-Jeffrey manifold, but whether it _intersects_ the manifold, meaning that the analysis here could in principle be extended to other decision problems, where the angle of the plane of indifference relative to the manifold differs, in order to determine whether specification of the independence dynamics matters in these problems. Furthermore, we would expect the key result - that the relative strength of the adaptive and independence dynamics matters for reconciliation - to hold in larger (_n_ × _n_) decision problems, though this has not as yet been investigated.

## 7 Conclusion

The prescriptions of evidential and causal decision theory come apart in two general classes of problems known as Newcomb problems and decision instability problems.
Huttegger ([11]) has developed a framework for evidential deliberation building on Eells' ([3]) metatickle approach and Skyrms' ([24]) deliberation dynamics which reconciles the prescriptions of the evidentialist and causalist in Newcomb's problem. Since deliberation results in increasing awareness of our beliefs and desires (and these are the mechanisms by which our action is determined), our acts no longer provide information about the state of the world. That is, deliberation screens off the state-act correlation which previously caused the evidentialist to choose the dominated act in Newcomb's problem. Huttegger's more sophisticated, deliberative evidentialist agent agrees with the causalist in preferring two-boxing. In this paper, I have extended Huttegger's framework to consider an asymmetric case of decision instability: the Death in Damascus problem. I have shown that, in this context, Skyrms' adaptive dynamics and Huttegger's independence dynamics are insufficient to recommend a decisive answer. In Section 5, I consider a plausible version of the independence dynamics, shortest-path independence, and explore the particular features of the deliberative process that this independence dynamics decrees in Death in Damascus. We find that the dynamics decrees different answers for different initial starting points of deliberation. I prove the statements made here are applicable to a more general class of problems of decision instability, as long as the payoff table accords with some simple mathematical relationships. In particular, I show that there exists what I call a _plane of indifference_ where either act is equally acceptable, and this plane of indifference entails that where one concludes deliberation depends entirely on where one begins deliberation. This, however, is not true of the Newcomb case. There are three upshots to this work. 
First, whilst application of the Eellsian metatickle to deliberation could straightforwardly lead to the correct answer in Newcomb's problem, this notion is not so easily extended to problems of decision instability, and the reconciliation requires assumptions that appear forced. Second, the proof of the plane of indifference for all two-state, two-act problems whose payoff tables exhibit the basic mathematical relationships in Section 5 provides us with a principled way of delineating those cases where the specification of the independence dynamics matters for a reconciliation of evidential and causal decision theory within this framework. Specifically, if the plane of indifference never intersects the Eells-Jeffrey manifold, the specification of the independence dynamics does not matter for reconciliation. If it does, reconciliation requires additional, and potentially questionable, assumptions about the exact specification of the adaptive and independence dynamics. Finally, this work shows that the metatickle approach has so far failed to reconcile evidential and causal decision theory. Eells' and Jeffrey's original ideas were widely criticised for not providing details of how an agent arrives at knowledge of their own beliefs and desires, involving implicit assumptions, or idealisations that limit the metatickle approach ([12]; [2]; [11]; [15]; [19]; [24]). Attempts to resolve this using the theory of deliberation have shown it does not result in a reconciliation, rather the evidentialist is left in a state of indecision in Newcomb's problem ([26]). Eells' amendment ([5]) to his original idea then introduced spurious assumptions about other features of the agent, such as her felt urgency to act, which are marked deviations from traditional evidential decision theory ([12]). 
In this paper, I have shown that the most recent attempt to salvage Eells' idea, due to Huttegger ([12]), also fails to deliver a reconciliation of evidential and causal decision theory in problems of decision instability. Future work on reconciliation would need to pay heed to the fact that the results depend heavily on the interaction of the adaptive and independence dynamics, and any attempt at reconciliation would need to specify their relative strength such that evidential decision theory agrees with causal decision theory in both Newcomb and decision-instability problems.
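The role of starting points can be made vivid with a toy simulation of adaptive deliberation in a symmetric Death in Damascus case. This is a minimal sketch under illustrative assumptions — the survival utility of 10, Death's predictive accuracy of 0.8, and the gradient-style update rule are made up, and the code is not Skyrms' or Huttegger's exact dynamics:

```python
def deliberate(p0, accuracy=0.8, step=0.01, rounds=2000):
    """Toy adaptive deliberation for a symmetric Death in Damascus case.

    p is the agent's current inclination to go to Aleppo.  Death's
    location is evidentially correlated with that inclination, so each
    update recomputes the news value of the two acts and nudges the
    inclination toward the act with the higher expected utility.
    """
    p = p0
    for _ in range(rounds):
        q = accuracy * p + (1 - accuracy) * (1 - p)  # credence: Death in Aleppo
        eu_aleppo = 10 * (1 - q)    # survive iff Death waits in Damascus
        eu_damascus = 10 * q        # survive iff Death waits in Aleppo
        p = min(1.0, max(0.0, p + step * (eu_aleppo - eu_damascus)))
    return p
```

In this symmetric toy case every interior starting point is driven to the mixed point p = 1/2, the deliberational equilibrium at which the agent is indifferent; the asymmetric case discussed above is precisely where this uniqueness breaks down and the endpoint of deliberation tracks its starting point.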
2303.17196
Secure multiparty quantum computations for greatest common divisor and private set intersection
We present a secure multiparty quantum computation (MPQC) for computing greatest common divisor (GCD) based on quantum multiparty private set union (PSU) by Liu, Yang, and Li. As the first step, we improve the security of the MPQC protocol for computing least common multiple (LCM) by Liu and Li by constructing an efficient exact quantum period-finding algorithm (EQPA) as a subroutine instead of the standard (probabilistic) Shor's quantum period-finding algorithm (QPA). The use of EQPA instead of the standard QPA guarantees the correctness of the protocol without repetitions. The improvement of LCM protocol also improves the private set union protocol which is based on computing LCM. Finally, using the same idea of the PSU protocol, we construct a quantum multiparty private set intersection (PSI) by transforming the PSI problem into the problem of computing GCD. Performance analysis shows that the correctness and the unconditional security in the semihonest model are guaranteed directly from the correctness and the security of the subroutine protocols (LCM and PSU protocols). Moreover, we show that the complexity of the proposed protocols is polynomial in the size of the secret inputs and the number of parties.
Muhammad Imran
2023-03-30T07:33:30Z
http://arxiv.org/abs/2303.17196v3
# Secure multiparty quantum computations for greatest common divisor and private set intersection

###### Abstract

We present a secure multiparty quantum computation (MPQC) for computing greatest common divisor (GCD) based on quantum multiparty private set union (PSU) by Liu, Yang, and Li. As the first step, we improve the security of the MPQC protocol for computing least common multiple (LCM) by Liu and Li by constructing an efficient exact quantum period-finding algorithm (EQPA) as a subroutine instead of the standard (probabilistic) Shor's quantum period-finding algorithm (QPA). The use of EQPA instead of the standard QPA guarantees the correctness of the protocol without repetitions. The improvement of the LCM protocol also improves the private set union protocol, which is based on computing LCM. Finally, using the same idea as the PSU protocol, we construct a quantum multiparty private set intersection (PSI) by transforming the PSI problem into the problem of computing GCD. Performance analysis shows that the correctness and the unconditional security in the semihonest model are guaranteed directly from the correctness and the security of the subroutine protocols (LCM and PSU protocols). Moreover, we show that the complexity of the proposed protocols is polynomial in the size of the secret inputs and the number of parties.

Keywords: Multiparty quantum computation, Greatest common divisor, Quantum private set intersection, Exact quantum period-finding algorithm.

## 1 Introduction

Secure multiparty computation (MPC) is a subfield of cryptography with the goal of creating methods for parties to jointly compute a function over their private inputs. Unlike traditional cryptographic tasks, in which cryptography assures the security and integrity of communication or storage against an adversary outside the system, the cryptography in this model protects participants' privacy from each other. 
Since Yao's seminal work [1] in 1982, MPC has attracted a great deal of attention because it has many important applications, such as secret sharing, electronic voting, and privacy-preserving computation. On the other hand, as the field of quantum computing evolves, cryptography is one of the most influenced fields. Quantum cryptography, which can be regarded as the combination of quantum mechanics and classical cryptography, has been widely investigated in numerous branches such as quantum key distribution [2, 3, 4, 5, 6], quantum secret sharing [7, 8, 9], and quantum key agreement [10, 11, 12], including multiparty quantum computation (MPQC). It is an important and interesting question whether the principles of quantum computing can be applied to MPC. General-purpose secure multiparty quantum computation was first studied by Crepeau, Gottesman, and Smith [13]. It is important to find efficient MPC protocols for specific problems, since this also improves the performance of general-purpose MPC. Algorithms for computing the greatest common divisor (GCD) and least common multiple (LCM) are important tasks in many cryptographic protocols, so it is very useful to have MPC protocols for these tasks. Moreover, Liu, Yang, and Li in [14] show that the availability of an LCM protocol in the quantum setting directly leads to quantum multiparty private set union by using Shor's factoring algorithm [15]. The same idea can be used to construct quantum multiparty private set intersection, given an MPQC for GCD. Private set intersection (PSI) is a cryptographic primitive that allows two parties to learn the intersection of their input sets and nothing else. There has been a significant amount of work on privacy-preserving set operations, including set intersection, as it has numerous applications that are not limited to cryptographic purposes, such as testing human genomes [16], contact discovery [17], remote diagnostics [18], record linkage [19], and many more. 
Most existing PSI protocols are based on traditional classical cryptosystems, which are known to be vulnerable in the quantum domain. This motivates the need for quantum-computer-resistant PSI, and applying quantum cryptography in the design of PSI is an ideal approach to address the issue. In order to construct quantum multiparty PSI using a similar approach to the PSU in [14], an MPQC for GCD is required. However, it has so far been unclear how to construct an MPQC for GCD. According to the formula \(\gcd(x,y)=\frac{xy}{\operatorname{lcm}(x,y)}\), one can obtain the greatest common divisor by combining protocols for multiplication and LCM. However, the formula is only applicable to two integers, and for the two-party case this is clearly not secure, since knowing the product and the LCM always reveals the other party's input. Hence, the recursive generalization of the formula, i.e., \(\gcd(a,b,c)=\gcd(a,\gcd(b,c))\), does not help to build a secure protocol. A simple observation also shows that computing GCD cannot be done using the approach of [20] for LCM, which is based on the period-finding algorithm. Fortunately, the extension of the LCM protocol to private set union [14] is a promising route to a secure protocol for GCD. Specifically, we can transform the GCD problem into the private set union problem by working iteratively on the set of prime factors of the secret inputs.

### Our contributions

In this paper, the first MPQC for computing the greatest common divisor is proposed. The protocol is mainly based on the quantum multiparty PSU by Liu, Yang, and Li in [14]. 
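The two-integer identity above, and the reason its naive product form does not extend to more inputs, can be checked directly (a throwaway sketch with made-up values):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# For two integers the identity gcd(x, y) = x*y / lcm(x, y) holds:
x, y = 12, 18
assert gcd(x, y) == x * y // lcm(x, y) == 6

# But the analogous product formula fails beyond two inputs:
a, b, c = 2, 2, 2
assert gcd(a, gcd(b, c)) == 2               # the recursive gcd is fine mathematically
assert a * b * c // lcm(a, lcm(b, c)) == 4  # the product/lcm analogue is not the gcd
```

(The recursive form is mathematically unproblematic; as noted above, it is the security of the two-party building block that fails.)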
Since the PSU protocol is based on the MPQC protocol by Li and Liu in [20], we first revisit that protocol and improve its performance (success probability, security, and efficiency) by constructing a new efficient exact quantum period-finding algorithm (EQPA) and using it as a subroutine instead of the standard (probabilistic) quantum period-finding algorithm (QPA). Finally, using the same idea as the PSU protocol, we construct a quantum multiparty private set intersection (PSI) by transforming the PSI problem into the problem of computing GCD. Concretely, we make the following contributions: 1. We present the first efficient exact quantum period-finding algorithm. The only additional information required is a multiple \(m\) of the period \(r\). Our assumption of having this information about the period \(r\) is not standard. In fact, knowing a multiple of the period of the multiplicative group modulo the number \(m\) would make it possible to factor \(m\) in randomized classical polynomial time. However, there are particular situations where a multiple of the period is known (or can be made available), such as finding periods of elements in a finite field and constructing cryptographic protocols. The exact quantum period-finding algorithm modulo \(m\) runs in time \(\mathcal{O}(\log^{4}m)\), while the standard Shor's period-finding algorithm has complexity \(\mathcal{O}(\log\log m(\log^{3}m))\). The main idea of the algorithm is based on the amplitude amplification method, following the exact quantum algorithm for Simon's problem by Brassard and Hoyer [21]. 2. We propose an improved version of the LCM protocol [20] using EQPA instead of the standard QPA. The proposed MPQC for LCM improves the security of Li-Liu's protocol. 
The total computation and communication complexities of the protocol are \(\mathcal{O}(n^{4}m^{4})\) and \(\mathcal{O}(n^{2}m)\) respectively, where \(n\) is the number of parties and \(m\) is the size of the inputs, while Li-Liu's computation complexity is \(\mathcal{O}(n^{3}m^{2})\) with the same communication complexity. However, considering the success probability of the standard QPA, Li-Liu's protocol needs \(\mathcal{O}(\log(nm))\) repetitions, and the repetition itself leads to possible security issues. Therefore, the modified protocol is more secure as the number of participants and the size of the inputs grow, since the repetition can enable certain attacks. 3. Furthermore, we also propose an efficient secure MPQC for computing GCD and private set intersection. Specifically, all parties prepare the prime factors of their inputs by using Shor's factoring algorithm and then use the quantum multiparty private set union [14] to jointly compute the union of all prime factors of their secret inputs. Then, by applying the voting procedure in [20] iteratively, they are able to obtain the greatest power of each prime factor that simultaneously divides all the inputs, and hence the GCD of their secret inputs is found. Finally, as a straightforward implication, we obtain a quantum multiparty PSI based on the GCD protocol, using a similar approach to the quantum multiparty PSU [14].

### Outline

The rest of the paper is organized as follows. In Section 2, we briefly recall all the necessary tools and protocols for our results: Shor's factoring algorithm, Li-Liu's protocol for LCM, and the quantum multiparty private set union. Section 3 is fully devoted to exact quantum algorithms. In Section 3.1, we briefly discuss amplitude amplification, a common technique for derandomizing quantum algorithms. In Section 3.2, we give a detailed construction of our exact quantum period-finding algorithm. 
Section 4 contains all the proposed MPQC protocols: an improved version of Li-Liu's protocol, the GCD protocol, and the private set intersection protocol.

## 2 Preliminaries

In this section, we give high-level descriptions of Shor's factoring algorithm, Li-Liu's MPQC protocol for least common multiple, and the quantum multiparty private set union by Liu, Yang, and Li.

### Shor's factoring algorithm

The well-known Shor's factoring algorithm is able to factor any large integer \(N\) efficiently. It is based on a reduction of factoring to the period-finding problem (observed by Miller in the 1970s). The main tool of Shor's factoring (to factor a large integer \(N\)) is the quantum period-finding algorithm (QPA), which finds the period of the function \(f:\mathbb{Z}\rightarrow\mathbb{Z}_{N}\) defined by \(f(x)=a^{x}\bmod N\) (where \(a\) is chosen at random), i.e., the smallest positive integer \(r\) such that \(f(x+r)=f(x)\). The quantum period-finding algorithm modulo \(N\) requires \(\mathcal{O}((\log n)n^{3})\) quantum operations, with \(\mathcal{O}(\log n)\) uses of modular exponentiation, where \(n=\log N\). The main subroutines of Shor's period-finding algorithm are modular exponentiation and the quantum Fourier transform. Modular exponentiation needs \(\mathcal{O}(n)\) multiplications [22], and the quantum Fourier transform circuit is quadratic in \(n\) [15]. Hence, the procedure to find a factor of an odd number \(N\) using the quantum period-finding algorithm is as follows: choose a random \(x\bmod N\), find its period \(r\) using the QPA, and finally compute \(\gcd(x^{r/2}-1,N)\). Since \((x^{r/2}-1)(x^{r/2}+1)=x^{r}-1=0\bmod N\), the quantity \(\gcd(x^{r/2}-1,N)\) fails to be a nontrivial divisor of \(N\) only when \(r\) is odd. Hence, the procedure yields a nontrivial divisor of \(N\) with probability at least \(1-1/2^{k-1}\), where \(k\) is the number of distinct odd prime factors of \(N\). 
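The reduction just described can be sketched classically, with a brute-force search standing in for the QPA (illustrative only; the quantum algorithm exists precisely because this classical period search scales exponentially in the bit length of \(N\)):

```python
from math import gcd

def classical_period(a, N):
    """Brute-force stand-in for the QPA: least r > 0 with a^r = 1 (mod N).
    Requires gcd(a, N) = 1 so that a is invertible modulo N."""
    r, acc = 1, a % N
    while acc != 1:
        acc = (acc * a) % N
        r += 1
    return r

def find_factor(x, N):
    """Miller's reduction: from the period r of x mod N, try gcd(x^(r/2) - 1, N)."""
    if gcd(x, N) != 1:
        return gcd(x, N)      # lucky draw: x already shares a factor with N
    r = classical_period(x, N)
    if r % 2 == 1:
        return None           # odd period: the reduction yields nothing
    d = gcd(pow(x, r // 2, N) - 1, N)
    return d if 1 < d < N else None
```

For example, x = 2 and N = 15 give r = 4 and gcd(2^2 - 1, 15) = 3, a nontrivial factor.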
The factoring process is then iterated over the obtained nontrivial factors, so that all prime factors of \(N\) can be found.

### Li-Liu's MPQC for least common multiple

Multiparty least common multiple problem: Assume that there are \(n\) parties: \(P_{1},\ldots,P_{n}\), where each party \(P_{k}\) has a secret integer \(r_{k}\in\{0,1,\ldots,2^{m}-1\}\). All \(n\) parties want to jointly compute \(\operatorname{lcm}(r_{1},\ldots,r_{n})\) without revealing their respective secrets. The key idea of Li-Liu's protocol is based on the observation that, given functions \(f_{1},\ldots,f_{n}\) with periods \(r_{1},\ldots,r_{n}\) respectively, the function \(f(x)=(f_{1}(x),\ldots,f_{n}(x))\) has period \(r=\operatorname{lcm}(r_{1},\ldots,r_{n})\). Thus, each party \(P_{i}\) is equipped with the oracle of the secret function \(f_{i}\) (\(|x\rangle|0\rangle\mapsto|x\rangle|f_{i}(x)\rangle\)) and hence together they compute the superposition: \[\frac{1}{\sqrt{N}}\sum_{x\in\mathbb{Z}_{N}}|x\rangle|f_{1}(x)\rangle\ldots|f_{n}(x)\rangle\] where \(N=2^{m}\). Therefore, the period \(r=\operatorname{lcm}(r_{1},\ldots,r_{n})\) can be found by applying the quantum period-finding algorithm. However, because of the probabilistic nature of the QPA, an additional voting procedure is required to check the correctness of the QPA's output. Namely, each party votes on whether the output divides their secret input. If the output divides all the secret inputs, then the output passes the verification. The voting procedure is based on the multiparty quantum summation by Shi _et al._ in [23]. The total computation and communication complexities of Li-Liu's protocol are \(\mathcal{O}(n^{3}m^{2})\) and \(\mathcal{O}(n^{2}m)\) respectively. However, considering the success probability of the standard QPA, Li-Liu's protocol needs \(\mathcal{O}(\log(nm))\) repetitions. 
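The observation that the juxtaposed function inherits the lcm of the component periods is easy to verify by brute force on toy values (the moduli below are made up):

```python
from math import gcd

def period(f, bound):
    """Least r > 0 with f(x + r) == f(x) for all x in [0, bound) (brute force)."""
    for r in range(1, bound + 1):
        if all(f(x + r) == f(x) for x in range(bound)):
            return r
    return None

r1, r2, r3 = 4, 6, 10
f = lambda x: (x % r1, x % r2, x % r3)         # f(x) = (f_1(x), ..., f_n(x))

expected = r1 * r2 // gcd(r1, r2)              # lcm(4, 6) = 12
expected = expected * r3 // gcd(expected, r3)  # lcm(12, 10) = 60
assert period(f, 2 * expected) == expected == 60
```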
A simple observation shows that the repetition itself can lead to possible attacks: specifically, in each repetition the parties can learn a factor of the others' secrets from the incorrect outputs and their own secrets. Hence, the risk increases as the number of repetitions grows (as the size \(m\) of the inputs grows), especially in the malicious model.

### Quantum multiparty private set union

Private set union problem: Assume that there are \(n\) parties: \(P_{1},\ldots,P_{n}\), where each party \(P_{i}\) has a secret set \(S_{i}\subseteq U\), where \(U\) is the complete set of cardinality \(N\): \(2^{m-1}<N\leq 2^{m}\). All \(n\) parties want to jointly compute \(\bigcup S_{i}\) without revealing their respective secrets. The key idea of the quantum multiparty private set union proposed by Liu, Yang, and Li consists of three main steps: an encoding procedure, an improved quantum multiparty computation for LCM, and a decoding procedure. The encoding procedure maps the elements of each secret set \(S_{i}\) (for all \(1\leq i\leq n\)) to prime numbers and hence encodes the set \(S_{i}\) as the product of the prime images of all its elements. After the encoding procedure, the MPQC protocol for LCM (based on an improved QPA) is performed to find the LCM of all the encoded \(S_{i}\). Finally, the decoding procedure is done by (an improved) Shor's algorithm to recover the union from the prime factors of the LCM obtained in the previous step. The computation and communication complexities of the protocol are \(\mathcal{O}(n^{3}m^{3}k^{3}\log(nmk))\) and \(\mathcal{O}(n^{2}mk)\) respectively, where \(k\) is an upper bound on the cardinalities of the secret inputs \(S_{i}\). The use of an improved QPA increases the success probability of the LCM protocol to more than \(99\%\) and hence eliminates the repetitions required by Li-Liu's protocol. However, it is still interesting to have a deterministic protocol for the LCM that produces a correct output with certainty. 
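The encode-lcm-decode pipeline can be mimicked classically for intuition; the tiny fixed universe, the classical lcm, and the trial-division factoring below stand in for the quantum subprotocols (a sketch of the idea, not the protocol itself):

```python
from math import gcd

PRIME_OF = {1: 2, 2: 3, 3: 5, 4: 7}             # injective prime encoding of a toy universe
ELEMENT_OF = {p: e for e, p in PRIME_OF.items()}

def encode(s):
    """Encode a set as the product of the prime images of its elements."""
    prod = 1
    for e in s:
        prod *= PRIME_OF[e]
    return prod

def prime_factors(n):
    """Trial division, standing in for Shor's factoring in the decoding step."""
    out, p = set(), 2
    while n > 1:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    return out

def private_set_union(sets):
    acc = 1
    for s in sets:                              # lcm of the encodings,
        enc = encode(s)                         # standing in for the LCM protocol
        acc = acc * enc // gcd(acc, enc)
    return {ELEMENT_OF[p] for p in prime_factors(acc)}
```

For instance, the sets {1, 2} and {2, 3} encode as 6 and 15, whose lcm 30 factors as {2, 3, 5} and decodes back to the union {1, 2, 3}.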
## 3 Exact quantum algorithms

Shor's quantum algorithm [15] can determine the order (period) of group elements efficiently, and it serves as the main tool for factoring integers. However, Shor's algorithm is polynomial-time only in the expected sense, which means it may fail with a small probability and in the unlucky case may take a very long time to succeed, or even never terminate. The same holds for Simon's algorithm [24]. However, Brassard and Hoyer, in [21], came up with an exact quantum polynomial-time algorithm for Simon's problem. The Brassard-Hoyer algorithm utilizes a modified version of Grover's technique in [25] to derandomize Simon's algorithm. Specifically, they propose a method that, assuming we can construct a superposition in which the total squared amplitude of the "desired" constituents (intuitively, the probability of success) is \(\frac{1}{2}\), boosts this success probability to \(1\). The existence of exact quantum replacements for bounded-error quantum algorithms is a natural question, as it is analogous to derandomizing probabilistic classical algorithms. Besides, some of the earliest quantum algorithms that demonstrate the power of quantum computers, such as the Deutsch-Jozsa procedure [26] and the Bernstein-Vazirani algorithm [27], are exact. It is a difficult open question whether Shor's factoring algorithm can be derandomized. In [28], Mosca and Zalka successfully derandomized Shor's algorithm for the discrete logarithm problem in a cyclic group of known order. All previous exact quantum algorithms are uniform, which means the circuits for the algorithms can be classically computed in time polynomial in the logarithm of the inputs; see [29] for the details of uniform quantum circuits. Here we consider the question of whether Shor's period-finding algorithm can be derandomized in the uniform computational model, assuming some additional knowledge. 
Note that we use the terms order and period interchangeably (using the term order when we talk about group elements and the term period for general functions). Since knowing a multiple of the order of \(\mathbb{Z}_{m}^{*}\) would allow factoring \(m\) in randomized classical polynomial time, finding orders of group elements with a known multiple of the order is not necessarily as hard as factoring, so a multiple of the period is a good candidate for such help. An important example where this help is available is the case of computing multiplicative orders (and testing primitivity) of elements of finite fields. Besides, this can be very useful for some cryptographic protocols; see Section 4.1.

### Amplitude amplification

Amplitude amplification is a common technique used to boost the success probability of quantum algorithms. The basic idea is to view the final state of a quantum algorithm (before performing a measurement) as a state living in the plane spanned by the good space (generated by all the desired outputs) and the bad space, and then to rotate it toward the good space. Thus, amplitude amplification can be regarded as a generalization of Grover's search algorithm [25]. We present a brief review of the general amplitude amplification discussed by Brassard, Hoyer, and Tapp in [30]. Given an algorithm \(\mathcal{A}\) using no measurement, amplitude amplification is a method to boost the success probability of \(\mathcal{A}\). On initial input \(|0\rangle\), the algorithm \(\mathcal{A}\) returns a pure superposition \(\mathcal{A}|0\rangle=\sum_{i\in I}|i\rangle|\Gamma_{i}\rangle\) for some index set \(I\subset\mathbb{Z}\). We consider a Boolean function \(\chi:I\rightarrow\{0,1\}\) that separates the desired outcome states (all states \(|i\rangle|\Gamma_{i}\rangle\) with \(\chi(i)=1\)) from the unwanted states (all states \(|i\rangle|\Gamma_{i}\rangle\) with \(\chi(i)=0\)) as follows. 
Let \(A=\{i\in I\mid\chi(i)=1\}\) and \(B=\{i\in I\mid\chi(i)=0\}\). We write \(\mathcal{A}|0\rangle=|\Gamma_{a}\rangle+|\Gamma_{b}\rangle\), where \[|\Gamma_{a}\rangle=\sum_{i\in A}|i\rangle|\Gamma_{i}\rangle\text{ and }|\Gamma_{b}\rangle=\sum_{i\in B}|i\rangle|\Gamma_{i}\rangle.\] Hence the success probability of the algorithm \(\mathcal{A}\) is \(a=\langle\Gamma_{a}|\Gamma_{a}\rangle=\||\Gamma_{a}\rangle\|^{2}\). The amplitude amplification operator for the algorithm \(\mathcal{A}\) is defined as \[\mathcal{Q}(\mathcal{A},\chi,\phi,\varphi)=-\mathcal{A}S_{0}^{\phi}\mathcal{A}^{-1}S_{\chi}^{\varphi}, \tag{1}\] where \(S_{\chi}^{\varphi}\) and \(S_{0}^{\phi}\) are phase-changing operators defined by \[S_{\chi}^{\varphi}|i\rangle|\Gamma_{i}\rangle=\left\{\begin{array}{ll}\varphi|i\rangle|\Gamma_{i}\rangle&\text{ if }\chi(i)=1\\ |i\rangle|\Gamma_{i}\rangle&\text{ otherwise,}\end{array}\right.\quad\text{ and }\quad S_{0}^{\phi}|i\rangle|\Gamma_{i}\rangle=\left\{\begin{array}{ll}\phi|i\rangle|\Gamma_{i}\rangle&\text{ if }i=0\\ |i\rangle|\Gamma_{i}\rangle&\text{ otherwise,}\end{array}\right.\] where \(\phi\) and \(\varphi\) are complex numbers of unit length. The operator \(\mathcal{Q}\) is a generalization of the Grover iteration applied in his quantum search algorithm [25]. Moreover, by setting \(\phi=\varphi=-1\), we have for every \(j\geq 0\), \[\mathcal{Q}^{j}\mathcal{A}|0\rangle=k_{j}|\Gamma_{a}\rangle+l_{j}|\Gamma_{b}\rangle\] where \[k_{j}=\frac{1}{\sqrt{a}}\sin((2j+1)\theta)\quad\text{ and }\quad l_{j}=\frac{1}{\sqrt{1-a}}\cos((2j+1)\theta),\] and \(0\leq\theta\leq\pi/2\) is defined so that \(\sin^{2}\theta=a=\||\Gamma_{a}\rangle\|^{2}\). A natural question to ask is whether it is possible to boost the success probability to certainty. It turns out that there are positive answers to this question. 
In [21], Brassard and Hoyer present an optimal value for the parameters \(\phi\) and \(\varphi\), namely \(\phi=\varphi=\sqrt{-1}\), such that whenever the success probability of an algorithm \(\mathcal{A}\) is \(\frac{1}{2}\), one application of the amplitude amplification operator \(\mathcal{Q}\) boosts the success probability to \(1\). This is the approach that Brassard and Hoyer use to derandomize Simon's algorithm. Another positive answer is presented in [28] by Mosca and Zalka. They use one application of \(\mathcal{Q}\) with parameters \(\phi=\varphi=-1\) to increase the success probability of an algorithm \(\mathcal{A}\) from \(\frac{1}{4}\) to \(1\). They use this variant of amplitude amplification to present an exact quantum Fourier transform and to derandomize Shor's quantum algorithm for the discrete logarithm over groups of known orders. One application of the exact quantum Fourier transform proposed by Mosca and Zalka requires three applications of the usual quantum Fourier transform. As one may notice from the previous derandomizations, such as those of Simon's algorithm and Shor's discrete logarithm algorithm, it is knowledge of the success probability of the algorithm that makes the derandomization possible. Therefore, in Section 3.2, we show that a multiple of the unknown order is sufficient to adjust the success probability to \(\frac{1}{2}\). Hence, this amplitude amplification derandomizes Shor's order-finding algorithm when a multiple of the order is known.

### Exact quantum period-finding algorithm

The problem we consider is the following: given a function \(f\) with the promise that there exists a period \(r\) such that \(f(x)=f(y)\) if and only if \(x=y\bmod r\), and given a multiple \(N\) of the unknown period \(r\), determine the period \(r\). The first part of the algorithm is the standard Fourier sampling. We use here an exact version based on the exact quantum Fourier transform of Mosca and Zalka [28]. 
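The exact boosts mentioned above are pure rotation arithmetic and can be checked numerically: with \(\sin^{2}\theta=a\), \(j\) iterations with phases \(\phi=\varphi=-1\) take the success probability to \(\sin^{2}((2j+1)\theta)\) (a sketch of the arithmetic only, not a quantum simulation):

```python
from math import asin, sin, sqrt

def boosted(a, j):
    """Success probability after j Grover-type iterations with phases -1,
    starting from success probability a (so sin^2(theta) = a)."""
    theta = asin(sqrt(a))
    return sin((2 * j + 1) * theta) ** 2

# Mosca-Zalka: a = 1/4 reaches certainty after a single iteration ...
assert abs(boosted(0.25, 1) - 1.0) < 1e-12

# ... while plain -1 phases leave a = 1/2 unchanged, which is why the
# Brassard-Hoyer a = 1/2 case needs the complex phases phi = varphi = i.
assert abs(boosted(0.5, 1) - 0.5) < 1e-12
```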
The standard Fourier sampling procedure maps \(|0\rangle|0\rangle\) to \(\sum_{k=0}^{m-1}|k\rangle|\Gamma_{k}\rangle\), where \(|\Gamma_{k}\rangle=\frac{1}{m}\sum_{j=0}^{m-1}\omega^{kj}|f(j)\rangle\) and \(\omega=e^{2\pi i/m}\). Write \(j\) as \(j_{0}+rj_{1}\) (\(0\leq j_{0}\leq r-1\)). Then \[|\Gamma_{k}\rangle=\left\{\begin{array}{ll}1/r\sum_{j_{0}=0}^{r-1}\omega^{kj_{0}}|f(j_{0})\rangle&\mbox{ if $m/r$ divides $k$;}\\ 0&\mbox{otherwise,}\end{array}\right.\] whence \[|\Gamma_{k}|^{2}=\left\{\begin{array}{ll}1/r&\mbox{ if $m/r$ divides $k$;}\\ 0&\mbox{otherwise.}\end{array}\right.\] In words, we have terms with \(|k\rangle\) in the first register only for those \(k\) which are multiples of \(m/r\). Initially, any \(k\) which is nonzero modulo \(m\) is useful, because \(\frac{m}{\gcd(k,m)}\) is then a divisor of \(r\) greater than \(1\). We have \(\sum_{k\neq 0}|\Gamma_{k}|^{2}=1-\frac{1}{r}\). Fortunately, if we already know a divisor \(d\) of \(r\), then the values \(k\) that give us new information are the non-multiples of \(\frac{m}{d}\). We have \(\sum_{dk\neq 0}|\Gamma_{k}|^{2}=1-\frac{d}{r}\). The point is that we do not know \(r\). The second part of the algorithm is based on the discussion at the end of the previous paragraph. We maintain a divisor \(d\) of \(r\) and construct iterations of a procedure that increases \(d\). Initially \(d:=1\). As long as \(d<r\), we find \(k\) such that \(dk\bmod m\neq 0\). Then we replace \(d\) with \(\frac{m}{\gcd(m,k)}\), since this is another divisor of \(r\) greater than \(d\). Hence, \(d\) keeps increasing as long as \(d<r\), and the procedure stops immediately when \(d=r\), since \(dk=0\bmod m\) for all \(k\) if and only if \(d\) is a multiple of \(r\). In order to construct an exact algorithm for the iteration procedure above, we need to adjust the success probability of each iteration to \(\frac{1}{2}\), as follows. Assume \(d<r\). Let \(rep(dk)\) be the smallest positive integer representative of \(dk\bmod m\). 
In this case, \(rep(dk)=d\frac{m}{r}\) for all such \(k\). Then \(rep(dk)\) divides \(m\), and the \(\frac{m}{rep(dk)}-1=\frac{r}{d}-1\) positive integers of the form \(td\frac{m}{r}<m\) are the nonzero multiples of \(d\frac{m}{r}\) modulo \(m\). Note that if \(\frac{r}{d}\) is even, then the integers of the form \(td\frac{m}{r}\) with \(m/2\leq td\frac{m}{r}<m\) represent just half of the multiples of \(d\frac{m}{r}\) modulo \(m\). However, if \(\frac{r}{d}\) is odd, we need to add another multiple of \(d\frac{m}{r}\) modulo \(m\), say \(d\frac{m}{r}\) itself, with weight \(\frac{1}{2}\). The problem is that we do not know \(d\frac{m}{r}\). Fortunately, for at least one integer \(0\leq j\leq\log_{2}m\), namely for \(j=\lceil\log_{2}d\frac{m}{r}\rceil\), the interval \((0,2^{j}]\) contains \(d\frac{m}{r}\) and no other multiple of \(d\frac{m}{r}\), since if \(j-1<\log_{2}d\frac{m}{r}\leq j\) then \(d\frac{m}{r}\leq 2^{j}\) and \(2d\frac{m}{r}>2^{j}\). Based on the descriptions above, we summarize the exact algorithm in the following pseudocode. Each round consists of iterations for \(j=-1,\ldots,\lfloor\log_{2}m\rfloor\), instead of starting with index \(j=0\), to cover both the case where \(\frac{r}{d}\) is even and the case where it is odd. The even case is covered at least once, when \(j=-1\), where the interval \((0,2^{j}]\) does not contain any integer, while the odd case is covered at least once, when \(j=\lceil\log_{2}d\frac{m}{r}\rceil\). As in each round before termination the size of \(d\) is increased by at least a factor of \(2\), and the procedure stops immediately when \(d=r\), we need at most \(\lceil\log_{2}r\rceil\) rounds of iterations. The overall number of calls to the exact Fourier transform or its inverse is \(\mathcal{O}(\log m\log r)=\mathcal{O}(\log^{2}m)\).

## 4 Proposed MPQC protocols

### The MPQC least common multiple based on EQPA

The goal is to replace the standard QPA by our EQPA in Liu-Yang-Li's protocol. 
Therefore, we want to make sure that the requirement of EQPA is fulfilled, namely that we provide a multiple of the least common multiple, i.e., of the period of the common function \(f(j)=f_{0}(j)||\ldots||f_{n-1}(j)\). In step (1), each party \(P_{i}\) chooses a random \(q\) such that \(x_{i}q\sim 2^{m}\) and sends \(y_{i}=x_{i}q\) to \(P_{0}\). Therefore, \(P_{0}\) obtains a multiple of the least common multiple by computing \(k=\prod_{i=0}^{n-1}y_{i}\), and broadcasts it to all parties. Moreover, each party \(P_{i}\) is equipped with the periodic function \(f_{i}:\mathbb{Z}_{k}\rightarrow\mathbb{Z}_{k}\) defined by \(f_{i}(x)=x\bmod r_{i}\). The rest of the protocol follows the original Liu-Yang-Li protocol, but using EQPA instead of the standard QPA. We give the summary of the protocol in Algorithm 3.

```
1: Initialize: \(d\gets 1\), \(Found\gets 1\);
2: while \(Found>0\) do
3:   for \(j=-1,\ldots,\lfloor\log_{2}m\rfloor\) do
4:     \(\chi_{j}(k,b)=\left\{\begin{array}{ll}1&\mbox{if \(rep(dk)\geq\frac{m}{2}\), or \(b=1\) and \(0<rep(dk)\leq 2^{j}\);}\\ 0&\mbox{otherwise;}\end{array}\right.\)
5:     \(\mathcal{U}_{j}:|0\rangle|0\rangle|0\rangle\mapsto|\psi_{j}\rangle=\frac{1}{\sqrt{2}}\sum|k\rangle|\Gamma_{k}\rangle|b\rangle|\chi_{j}(k,b)\rangle;\)  \(\triangleright\) where \(k\in\{0,1,\ldots,m-1\}\), \(b\in\{0,1\}\).
6:     Apply the amplitude-amplified version of \(\mathcal{U}_{j}\) to obtain \(|\psi^{\prime}_{j}\rangle=\sum c^{\prime}_{\chi_{j}(k,b)}|k\rangle|\Gamma_{k}\rangle|b\rangle|\chi_{j}(k,b)\rangle\).
7:     Look at the \(|k\rangle\)-register;
8:     if \(dk\neq 0\bmod m\) then
9:       \(d\leftarrow\frac{m}{\gcd(m,k)}\);
10:    else
11:      \(Found\gets Found-1\)
12:    end if
13:  end for
14: end while
```
**Algorithm 1** Exact quantum period-finding algorithm
3. applies \(CNOT^{\otimes m}\) on \(h,t\), where \(h\) controls \(t\): \[\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|0\rangle_{t}\mapsto\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|j\rangle_{t};\]
4. prepares a \(k\)-qubit quantum register \(e_{0}\) initialized as \(|0\rangle_{e_{0}}\);
5. applies \(U_{f_{0}}:|j\rangle_{t}|0\rangle_{e_{0}}\mapsto|j\rangle_{t}|f_{0}(j)\rangle_{e_{0}}\) on \(t,e_{0}\): \[\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|j\rangle_{t}|0\rangle_{e_{0}}\mapsto\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|j\rangle_{t}|f_{0}(j)\rangle_{e_{0}};\]
6. sends \(t\) to \(P_{1}\).

5. For \(P_{i}\), \(1\leq i\leq n-1\):
   1. prepares an \(m\)-qubit register \(e_{i}\) initialized as \(|0\rangle_{e_{i}}\);
   2. applies \(U_{f_{i}}:|j\rangle_{t}|0\rangle_{e_{i}}\mapsto|j\rangle_{t}|f_{i}(j)\rangle_{e_{i}}\) on \(t,e_{i}\): \[\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|j\rangle_{t}|f_{0}(j)\rangle_{e_{0}}|f_{1}(j)\rangle_{e_{1}}\ldots|f_{i-1}(j)\rangle_{e_{i-1}}|0\rangle_{e_{i}}\mapsto\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|j\rangle_{t}|f_{0}(j)\rangle_{e_{0}}|f_{1}(j)\rangle_{e_{1}}\ldots|f_{i-1}(j)\rangle_{e_{i-1}}|f_{i}(j)\rangle_{e_{i}};\]
   3. sends \(t\) to \(P_{i+1}\).

6. For \(P_{0}\):
   1. applies \(CNOT^{\otimes m}\) on \(h,t\), where \(h\) controls \(t\): \[\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|j\rangle_{t}|f(j)\rangle_{e}\mapsto\frac{1}{\sqrt{k}}\sum_{j\in[k]}|j\rangle_{h}|0\rangle_{t}|f(j)\rangle_{e},\] where \(f(j)=f_{0}(j)||\ldots||f_{n-1}(j)\), \(e=(e_{0},\ldots,e_{n-1})\);
   2. measures \(t\); if \(t\) is not \(|0\rangle\), then rejects, otherwise continues;
   3. applies EQPA to find the period \(r\) of \(f\);
   4. broadcasts \(r\) to all other parties.

**Correctness proof.** The correctness of the protocol is ensured by EQPA being deterministic and by the fact that the function \(f(x)=(f_{0}(x),\ldots,f_{n-1}(x))\) has period \(r=\mathrm{lcm}_{i=0}^{n-1}(r_{i})\).

**Security analysis.** In the first step, each \(P_{i}\) sends \(y_{i}=x_{i}q\) to \(P_{0}\). However, \(P_{0}\) cannot gain any useful information, as \(y_{i}\) is the product of the secret input \(x_{i}\) and a random element \(q\). Moreover, following the security analysis of Li-Liu's protocol [20], the protocol is secure against the three possible attacks (_direct measurement attack, pre-period-finding attack, post-period-finding attack_) in the semihonest model. In the malicious model, our protocol seems more secure than Li-Liu's protocol because no repetition of the protocol is required.

**Complexity analysis.** Note that the parties share a multiple \(k\) of each of their secrets, so \(k\) is a multiple of the LCM.
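The correctness claim — that the concatenated function \(f(x)=(f_{0}(x),\ldots,f_{n-1}(x))\) with \(f_{i}(x)=x\bmod r_{i}\) has period \(\mathrm{lcm}(r_{i})\) — can be checked by brute force on small instances (illustrative Python, not part of the protocol):

```python
import math

def period(f, bound):
    # smallest r > 0 with f(x + r) == f(x) for all x (checked up to bound)
    return next(r for r in range(1, bound + 1)
                if all(f(x + r) == f(x) for x in range(bound)))

secrets = [4, 6, 9]  # the parties' secret periods r_i
f = lambda x: tuple(x % r for r in secrets)  # f = f_0 || ... || f_{n-1}
assert period(f, math.prod(secrets)) == math.lcm(*secrets)  # lcm = 36
```

Any shift fixing every coordinate \(x\bmod r_{i}\) must be a multiple of each \(r_{i}\), and the smallest such shift is their least common multiple.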
Moreover, since \(k=\mathcal{O}(2^{mn})\), the most time-consuming step of the protocol is the EQPA procedure, which has \(\mathcal{O}(\log^{4}k)=\mathcal{O}(m^{4}n^{4})\) computational complexity. Therefore, the total computational and communication complexities are \(\mathcal{O}(m^{4}n^{4})\) and \(\mathcal{O}(m^{2}n)\), respectively.

### The proposed MPQC for GCD and private set intersection

#### 4.2.1 Multiparty quantum computation for GCD

Assume that there are \(n\) parties \(P_{0},\dots,P_{n-1}\), where each party \(P_{i}\) has a secret integer \(r_{i}\in\{0,1,\dots,2^{m}-1\}\). All \(n\) parties want to jointly compute \(\gcd(r_{0},\dots,r_{n-1})\) without revealing their respective secrets. Furthermore, assume that the communication is done via an authenticated quantum channel.

* For \(P_{i}\), \(0\leq i\leq n-1\): applies Shor's factoring algorithm to obtain the set \(R_{i}\) of all prime factors of \(r_{i}\).
* All parties jointly perform the private set union protocol to get the set \(R=\bigcup_{i=0}^{n-1}R_{i}\).
* For each prime \(p\in R\), do the following iteration: using the multiplication protocol, all parties jointly vote on whether \(p,p^{2},\dots\) divide their secret inputs, in order to get the largest power \(p^{k}\) that simultaneously divides all their secret inputs. Thus, the GCD is obtained as the product over all elements of \(R\) of these largest prime powers.

**Correctness proof.** In the first step, each party performs Shor's factoring on its input to get the set of all prime factors of \(r_{i}\). Therefore, each party can easily verify that it holds a correct set of prime factors of its input before applying the private set union protocol in the next step. Since the correctness of the second step follows directly from [14], it is left to show that the last step indeed gives the gcd of the secret inputs \(r_{i}\).
The last step indeed gives a correct output by the characterization of the greatest common divisor via prime factorizations,

\[\gcd(p_{1}^{a_{1}}\cdots p_{m}^{a_{m}},\,p_{1}^{b_{1}}\cdots p_{m}^{b_{m}})=p_{1}^{\min\{a_{1},b_{1}\}}\cdots p_{m}^{\min\{a_{m},b_{m}\}},\]

which extends to computing the GCD of any \(n\) numbers through their prime factorizations. Note that the success probability of [14] is greater than \(99\%\) but still probabilistic; using the EQPA in the subroutine guarantees the output with certainty.

**Security analysis.** Since no meaningful information about the secret inputs can be gained from the set \(R\), the security of the protocol follows directly from the security of the private set union protocol [14]. Since the private set union protocol is unconditionally secure in the semihonest model, the same security holds for the proposed multiparty quantum computation for GCD.

**Complexity analysis.** The use of Shor's factoring in the first step of the protocol costs \(\mathcal{O}(nm^{2}\log m)\) computational complexity. On the other hand, the computational and communication complexities of the private set union are \(\mathcal{O}(n^{3}m^{3}k^{3}\log(nmk))\) and \(\mathcal{O}(n^{2}mk)\) respectively, where \(k\) is an upper bound on the cardinality of the sets \(R_{i}\). Thus, the second step has computational complexity \(\mathcal{O}(n^{3}m^{6}\log(nm^{2}))\) and communication complexity \(\mathcal{O}(n^{2}m^{2})\). As for the last step, there are at most \(m\) iterations of the voting procedure, so the computational and communication complexities of the last step are \(\mathcal{O}(nm^{3})\) and \(\mathcal{O}(nm^{2})\) respectively. Hence the total computational and communication complexities are \(\mathcal{O}(n^{3}m^{6}\log(nm^{2}))\) and \(\mathcal{O}(n^{2}m^{2})\) respectively.
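The classical logic of the three steps — factoring, set union, and prime-power voting — can be sketched in a few lines. This is a toy with trial division standing in for Shor's factoring and plain set union standing in for the PSU protocol; all names are ours:

```python
import math

def prime_factors(n):
    # trial division stands in for Shor's factoring in this sketch
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def gcd_via_votes(secrets):
    # private set union of the parties' prime-factor sets R_i
    R = set().union(*(prime_factors(r) for r in secrets))
    g = 1
    for p in R:
        pk = 1  # parties "vote" whether p, p^2, ... divide all inputs
        while all(r % (pk * p) == 0 for r in secrets):
            pk *= p
        g *= pk
    return g

secrets = [84, 126, 210]
assert gcd_via_votes(secrets) == math.gcd(*secrets)  # both give 42
```

The inner loop implements exactly the voting step: the largest power \(p^{k}\) dividing all inputs contributes \(p^{k}\) to the product, which by the min-of-exponents characterization is the GCD.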
On the other hand, using the EQPA to obtain a deterministic output in the subroutine of the PSU protocol incurs extra computational cost, raising the total computational complexity from \(\mathcal{O}(n^{3}m^{6}\log(nm^{2}))\) to \(\mathcal{O}(n^{4}m^{6}\log(nm^{2}))\).

#### 4.2.2 Multiparty quantum private set intersection

Assume that there are \(n\) parties \(P_{1},\ldots,P_{n}\), where each party \(P_{i}\) has a secret set \(S_{i}\subseteq U\), where \(U\) is the complete set of cardinality \(N\) with \(2^{m-1}<N\leq 2^{m}\). All \(n\) parties want to jointly compute \(\bigcap S_{i}\) without revealing their respective secrets. Furthermore, assume that the communication is done via an authenticated quantum channel. The protocol for private set intersection straightforwardly follows the protocol for private set union by Liu, Yang, and Li. We give the key steps of the protocol as follows:

* **Encoding:** each party \(P_{i}\) transforms the elements of its secret set \(S_{i}\) into prime numbers and encodes \(S_{i}\) as the product of the prime representations of its elements.
* **GCD protocol:** apply the multiparty quantum computation for GCD to compute the greatest common divisor of all the encoded \(S_{i}\).
* **Decoding:** use the improved Shor's factoring algorithm in [14] to factor the GCD obtained in the previous step and recover the intersection of all \(S_{i}\) from the prime factors of the GCD.

**Correctness proof.** The correctness of the protocol follows directly from the correctness of the GCD protocol and the fact that the prime factors of the greatest common divisor are exactly the common prime factors of all the encoded \(S_{i}\). Hence decoding the prime factors gives the elements of the intersection of all the sets \(S_{i}\). As with the GCD protocol, the original version of the PSI protocol has a success probability greater than 99%, and certainty can be obtained by using the EQPA in the subroutines.
**Security analysis.** The security of the protocol likewise follows directly from the security of the GCD protocol, and thus from the security of the private set union [14], which is unconditionally secure in the semihonest model.

**Complexity analysis.** The dominant computational cost comes from the GCD protocol, which has the same complexity as the PSU protocol, namely \(\mathcal{O}(n^{3}m^{6}\log(nm^{2}))\). To obtain the correct output with certainty, we can use the EQPA instead, which increases the total computational complexity to \(\mathcal{O}(n^{4}m^{6}\log(nm^{2}))\), while the communication complexity remains \(\mathcal{O}(n^{2}m^{2})\).
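The encode/GCD/decode pipeline can be illustrated classically as follows. This is a toy in which `math.gcd` stands in for the quantum GCD protocol and the prime table is an illustrative encoding of a small universe:

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]  # prime representing element x of U

def encode(S):
    # Encoding: a set becomes the product of its elements' primes
    return math.prod(PRIMES[x] for x in S)

def psi(sets):
    # GCD protocol (stand-in), then Decoding by factoring the gcd:
    g = math.gcd(*(encode(S) for S in sets))
    return {x for x, p in enumerate(PRIMES) if g % p == 0}

sets = [{0, 2, 3, 5}, {0, 3, 4, 5}, {0, 1, 3, 5}]
assert psi(sets) == {0, 3, 5}  # exactly the common elements
```

Because the encodings are squarefree, a prime divides the GCD exactly when the corresponding element lies in every set, which is why decoding recovers the intersection.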
2306.12630
On sets of rational functions which locally represent all of $\mathbb{Q}$
We investigate finite sets of rational functions $\{ f_{1},f_{2}, \dots, f_{r} \}$ defined over some number field $K$ satisfying that any $t_{0} \in K$ is a $K_{p}$-value of one of the functions $f_{i}$ for almost all primes $p$ of $K$. We give strong necessary conditions on the shape of functions appearing in a minimal set with this property, as well as numerous concrete examples showing that these necessary conditions are in a way also close to sufficient. We connect the problem to well-studied concepts such as intersective polynomials and arithmetically exceptional functions.
Benjamin Klahn, Joachim König
2023-06-22T01:44:13Z
http://arxiv.org/abs/2306.12630v2
# On sets of rational functions which locally represent all of \(\mathbb{Q}\) ###### Abstract. We investigate finite sets of rational functions \(\{f_{1},f_{2},\ldots,f_{r}\}\) defined over some number field \(K\) satisfying that any \(t_{0}\in K\) is a \(K_{p}\)-value of one of the functions \(f_{i}\) for almost all primes \(p\) of \(K\). We give strong necessary conditions on the shape of functions appearing in a minimal set with this property, as well as numerous concrete examples showing that these necessary conditions are in a way also close to sufficient. We connect the problem to well-studied concepts such as intersective polynomials and arithmetically exceptional functions. ## 1. Introduction and main results Given a rational number \(\alpha\) and a sufficiently large prime \(p\), one of the numbers \(\alpha\), \(\alpha-1\) and \(\frac{\alpha-1}{\alpha}\) is a square in \(\mathbb{Q}_{p}\). Indeed, this is a direct consequence of the multiplicativity of the Legendre symbol. Moreover, this fact translates directly into a fact about rational functions: letting \(f_{1}(X)=X^{2}\), \(f_{2}(X)=X^{2}+1\) and \(f_{3}(X)=\frac{1}{1-X^{2}}\in\mathbb{Q}(X)\), for every \(\alpha\in\mathbb{Q}\) and every sufficiently large prime \(p\) (depending on \(\alpha\)), there exists \(\beta\in\mathbb{Q}_{p}\) such that one of \(f_{1}(\beta)\), \(f_{2}(\beta)\) and \(f_{3}(\beta)\) equals \(\alpha\). In this paper, we will investigate this curious property of a given finite set of rational functions in greater generality. To this end, we first make some definitions. For a field \(F\) and a rational function \(f\in F(X)\), say that \(\alpha\in F\) is an \(F\)-value of \(f\) if there exists \(\beta\in F\cup\{\infty\}\) such that \(f(\beta)=\alpha\). **Definition 1.1**.: Let \(K\) be a number field and \(f_{1},\ldots,f_{r}\in K(X)\) rational functions. 
We say that \(t_{0}\in K\) is a pseudo-value of \(\{f_{1},\ldots,f_{r}\}\) if there exists a finite set \(S\) of primes of \(K\) such that for all primes \(p\) of \(K\) outside of \(S\), there exists some \(i\in\{1,\ldots,r\}\) such that \(t_{0}\) is a \(K_{p}\)-value of \(f_{i}\). If \(t_{0}\) is a pseudo-value of \(\{f_{1},\ldots,f_{r}\}\), but not a \(K\)-value of any \(f_{i}\) (\(i=1,\ldots,r\)), we say that \(t_{0}\) is a fake value of \(\{f_{1},\ldots,f_{r}\}\). Say that \(f_{1},\ldots,f_{r}\) locally represent \(K\), if all elements of \(K\) are pseudo-values of \(f_{1},\ldots,f_{r}\). Finally, call such \(\{f_{1},\ldots,f_{r}\}\) a minimal locally representing set if no proper subset locally represents \(K\). Of course, a single rational function of degree \(1\) locally represents its field of definition, since it even (globally) attains any value. In contrast to this, the sets of \(K\)-values of finitely many rational functions of degree \(>1\) over a number field \(K\) can never cover all of \(K\); this allows locally representing sets to be interpreted as counterexamples to a local-global principle. The idea of locally representing sets can be traced back to the work of Fried, notably [5, (4.5) and Example 7], which, within a more general investigation of mod-\(p\) value sets, briefly mentions a very similar (and after all equivalent) property and gives a class of examples. Note also that for \(r=1\), the definition of fake values coincides with the one in [3]. That paper also identifies situations in which the set of fake values of a rational function (or a more general cover) is infinite. The property of sets of (non-linear) rational functions to locally represent all of \(K\) may be seen as the strongest possible form of this phenomenon; note that this can no longer happen with \(r=1\), see Corollary 2.5b). We propose the following general question.

**Question 1**.: Let \(K\) be a number field.
What are the minimal locally representing sets \(\{f_{1},\dots,f_{r}\}\) of geometrically indecomposable rational functions \(f_{i}\in K(X)\)?

Here, \(f_{i}\) is called geometrically indecomposable if it cannot be written as a composition of two rational functions of degree \(>1\) over \(\overline{K}\). Of course Question 1 may also be asked without the indecomposability assumption. It is nevertheless a reasonable assumption in this context, as explained in Section 2.2 (see in particular Lemma 2.2 and Corollary 2.4). While we do not attempt to answer Question 1 in full, we will reasonably classify the possible shapes of the individual functions \(f_{i}\). After having seen an easy example of how several rational functions combined may "work together" to locally represent all of \(\mathbb{Q}\), it may come as a surprise that "most" rational functions are in fact useless for creating such examples. This is the content of our first main result, Theorem 1.1. To state our results we recall some standard notions for rational functions. For a rational function \(f\in K(X)\) defined over some number field \(K\) write \(f=g/h\) where \(g,h\in K[X]\) are coprime. Then we denote by \(\operatorname{Split}(f(X)-t|K(t)):=\operatorname{Split}(g(X)-th(X)|K(t))\) the _splitting field_ of \(f(X)-t\). Furthermore, the roots of \(g(X)-th(X)\) are referred to as the roots of \(f(X)-t\).

**Theorem 1.1**.: _Let \(K\) be a number field, and let \(f_{1},\dots,f_{r}\in K(X)\) be a minimal collection of geometrically indecomposable rational functions locally representing \(K\). Then there exists an absolute bound \(N\in\mathbb{N}\) such that all \(f_{i}\) fulfill one of the following:_

1. \(\deg(f_{i})\leq N\)_, or_
2.
\(\operatorname{\mathit{Split}}(f_{i}(X)-t/K(t))\) _is of genus_ \(\tilde{g}\leq 1\)_._\({}^{1}\)

Footnote 1: In particular, the rational functions in the last case are explicitly classified and all have solvable monodromy group.

The proof of Theorem 1.1 rests on a combination of group-theoretical arguments, based on the concept of "intersective polynomials", with strong classification results for monodromy groups of indecomposable rational functions, initiated by the "Guralnick-Thompson conjecture" (see [9]) on composition factors of genus zero monodromy groups. In particular, Theorem 1.1 implies that "generic" rational functions, namely those with monodromy group \(S_{n}\), cannot be part of a set as in Question 1, except in certain small degrees. For this "generic" case, we make the result more explicit.

**Theorem 1.2**.: _Let \(K\) be a number field, and let \(f_{1},\dots,f_{r}\in K(X)\) be rational functions of degree \(d_{1},\dots,d_{r}\) with \(\operatorname{\mathit{Gal}}(f_{i}(X)-t/K(t))=S_{d_{i}}\) or \(A_{d_{i}}\) (\(i=1,\dots,r\)). If \(\{f_{1},\dots,f_{r}\}\) is a minimal locally representing set, then all \(d_{i}\) are in \(\{2,3,4,6\}\)._

In Section 4, we will complement the above theorems by existence results showing that the types of functions not yet excluded in Theorems 1.1 and 1.2 do indeed occur. In particular, the results of that section together with the known classification of functions with splitting field of genus \(\leq 1\) yield the following partial converse to Theorem 1.1.

**Theorem 1.3**.: _Let \(f\in\mathbb{Q}(X)\) be a geometrically indecomposable rational function of degree \(>163\) such that the splitting field of \(f(X)-t\) is of genus \(\leq 1\). Then there exists \(\tilde{f}\in\mathbb{Q}(X)\), linearly equivalent to \(f\) over \(\mathbb{C}\), such that \(\tilde{f}\) is part of a minimal set of rational functions locally representing \(\mathbb{Q}\)._

## 2. Preliminaries
### Rational functions and their monodromy groups

We collect some facts on monodromy of rational functions. Let \(K\) be a field of characteristic \(0\) and \(f(X)=\frac{g(X)}{h(X)}\in K(X)\) a rational function. The (arithmetic) monodromy group of \(f\) is the group \(\text{Mon}(f):=\text{Gal}(g(X)-th(X)/K(t))\). The geometric monodromy group of \(f\) is the Galois group of the same polynomial over \(\overline{K}(t)\); the latter is isomorphic to the group of deck transformations of the Galois closure of the cover \(\mathbb{P}^{1}_{\overline{K}}\to\mathbb{P}^{1}_{\overline{K}}\) given by \(x\mapsto f(x)\). This cover has finitely many branch points \(t_{1},\dots,t_{r}\in\mathbb{P}^{1}_{\overline{K}}\), which are exactly the critical values of the function \(f\). Associated to each branch point \(t_{i}\) is a conjugacy class of elements \(\sigma_{i}\) of the geometric monodromy group \(G\), and the cycle lengths of \(\sigma_{i}\) (acting on the roots of \(f(X)-t\)) are the ramification indices of places extending \(t\mapsto t_{i}\) in a root field of \(f-t\), which in turn are simply the multiplicities of points in \(f^{-1}(t_{i})\subset\mathbb{P}^{1}_{\overline{K}}\). The tuple \((\sigma_{1},\dots,\sigma_{r})\) is a "genus-zero tuple" in \(G\), which via the Riemann-Hurwitz formula translates to an explicit condition on the total number of cycles of \(\sigma_{1},\dots,\sigma_{r}\). A rational function \(f\in K(X)\) is called indecomposable (resp., geometrically indecomposable) if it cannot be written as a composition \(f=f_{1}\circ f_{2}\) of two functions \(f_{1},f_{2}\in K(X)\) (resp. \(\in\overline{K}(X)\)) of degree \(>1\). In group-theoretical terms, indecomposability (resp., geometric indecomposability) corresponds to primitivity of the monodromy group (resp., of the geometric monodromy group).
While being the Galois group of an intersective polynomial as in the previous section is not a restrictive condition, being the monodromy group of a rational function is quite restrictive, and it is the combination of these two that will be relevant for us. As mentioned in the introduction, we will use the classification of monodromy groups of geometrically indecomposable covers of bounded genus, conjectured in [8] and proven (in greater generality) in [14], to prove Theorem 1.1. For the case of rational functions the result reads as follows:\({}^{2}\)

Footnote 2: Note that [14] gives results in terms of the geometric monodromy group. Since the latter is a transitive normal subgroup of the arithmetic monodromy group, cases 1) to 3) follow readily in this setup as well; for case 4), we refer to Section 4.4.

**Theorem 2.1** (Neftin, Zieve [14], Theorem 1.1).: _Let \(K\subseteq\mathbb{C}\), and let \(f(X)\in K(X)\) be a geometrically indecomposable rational function of degree \(n\). There is an absolute constant \(N\in\mathbb{N}\) such that if \(n\geq N\), then one of the following holds:_

1. \(\text{Mon}(f)=A_{n}\) _or_ \(S_{n}\) _in the natural degree_ \(n\) _action,_
2. \(\text{Mon}(f)=A_{\ell}\) _or_ \(S_{\ell}\)_, where_ \(n=\ell(\ell-1)/2\)_, with the action on the roots of_ \(f(X)-t\) _induced by the action on 2-element subsets of_ \(\{1,2,\dots,\ell\}\)_,_
3. \(A_{\ell}^{2}<\text{Mon}(f)\leq S_{\ell}^{2}\rtimes C_{2}\)_, where_ \(n=\ell^{2}\) _and_ \(C_{2}\) _permutes the two copies of_ \(S_{\ell}\)_,_
4. \(n=p^{i}\) _for a prime_ \(p\) _and integer_ \(i\leq 2\)_, and_ \(\text{Mon}(f)\leq AGL_{i}(p)\) _is solvable.\({}^{3}\)_

Footnote 3: In this case, the splitting field of \(f(X)-t\) is additionally of genus \(\tilde{g}\leq 1\).
The statement of Theorem 1.1 then boils down to showing that if \(n=\text{deg}(f)\) is sufficiently large and \(f\) falls into case (1)-(3) in Theorem 2.1, then \(f\) cannot appear in a minimal locally representing set \(\{f_{1},f_{2},\dots,f_{r}\}\).

### First observations on locally representing sets

We begin with some observations on the behavior of locally representing sets of rational functions under composition, which follow straight from the definitions.

**Lemma 2.2**.: _Assume \(f_{1},\dots,f_{r}\in K(X)\) locally represent \(K\). Then the following hold:_

1. _If_ \(\lambda\) _as well as_ \(\mu_{1},\ldots,\mu_{r}\in K(X)\) _are fractional linear transformations, then_ \(\lambda\circ f_{1}\circ\mu_{1},\lambda\circ f_{2}\circ\mu_{2},\ldots,\lambda\circ f_{r}\circ\mu_{r}\) _locally represent_ \(K\)_._
2. _If_ \(f_{i}=g_{i}\circ h_{i}\) _are decompositions of_ \(f_{i}\) _into rational functions_ \(g_{i},h_{i}\in K(X)\)_, then_ \(g_{1},\ldots,g_{r}\) _locally represent_ \(K\)_._

The following characterization of locally representing sets of rational functions will be crucial for the further treatment. The equivalences 1) to 3) demonstrate the extent to which locally representing and non-representing sets of functions differ from each other, whereas 4) and 5) translate the arithmetic property into a group-theoretical one.

**Lemma 2.3**.: _Let \(K\) be a number field and \(f_{1},\ldots,f_{r}\in K(X)\) be rational functions of degree \(>1\), and set \(G:=\text{Gal}(\prod_{i=1}^{r}(f_{i}(X)-t)/K(t))\). The following are equivalent:_

1. _The set of fake values of_ \(\{f_{1},\ldots,f_{r}\}\) _is not a thin subset of_ \(K\) _(in the sense of Serre)._\({}^{4}\)

Footnote 4: Here, a subset \(S\subseteq\mathbb{P}^{1}_{K}\) is called thin, if it is contained in the union of a finite set and finitely many value sets of nontrivial branched coverings \(\varphi_{i}:X_{i}\to\mathbb{P}^{1}_{K}\) of curves over \(K\).

2.
_All elements of_ \(K\) _are pseudo-values of_ \(\{f_{1},\ldots,f_{r}\}\)_, i.e., the set_ \(\{f_{1},\ldots,f_{r}\}\) _locally represents_ \(K\)_._ 3. _For every finite extension_ \(F\supseteq K\)_, the set_ \(\{f_{1},\ldots,f_{r}\}\) _locally represents_ \(F\)_._ 4. _Every element of_ \(G\) _fixes a point in its action on the roots._ 5. _The union of_ \(G\)_-conjugates of_ \(U_{1},\ldots,U_{r}\) _is all of_ \(G\)_, where_ \(U_{i}\) _denotes the stabilizer of a root of_ \(f_{i}(X)-t\) _in_ \(G\)_._ Proof.: Of course, 3) implies 2). Furthermore, since the union of the sets of \(K\)-values of finitely many functions of degree \(>1\) is thin by definition, 2) implies 1). Now, assume that 4) does not hold, and choose any \(t_{0}\in K\) such that \(\text{Gal}(\prod_{i=1}^{r}(f_{i}(X)-t_{0})/K)=G\). The set of these \(t_{0}\) is the complement of a thin subset by Hilbert's irreducibility theorem. Denote the respective \(G\)-extension of \(K\) by \(F/K\). Choose a conjugacy class of elements \(x\in G\) fixing no root. There are infinitely many primes unramified in \(F/K\) and possessing this class as the Frobenius class in \(F/K\), and in particular \(f_{i}(X)=t_{0}\) then has no \(K_{p}\)-rational solution for any \(i=1,\ldots,r\). This shows that \(t_{0}\) is not a pseudo-value of \(\{f_{1},\ldots,f_{r}\}\). We have thus derived that 1) implies 4). Next, assume 4) holds, let \(F/K\) be a finite extension, and choose \(t_{0}\in F\), not a critical value of any \(f_{i}(X)\). In this case, \(\text{Gal}(\prod_{i=1}^{r}(f_{i}(X)-t_{0})/F)\) is permutation-isomorphic to a subgroup of \(\text{Gal}(\prod_{i=1}^{r}(f_{i}(X)-t)/F(t))\), which itself is permutation-isomorphic to a subgroup of \(G\). Now let \(p\) be a prime of \(F\) unramified in the splitting field of \(\prod_{i=1}^{r}(f_{i}(X)-t_{0})\). The Frobenius class at \(p\) in this splitting field consists of elements of \(G\), which by assumption have a fixed point. 
This implies that at least one factor \(f_{i}(X)-t_{0}\) has a root in \(F_{p}\), i.e., \(t_{0}\) is an \(F_{p}\)-value of some \(f_{i}\). To derive that 3) holds, we are thus left with considering the critical values \(t_{0}\in F\) of \(f_{1},\ldots,f_{r}\). Let \(t_{0}\) be such a value. We may assume \(\infty\notin f^{-1}(t_{0})\), since otherwise \(t_{0}\) is even an \(F\)-value of \(f_{i}\). Then the numerator of \(f_{i}(X)-t_{0}\) factors as \(\prod_{j=1}^{d_{i}}g_{i,j}(X)^{e_{i,j}}\) for suitable (pairwise coprime) irreducible polynomials \(g_{i,j}(X)\in K[X]\) (\(i=1,\ldots,r\)); here the \(e_{i,j}\) are the ramification indices of places extending \(t\mapsto t_{0}\) in a root field of \(f_{i}(X)-t\), and the roots of a given irreducible \(g_{i,j}\) correspond to all the orbits of the inertia group \(I\) at (a fixed place extending) \(t\mapsto t_{0}\) joined into a common orbit of the decomposition group \(D\) at \(t\mapsto t_{0}\), yielding an identification of the Galois group \(\text{Gal}(\prod_{i=1}^{r}(f_{i}(X)-t_{0})/F)\) with the quotient group \(D/I\). Now let \(p\) be any prime of \(F\) such that the product of all \(g_{i,j}\) remains separable modulo \(p\). Then \(p\) is unramified in the splitting field of \(\prod_{i=1}^{r}(f_{i}(X)-t_{0})\), and its decomposition group at \(p\) is naturally identified with a cyclic subgroup of \(D/I\). Let \(\tau\) be any generator of an extension of this subgroup to a cyclic subgroup of \(D\). Since \(\tau\) is assumed to fix a point, its projection to \(D/I\) certainly fixes an orbit of \(I\), implying that the decomposition group at \(p\) fixes a root of at least one \(g_{i,j}(X)\). This translates to \(t_{0}\) being an \(F_{p}\)-value of the respective rational function \(f_{i}\), completing the implication 4)\(\Rightarrow\) 3). Finally, it is easy to see (and well-known) that 4) and 5) are equivalent. The group-theoretical translation in Lemma 2.3 has several immediate, but noteworthy consequences.
In particular, the property of locally representing a number field is compatible with composition of functions, thus further emphasizing the special role of indecomposable functions in Question 1.

**Corollary 2.4**.: _If \(\{f_{1},\ldots,f_{r}\}\) and \(\{g_{1},\ldots,g_{s}\}\) each locally represent \(K\), then so does \(\{f_{i}\circ g_{j}\mid 1\leq i\leq r;1\leq j\leq s\}\)._

Proof.: Let \(L\supseteq K(t)\) be the splitting field of \(\prod_{i,j}(f_{i}(g_{j}(X))-t)\), and let \(\sigma\in\operatorname{Gal}(L/K(t))\). By assumption, \(\sigma\) fixes a root \(y\) of some \(f_{i}(X)-t\); in particular \(\sigma\in\operatorname{Gal}(L/K(y))\). But also by assumption, \(\sigma\) fixes a root of some \(g_{j}(X)-y\). This root \(x\) is a root of \(f_{i}(g_{j}(X))-t\), and the assertion follows from the equivalence 2)\(\Leftrightarrow\)4) in Lemma 2.3.

Finally, the following observations show that it does not make sense to replace "rational functions" by "polynomials" (see already (4.2) of [5]), or to restrict to the case \(r=1\) in Question 1.

**Corollary 2.5**.:

* _If_ \(f_{1},\ldots,f_{r}\in K[X]\) _are polynomials of degree_ \(>1\)_, then_ \(f_{1},\ldots,f_{r}\) _do not locally represent_ \(K\)_._
* _If_ \(f_{1}\in K(X)\) _is a single rational function of degree_ \(>1\)_, then_ \(f_{1}\) _does not locally represent_ \(K\)_._

Proof.: The inertia group generator at \(t\mapsto\infty\) in the joint splitting field of \(f_{i}(X)-t\) (\(i=1,\ldots,r\)) acts as a full cycle on each set of roots, and hence in particular is fixed point free. This shows a). In the situation of b), the Galois group acts transitively, and it is then well-known that it must contain a fixed point free element.
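For the introductory set \(\{X^{2},\,X^{2}+1,\,\frac{1}{1-X^{2}}\}\) over \(\mathbb{Q}\), condition 4) of Lemma 2.3 can be checked by hand: the joint splitting field is \(\mathbb{Q}(\sqrt{t},\sqrt{t-1})\) with group \(C_{2}\times C_{2}\) acting by sign changes. The following sketch (our own illustrative code, not from the paper) enumerates the four group elements:

```python
from itertools import product

def fixes_some_root(e1, e2):
    # action of the sign pattern (e1, e2) on one root of each factor:
    #   f1 = X^2        roots ±sqrt(t)            -> multiplied by e1
    #   f2 = X^2 + 1    roots ±sqrt(t-1)          -> multiplied by e2
    #   f3 = 1/(1-X^2)  roots ±sqrt(t-1)/sqrt(t)  -> multiplied by e1*e2
    return e1 == 1 or e2 == 1 or e1 * e2 == 1

# Lemma 2.3(4): every element of G = C2 x C2 fixes a root of some factor
assert all(fixes_some_root(e1, e2) for e1, e2 in product([1, -1], repeat=2))
```

Since \(e_{1}\), \(e_{2}\), and \(e_{1}e_{2}\) cannot all equal \(-1\), every group element fixes a root of some factor, which is the group-theoretic reason the three functions locally represent \(\mathbb{Q}\).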
### Intersectivity, exceptionality and related group-theoretic notions

In view of Lemma 2.3, it is worth focussing on the following property of proper subgroups \(U_{1},\ldots,U_{r}\subsetneq G\) of a finite group \(G\), which amounts exactly to saying that the action of \(G\) on the union of cosets of \(U_{i}\) (\(i=1,\ldots,r\)) is faithful and every element of \(G\) fixes at least one point:

(*) \(\bigcup_{x\in G}\bigcup_{i=1}^{r}U_{i}^{x}=G\), and \(\bigcap_{x\in G}\bigcap_{i=1}^{r}U_{i}^{x}=\{1\}\).

In the case where \(G=\operatorname{Gal}(f/K)\) is the Galois group of a separable polynomial \(f=f_{1}\cdots f_{r}\) over a number field \(K\) (with \(\deg(f_{i})\geq 2\) for all \(i\)), and the \(U_{i}\) are the subgroups fixing a root of \(f_{i}\) (\(i=1,\ldots,r\)), Condition (*) translates to the property that \(f\) has a root in all but finitely many completions \(K_{p}\) of \(K\). Polynomials with this property are known in the literature as (weakly)\({}^{5}\) intersective polynomials. Furthermore, \(f\) is called newly intersective if it has no proper intersective divisors. Such polynomials were studied, e.g., in [1] or [15].

Footnote 5: Usually, "intersective" is understood to imply existence of roots in _all_ \(K_{p}\), whereas "weakly intersective" refers to the same property for almost all \(K_{p}\). Since the main notions in our paper deal with properties for almost all primes, we will use the term "intersective" instead of "weakly intersective" for convenience.

Since a finite transitive permutation group necessarily possesses a fixed point free element, an irreducible polynomial cannot be intersective.
On the other hand, it is straightforward to see that for every non-cyclic group \(G\) which occurs as a Galois group over \(K\), there exists an intersective polynomial \(f=f_{1}\cdots f_{r}\in K[X]\) with Galois group \(G\), and the smallest number \(r\) of factors of such \(f\) is directly related to the "normal covering number" of \(G\), i.e., the smallest number of proper subgroups of \(G\) the union of whose \(G\)-conjugates is all of \(G\). We recall an important result on the normal covering numbers of alternating and symmetric groups. **Proposition 2.6** ([2]).: _There exists a constant \(c>0\) such that the following holds. Let \(G=S_{n}\) or \(A_{n}\) and let \(U_{1},\ldots,U_{r}\) be a collection of proper subgroups of \(G\) such that the union of all conjugates of \(U_{i}\) (\(i=1,\ldots,r\)) equals all of \(G\). Then \(r\geq cn\)._ Our notions also relate naturally to the concept of "exceptional polynomials" (or more generally exceptional covers). Recall that a polynomial \(f\in K[X]\) over a number field is called (arithmetically) exceptional if \(f\) induces a bijection on infinitely many residue fields of primes of \(K\). This property, once again, translates nicely to a permutation-group theoretic property. In particular, exceptionality necessarily yields a pair \((G,N)\) of permutation groups with the following property (see, e.g., [7, Lemma 3.3]): (**) \(N\trianglelefteq G(\leq S_{n})\) are two subgroups with the same orbits, and there exists a coset of \(N\) in \(G\) in which every element fixes a point. Note that in the classical setup of arithmetically exceptional functions, \(G\) and \(N\) in (**) are actually transitive, namely equal to the arithmetic and geometric monodromy group of the given function \(f\). The generalization to not necessarily irreducible covers has been considered in depth in [6]. 
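The simplest arithmetically exceptional function (again a standard illustration rather than an example from the text) is \(f(X)=X^{3}\): cubing is a bijection on \(\mathbb{F}_{p}\) exactly when \(\gcd(3,p-1)=1\), i.e. for the infinitely many primes \(p\not\equiv 1\bmod 3\). A brute-force check of this pattern:

```python
# x -> x^3 permutes F_p iff gcd(3, p-1) = 1, i.e. iff p is not 1 mod 3;
# since infinitely many such primes exist, X^3 is exceptional over Q.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def cube_is_bijection(p):
    return len({pow(x, 3, p) for x in range(p)}) == p

for p in (q for q in range(2, 200) if is_prime(q)):
    assert cube_is_bijection(p) == (p % 3 != 1)
print("X^3 permutes F_p exactly for p not congruent to 1 mod 3")
```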
The group-theoretical condition (**) is naturally related to intersectivity and to the problem of locally representing sets of functions; indeed, \(N=G\) in (**) (plus assuming that \(G\) has no fixed point) translates to (*). In the proof of Theorem 1.1, Condition (**) will however also feature crucially with \(N<G\). The following lemma relates fixed point free elements in almost simple permutation groups to such elements in groups of "product type". **Lemma 2.7**.: _Let \(G\leq S_{n}\) be an almost simple group with socle \(L\), let \(U_{1},\ldots,U_{r}\) be subgroups of \(G\) and assume that every coset of \(L\) in \(G\) contains an element which is fixed point free in all the actions on cosets of \(U_{i}\) in \(G\) (\(i=1,\ldots,r\)). Then, for any fixed \(k\geq 2\), every coset of \(L^{k}\) in the wreath product \(G\wr S_{k}(=G^{k}\rtimes S_{k})\) contains an element which is fixed point free in all the actions of \(G\wr S_{k}\) on cosets of \((U_{i})^{k}\rtimes S_{k}\) (\(i=1,\ldots,r\))._ Proof.: First, note that cosets of \((U_{i})^{k}\rtimes S_{k}\) can be identified with \(k\)-tuples \((\omega_{1},\ldots,\omega_{k})\), where the \(\omega_{j}\) are cosets of \(U_{i}\) in \(G\). Any element \(x\) of \(G\wr S_{k}\) can be written as \(x=((\sigma_{1},\ldots,\sigma_{k}),\tau)\) where \(\sigma_{1},\ldots,\sigma_{k}\in G\), \(\tau\in S_{k}\), and \(x\) acts on cosets of \((U_{i})^{k}\rtimes S_{k}\) via \((\omega_{1}\ldots,\omega_{k})^{x}=(\omega_{\tau^{-1}(1)}^{\sigma_{\tau^{-1}(1) }},\ldots,\omega_{\tau^{-1}(k)}^{\sigma_{\tau^{-1}(k)}})\). In particular, if such \((\omega_{1},\ldots,\omega_{k})\) is a fixed point of \(x\), then \(\sigma_{j}\sigma_{\tau(j)}\cdots\sigma_{\tau^{d_{j}-1}(j)}\) fixes \(\omega_{j}\), where \(d_{j}\) is the length of the cycle of \(\tau\) containing \(j\). Now fix \(\tau\in S_{k}\). 
To show the assertion, it suffices to show the existence of \((\sigma_{1},\ldots,\sigma_{k})\) such that for every \(j=1,\ldots,k\), the element \(\sigma_{j}\sigma_{\tau(j)}\cdots\sigma_{\tau^{d_{j}-1}(j)}\) is fixed point free in each of the actions on cosets of \(U_{i}\) in \(G\) (\(i=1,\ldots,r\)), where additionally the \(L\)-coset of each \(\sigma_{i}\) in \(G\) is prescribed. But this just amounts to producing finitely many (namely, one for each cycle of \(\tau\)) fixed point free elements in prescribed \(L\)-cosets in \(G\), which is possible by assumption. ## 3. Proof of Theorem 1.1 and Theorem 1.2 To make use of Theorem 2.1 we establish the following technical lemma: **Lemma 3.1**.: _Let \(\kappa\) be a field of characteristic zero. Let \(M|\kappa\) be a finite Galois extension and denote the set of composition factors of \(\text{Gal}(M|\kappa)\) by \(\mathcal{C}\). Let \(f_{1},f_{2},\ldots,f_{r}\in\kappa[X]\) where \(f_{i}\) has splitting field \(\Omega_{i}\) over \(\kappa\) and Galois group \(G_{i}:=\text{Gal}(\Omega_{i}|\kappa)\). Assume that every \(G_{i}\) is non-solvable with a unique minimal normal subgroup \(N_{i}\cong H_{i}^{m_{i}}\) for some simple non-abelian group \(H_{i}\not\in\mathcal{C}\). Let \(\Omega:=\Omega_{1}\Omega_{2}\cdots\Omega_{r}\), and let \(L\subset\Omega M\) be the maximal solvable subextension of \(\Omega M|\kappa\). Let \(E_{i}=\Omega_{i}LM\). Then the following holds:_ 1. \(\text{Gal}(E_{i}|LM)\cong N_{i}\)_._ 2. _For all_ \(i\) _and_ \(j\) _it holds that_ \(E_{i}=E_{j}\) _or_ \(E_{i}\cap E_{j}=LM\)_. If_ \(E_{i}=E_{j}\)_, then_ \(\Omega_{i}=\Omega_{j}\)_._ 3. _Let_ \(1\leq i_{1}<i_{2}<\cdots<i_{s}\leq r\) _be a minimal sequence such that for any_ \(j\leq r\) _it holds that_ \(E_{j}=E_{i_{m}}\) _for some_ \(m\leq s\)_. Then_ \(\text{Gal}(E_{1}E_{2}\cdots E_{r}|LM)\cong N_{i_{1}}\times N_{i_{2}}\times\cdots\times N_{i_{s}}\)_._ 4. _Let_ \(j_{0}\leq r\) _and_ \(j_{1},j_{2},\ldots,j_{m}\leq r\) _be indices such that_ \(\Omega_{j_{0}}\neq\Omega_{j_{k}}\) _for all_ \(k=1,2,\ldots,m\)_. 
Then the restriction map induces an isomorphism_ \[\text{Gal}(E_{j_{0}}E_{j_{1}}\cdots E_{j_{m}}|E_{j_{1}}\cdots E_{j_{m}})\cong\text{Gal}(E_{j_{0}}|LM)\cong N_{j_{0}}.\] Proof.: Ad 1) Write \(N_{i}=\text{Gal}(\Omega_{i}|K_{i})\) for some normal extension \(K_{i}|\kappa\). Since \(G_{i}/N_{i}\) is solvable it follows that \(K_{i}\) is the maximal solvable subextension of \(\Omega_{i}|\kappa\), and in particular that \(K_{i}\subset LM\). It follows that \[\text{Gal}(E_{i}|LM)\cong\text{Gal}(\Omega_{i}|\Omega_{i}\cap LM)\trianglelefteq\text{Gal}(\Omega_{i}|K_{i})=N_{i}.\] Thus, \(\text{Gal}(E_{i}|LM)\in\{\{1\},N_{i}\}\), and to exclude the possibility that \(\text{Gal}(E_{i}|LM)=\{1\}\) we have to check that \(\Omega_{i}\not\subset LM\). Notice that the composition factors of \(\text{Gal}(LM|\kappa)\) are the groups in \(\mathcal{C}\) and possibly a collection of cyclic groups of prime order. However, the composition factors of \(N_{i}\) are all \(H_{i}\), which by assumption is non-abelian and not in \(\mathcal{C}\). Thus, the composition factors of \(\text{Gal}(\Omega_{i}|\kappa)\) are not all included in the set of composition factors of \(\text{Gal}(LM|\kappa)\). That \(\Omega_{i}\not\subset LM\) then follows by the Jordan–Hölder Theorem. Ad 2) We consider the group \(\text{Gal}(E_{i}|E_{i}\cap E_{j})\). Notice that \(E_{i}\cap E_{j}\) is a Galois extension of \(\kappa\) containing \(LM\). Hence, \(E_{i}\cap E_{j}=(LM)F_{ij}\) for some finite \(\kappa\)-extension \(F_{ij}\), which we may assume is Galois over \(\kappa\). Thus, \[\text{Gal}(E_{i}|E_{i}\cap E_{j})\cong\text{Gal}(\Omega_{i}|LMF_{ij}\cap\Omega_{i}).\] The latter is a normal subgroup of \(G_{i}\) contained in \(N_{i}\), hence either trivial or \(N_{i}\). If \(\text{Gal}(E_{i}|E_{i}\cap E_{j})\cong N_{i}\), then \(E_{i}\cap E_{j}=LM\). 
On the other hand, if \(\text{Gal}(E_{i}|E_{i}\cap E_{j})\) is trivial, then \(LM\subsetneq E_{i}\subset E_{j}\), and therefore \(\text{Gal}(E_{j}|E_{i})\) is a proper subgroup of \(N_{j}\) satisfying \[\text{Gal}(E_{j}|E_{i})\cong\text{Gal}(\Omega_{j}|E_{i}\cap\Omega_{j})\trianglelefteq G_{j}.\] Since \(N_{j}\) is the unique minimal normal subgroup of \(G_{j}\), this proper normal subgroup must be trivial, and hence \(E_{j}=E_{i}\). Assume now that \(E_{i}=E_{j}\). Let \(\Omega_{ij}:=\Omega_{i}\Omega_{j}\), then \(\Omega_{i}LM=\Omega_{j}LM=\Omega_{ij}LM\). Notice that under this assumption we have \[\text{Gal}(\Omega_{ij}LM|\Omega_{i})=\text{Gal}(\Omega_{i}LM|\Omega_{i})\cong\text{Gal}(LM|\Omega_{i}\cap LM). \tag{1}\] On the other hand, we also have \[\text{Gal}(\Omega_{ij}LM|\Omega_{i})\twoheadrightarrow\text{Gal}(\Omega_{ij}|\Omega_{i})\cong\text{Gal}(\Omega_{j}|\Omega_{i}\cap\Omega_{j}). \tag{2}\] From (1) we infer that all composition factors of \(\text{Gal}(\Omega_{ij}LM|\Omega_{i})\) are either cyclic or from \(\mathcal{C}\). Thus, from the projection in (2) the same holds for the composition factors of \(\text{Gal}(\Omega_{j}|\Omega_{i}\cap\Omega_{j})\). Recall that the unique minimal normal subgroup \(\text{Gal}(\Omega_{j}|\Omega_{j}\cap LM)\) of \(G_{j}\) satisfies that all composition factors are \(H_{j}\), which is simple, non-abelian and not from \(\mathcal{C}\). Thus, it follows that \(\Omega_{i}\cap LM\subsetneq\Omega_{i}\cap\Omega_{j}\subseteq\Omega_{j}\), and therefore \(\Omega_{i}\cap\Omega_{j}=\Omega_{j}\), i.e. \(\Omega_{j}\subseteq\Omega_{i}\). Likewise it follows that \(\Omega_{i}\subseteq\Omega_{j}\), hence the claim. Ad 3) We show this by induction, i.e. that for \(n\leq s\) it holds that: \[\operatorname{Gal}(E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}|LM)\cong\prod_{m=1}^{n}N_{i_{m}}.\] For \(n=1\) this is part (1) of this Lemma. 
For the induction step recall that we have an embedding \[\iota:\operatorname{Gal}(E_{i_{1}}E_{i_{2}}\cdots E_{i_{n+1}}|LM)\hookrightarrow\operatorname{Gal}(E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}|LM)\times\operatorname{Gal}(E_{i_{n+1}}|LM),\] and the image of \(\iota\) consists exactly of the pairs \[(\sigma_{1},\sigma_{2})\in\operatorname{Gal}(E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}|LM)\times\operatorname{Gal}(E_{i_{n+1}}|LM)\] satisfying \[\sigma_{1}|_{E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}\cap E_{i_{n+1}}}=\sigma_{2}|_{E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}\cap E_{i_{n+1}}}.\] Thus, we have to show that \[E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}\cap E_{i_{n+1}}=LM.\] Since the groups \(H_{i_{j}}\) are simple and non-abelian it follows by the induction hypothesis that the only normal subgroups of \[\operatorname{Gal}(E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}|LM)\cong H_{i_{1}}^{m_{i_{1}}}\times H_{i_{2}}^{m_{i_{2}}}\times\cdots\times H_{i_{n}}^{m_{i_{n}}}\] are coming from the projection onto some of the factors of the \(H_{i_{j}}\). By assumption we have \(E_{i_{n+1}}\neq E_{i_{m}}\) for all \(m\leq n\), and therefore by part (2) of this Lemma even the stronger statement \(E_{i_{n+1}}\cap E_{i_{m}}=LM\) holds for all \(m\leq n\). Thus, it follows that \(E_{i_{1}}E_{i_{2}}\cdots E_{i_{n}}\cap E_{i_{n+1}}=LM\), as desired. Ad 4) By part (2) it follows that \(E_{j_{0}}\neq E_{j_{k}}\) for all \(k=1,\ldots,m\), and hence by part (3) it follows that \(E_{j_{0}}\cap E_{j_{1}}E_{j_{2}}\cdots E_{j_{m}}=LM\). Thus, the restriction map yields an isomorphism \[\operatorname{Gal}(E_{j_{0}}E_{j_{1}}E_{j_{2}}\cdots E_{j_{m}}|E_{j_{1}}E_{j_{2}}\cdots E_{j_{m}})\cong\operatorname{Gal}(E_{j_{0}}|E_{j_{0}}\cap E_{j_{1}}E_{j_{2}}\cdots E_{j_{m}})=\operatorname{Gal}(E_{j_{0}}|LM)\cong N_{j_{0}},\] where the last isomorphism is from part (1). 
From Lemma 2.3 and Lemma 3.1 we obtain the following necessary condition for a set of functions \(\{f_{1},f_{2},\ldots,f_{r}\}\) to locally represent a number field when it contains a function \(f_{i}\) of large degree with non-solvable Galois group: **Corollary 3.2**.: _Let \(\mathcal{C}\) be a collection of simple groups. Let \(K\) be a number field and let \(\{f_{1},f_{2},\ldots,f_{l},f_{l+1},\ldots,f_{r}\}\subset K(X)\) be a minimal locally representing set with \(0\leq l<r\) such that_ * _for_ \(i\leq l\) _all composition factors of_ \(\text{Mon}(f_{i})\) _belong to_ \(\mathcal{C}\)_,_ * _for_ \(l+1\leq i\leq r\) _it holds that_ \(\text{Mon}(f_{i})\) _has a unique minimal normal subgroup_ \(N_{i}\cong H_{i}^{m_{i}}\)_, where_ \(H_{i}\) _is non-abelian simple and does not belong to_ \(\mathcal{C}\)_._ _Let \(\Omega_{i}\) be the splitting field of \(f_{i}(X)-t\) over \(\overline{K}(t)\). Then for every index \(l+1\leq i_{0}\leq r\), there is a coset \(\sigma N_{i_{0}}\) of \(G_{i_{0}}\) such that every element of \(\sigma N_{i_{0}}\) fixes a root of some \(f_{j}(X)-t\) satisfying \(\Omega_{j}=\Omega_{i_{0}}\)._ Proof.: Let \(i_{0}\) be given and let \[\mathcal{S}_{i_{0}}:=\{j\mid\Omega_{j}=\Omega_{i_{0}}\}.\] Furthermore, let \(\Omega\) be the compositum of all \(\Omega_{i}\), and let \[\Omega_{i_{0}}^{c}:=\prod_{j\in\{1,2,\ldots,r\}\setminus\mathcal{S}_{i_{0}}}\Omega_{j}.\] Since the set \(\{f_{j}\mid j\in\{1,2,\ldots,r\}\setminus\mathcal{S}_{i_{0}}\}\) is not locally representing, there exists by Lemma 2.3 an element \(\sigma\in\operatorname{Gal}(\Omega|\overline{K}(t))\) that does not fix a root of any \(f_{j}(X)-t\) with \(j\in\{1,2,\ldots,r\}\setminus\mathcal{S}_{i_{0}}\). Let \(L\subset\Omega\) be the maximal solvable subextension of \(\Omega/\overline{K}(t)\). It follows that any element of \(\sigma\mathrm{Gal}(\Omega|\Omega_{i_{0}}^{c}L)\) fixes a root of some \(f_{j}(X)-t\) where \(j\in\mathcal{S}_{i_{0}}\). Let \(M\subset\Omega\) be the compositum of all \(\Omega_{i}\) for \(i\leq l\). Then \(M\subset\Omega_{i_{0}}^{c}\), and all composition factors of \(\mathrm{Gal}(M|\overline{K}(t))\) belong to \(\mathcal{C}\). By Lemma 3.1 (4) it follows that the restriction map yields an isomorphism \[\mathrm{Gal}(\Omega|\Omega_{i_{0}}^{c}L)=\mathrm{Gal}(\Omega_{i_{0}}\Omega_{i_{0}}^{c}LM|\Omega_{i_{0}}^{c}LM)\cong\mathrm{Gal}(\Omega_{i_{0}}|LM)\cong N_{i_{0}}.\] Thus, any element of \(\sigma|_{\Omega_{i_{0}}}N_{i_{0}}\) fixes a root of some \(f_{j}(X)-t\) where \(j\in\mathcal{S}_{i_{0}}\). Proof of Theorem 1.1.: Let \(N\) be a constant such that Theorem 2.1 applies for rational functions of degree at least \(\sqrt{N}\). 
Let \(\{f_{1},f_{2},\ldots,f_{r}\}\in K(X)\) be a minimally locally representing set of functions. Assume for the sake of a contradiction that there is some \(f_{i_{0}}\) such that \(\mathrm{Mon}(f_{i_{0}})\) is non-solvable and \(\mathrm{deg}(f_{i_{0}})\geq N\). By Theorem 2.1 we may then assume that the \(f_{j}\) are ordered such that for some \(1\leq l\leq r\) the following holds: * for \(i\leq l-1\) it holds that \(\mathrm{Mon}(f_{i})\) is solvable or (\(\mathrm{deg}(f_{i})<\sqrt{N}\) and hence) all composition factors of \(\mathrm{Mon}(f_{i})\) are of order less than \(\lfloor\sqrt{N}\rfloor!\), and * for \(l\leq i\leq r\) it holds that (\(\mathrm{deg}(f_{i})\geq\sqrt{N}\) and hence) \(G_{i}:=\mathrm{Mon}(f_{i})\) is non-solvable with minimal normal subgroup \(N_{i}\cong A_{m_{i}}^{k}\) where \(k\leq 2\) and \(m_{i}^{k}\geq N\). Denote again by \(\Omega_{i}\) the splitting field of \(f_{i}(X)-t\) over \(K(t)\). By Corollary 3.2 there is then a coset \(\sigma N_{i_{0}}\) in \(G_{i_{0}}\) such that every element of \(\sigma N_{i_{0}}\) fixes a root of one of the functions \(f_{j}(X)-t\) where \(j\) is from the set \[\mathcal{S}_{i_{0}}:=\{j\mid\Omega_{j}=\Omega_{i_{0}}\}.\] However, we claim that this cannot be the case. Note that any \(j\in\mathcal{S}_{i_{0}}\) must necessarily fulfill \(j\geq l\) by definition. By Theorem 2.1 there are the following cases to consider: 1. \(G_{i_{0}}\cong A_{m_{i_{0}}}\), \(m_{i_{0}}\geq\sqrt{N}\). By Theorem 2.1 it follows that for \(j\in\mathcal{S}_{i_{0}}\) the degree of \(f_{j}\) satisfies \(\mathrm{deg}(f_{j})\in\{m_{i_{0}},m_{i_{0}}(m_{i_{0}}-1)/2\}\), i.e. the stabilizer in \(G_{i_{0}}\) of a root of a \(f_{j}(X)-t\) has index \(m_{i_{0}}\) or \(m_{i_{0}}(m_{i_{0}}-1)/2\). Since we may of course assume \(\sqrt{N}>6\) it follows by [4] Theorem 5.2A that all subgroups of \(G_{i_{0}}\) of index \(m_{i_{0}}\) are conjugate, and that all subgroups of \(G_{i_{0}}\) of index \(m_{i_{0}}(m_{i_{0}}-1)/2\) are conjugate. 
Thus, by [4] Lemma 1.6B the action of \(G_{i_{0}}\) on the roots of \(f_{j}(X)-t\) with \(j\in\mathcal{S}_{i_{0}}\) is equivalent to the standard action of \(A_{m_{i_{0}}}\) on \(\{1,2,\ldots,m_{i_{0}}\}\) or the action on two-sets \(\{\{i,j\}\mid 1\leq i<j\leq m_{i_{0}}\}\). An element of \(A_{m_{i_{0}}}\) not fixing a point or two-element subset corresponds to an element without a \(1\)- or \(2\)-cycle, and it is easy to see that for \(m_{i_{0}}>7\) there is such an element. 2. \(G_{i_{0}}=S_{m_{i_{0}}}\), \(m_{i_{0}}\geq\sqrt{N}\). As in Case 1, the functions \(f_{j}\) with monodromy group \(S_{m_{i_{0}}}\) have degree \(m_{i_{0}}\) or \(m_{i_{0}}(m_{i_{0}}-1)/2\), and by [4] Theorem 5.2B the actions are equivalent to that of \(S_{m_{i_{0}}}\) on \(1\)- or \(2\)-element subsets of \(\{1,2,\ldots,m_{i_{0}}\}\). Again it is easy to see that every coset of \(A_{m_{i_{0}}}\) in \(S_{m_{i_{0}}}\) has an element which is fixed point free in both actions. 3. \(A_{m_{i_{0}}}^{2}\leq G_{i_{0}}\leq S_{m_{i_{0}}}^{2}\rtimes S_{2}\) acting as a product type group, and \(\mathrm{deg}(f_{i_{0}})=m_{i_{0}}^{2}\geq N\). In this case, the uniqueness (up to conjugacy) of the index \(m_{i_{0}}\) subgroup in \(S_{m_{i_{0}}}\) implies that there is only one possible conjugacy class of subgroups for the point stabilizer, and it then follows from Lemma 2.7 that every coset of \(N_{i_{0}}\) has a fixed point free element. Thus, every coset of \(N_{i_{0}}\) in \(G_{i_{0}}\) contains an element which is fixed point free in the action on the roots of all \(f_{j}(X)-t\) with splitting field \(\Omega_{i_{0}}\), contradicting that \(\{f_{1},f_{2},\ldots,f_{r}\}\) locally represents \(K\). We now turn to the proof of Theorem 1.2. The proof uses the same idea as that of Theorem 1.1, namely that the Galois group of the product of the \(f_{j}(X)-t\) of large degree factors as a direct product. 
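The fixed point free elements invoked in Cases 1 and 2 of the proof of Theorem 1.1, as well as the corresponding claim for \(n\geq 5\) used in the proof of Theorem 1.2 below, can be confirmed by exhaustive search for small degrees (a plain-Python sketch; note that a permutation with no cycle of length \(1\) or \(2\) is automatically fixed point free both on points and on \(2\)-subsets):

```python
from itertools import permutations

def cycle_lengths(p):
    seen, lens = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, l = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                l += 1
            lens.append(l)
    return lens

def parity(p):
    # 0 for even permutations, 1 for odd ones (inversion count mod 2)
    return sum(1 for i in range(len(p))
               for j in range(i + 1, len(p)) if p[i] > p[j]) % 2

# Cases 1 and 2: both cosets of A_8 in S_8 contain an element with no cycle
# of length 1 or 2, hence fixed point free on points and on 2-subsets.
found = set()
for p in permutations(range(8)):
    if min(cycle_lengths(p)) >= 3:
        found.add(parity(p))
        if found == {0, 1}:
            break
assert found == {0, 1}

# Theorem 1.2: for n >= 5, both cosets of A_n in S_n contain a fixed point
# free element in the natural action.
for n in (5, 6, 7):
    found = set()
    for p in permutations(range(n)):
        if all(p[i] != i for i in range(n)):
            found.add(parity(p))
            if found == {0, 1}:
                break
    assert found == {0, 1}
print("fixed point free elements found in every coset")
```

For instance, in \(A_{8}\) the cycle type \(3+5\) works, and in the odd coset an \(8\)-cycle does.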
Proof of Theorem 1.2.: Assume that \(\{f_{1},f_{2},\ldots,f_{l},f_{l+1},\ldots,f_{r}\}\subset K(X)\) is a minimally locally representing set of functions with alternating or symmetric monodromy group such that * \(\deg(f_{i})\in\{2,3,4,6\}\) for \(i\leq l\) * \(\deg(f_{i})\in\{5,7,8,\ldots\}\) for \(l+1\leq i\leq r\). Let \(\Omega_{i}=\text{Split}(f_{i}(X)-t|\overline{K}(t))\). For \(i\leq l\) the composition factors of \(\text{Mon}(f_{i})\) are among \(C_{2}\), \(C_{3}\) or \(A_{6}\), while for \(i\geq l+1\) it holds that \(\text{Mon}(f_{i})\) is non-solvable with unique minimal normal subgroup \(A_{\deg(f_{i})}\). It follows by Corollary 3.2 that for every index \(i_{0}\) with \(l+1\leq i_{0}\leq r\) there is a coset of \(A_{\deg(f_{i_{0}})}\) in \(\text{Mon}(f_{i_{0}})\) in which every element fixes a root of one of the functions \(f_{j}\) satisfying \(\Omega_{j}=\Omega_{i_{0}}\). Since \(\deg(f_{i})\neq 6\) for \(l+1\leq i\leq r\) it follows from [4], Theorem 5.2A and Theorem 5.2B, that the actions of \(\text{Gal}(\Omega_{i_{0}}|\overline{K}(t))\) on the roots of \(f_{j}(X)-t\) with \(\Omega_{j}=\Omega_{i_{0}}\) are all equivalent. It is now easy to see that for \(n\geq 5\) both cosets of \(A_{n}\) in \(S_{n}\) have a fixed point free element. Thus, the set \(\{f_{1},f_{2},\ldots,f_{l}\}\) is already locally representing, a contradiction. _Remark 1_.: * While the above proof of Theorem 1.1 itself uses the full force of monodromy classification as in Theorem 2.1, previously known partial results on this classification also allow strong conclusions already. 
As mentioned in [14, Proof of Theorem 1.1], it follows from previous classification results that any nonsolvable monodromy group \(G\) of a geometrically indecomposable rational function \(f\in K(X)\) of sufficiently large degree must necessarily act as a "product type" group \(A_{\ell}^{k}\leq G\leq S_{\ell}^{k}\rtimes S_{k}\), for suitable \(\ell\) and \(k\), where \(k\) is absolutely bounded from above (and hence, \(\ell\) is large). Assume that such \(f\) is part of a minimal set locally representing \(K\). By Corollary 3.2, there exists a coset of \(A_{\ell}^{k}\) in \(G\) all of whose elements fix a root of some rational function with the same splitting field as \(f(X)-t\). By Lemma 2.7, this induces finitely many proper subgroups \(U_{i}<A_{\ell}\) (\(i=1,\ldots,n\)) whose \(S_{\ell}\)-conjugates together cover one of the two cosets of \(A_{\ell}\) in \(S_{\ell}\). If necessary, add the subgroup \(U_{n+1}=A_{\ell}\) to get finitely many subgroups of \(S_{\ell}\) (resp., of \(A_{\ell}\)) whose \(S_{\ell}\)- (resp., \(A_{\ell}\)-)conjugates cover all of \(S_{\ell}\) (resp., \(A_{\ell}\)). Proposition 2.6 now implies that \(n\) has to grow at least linearly in \(\ell\). In particular, the above consideration already yields the following: Assume that Theorem 1.1 does not hold. Then for arbitrarily large \(B\), there exists a number field \(K\) and more than \(B\) geometrically indecomposable rational functions over \(K\) with pairwise non-isomorphic root fields but all with the same nonsolvable splitting field over \(K\). * Unlike the proof of Theorem 1.1, the one of Theorem 1.2 (and indeed, of Corollary 3.2) does not require monodromy classification of rational functions in any way, and uses only the group-theoretic intersectivity condition (*). In particular, the following result on intersective polynomials whose factors have "generic" Galois group is deduced in the same way. 
**Theorem 3.3**.: _Let \(K\) be a number field, and \(f=f_{1}\cdots f_{r}\in K[X]\) be a (nonlinear) newly intersective polynomial such that \(\text{Gal}(f_{i})\in\{S_{\deg(f_{i})},A_{\deg(f_{i})}\}\) for all \(i\in\{1,\ldots,r\}\). Then every \(f_{i}\) has degree \(2\), \(3\), \(4\) or \(6\)._ ## 4. Existence results In this section, we will give concrete examples to demonstrate that those classes of rational functions not excluded by Theorems 1.1 and 1.2 do indeed occur in suitable locally representing sets. We will ensure that all examples are defined over \(\mathbb{Q}\). We begin somewhat informally in Section 4.1 by listing some noteworthy low-degree nonsolvable examples. In Section 4.2, we specifically address low degree functions with symmetric monodromy group. In Sections 4.3 and 4.4, we deal more systematically with functions as in Case 2) of Theorem 1.1, thus eventually deducing Theorem 1.3. ### Some "sporadic" low-degree examples _Example 1_.: a) Let \(f_{1}(X)=\frac{(X^{2}+10X+5)^{3}}{X}\), \(f_{2}(X)=X^{3}(X^{2}+5X+40)\), and \(f_{3}(X)=\frac{(5X)^{3}(8X^{2}+25X+20)^{3}}{(X^{2}+5X+5)^{5}}\). Then \(f_{1}(X)-t\), \(f_{2}(X)-t\) and \(f_{3}(X)-t\) all have the same Galois closure over \(\mathbb{Q}(t)\) with Galois group isomorphic to \(S_{5}\), acting primitively on \(6\), \(5\) and \(10\) points respectively; indeed they all parameterize subcovers of Klein's icosahedral cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(60\). Since the \(S_{5}\)-conjugates of the maximal subgroups of index \(10\) and \(6\) cover the whole group, \(\{f_{1},f_{3}\}\) is an example of a (minimal) set locally representing \(\mathbb{Q}\). Furthermore, the conjugates of the index-\(6\) and index-\(5\) subgroups, while not covering all of \(S_{5}\), do cover all of \(A_{5}\), whence every automorphism fixing \(\operatorname{Fix}(A_{5})=\mathbb{Q}(\sqrt{5})(t)\) also fixes a root of \(f_{1}-t\) or of \(f_{2}-t\). 
Complement this by the function \(f_{4}(X)=X^{5}\); the splitting field of \(f_{4}-t\) contains \(\sqrt{5}\) as well, and every automorphism which does not fix \(\sqrt{5}\) is of even order in \(\operatorname{Gal}(X^{5}-t/\mathbb{Q}(t))=C_{5}\rtimes C_{4}\) and hence fixes a root of \(X^{5}-t\). In total, \(\{f_{1},f_{2},X^{5}\}\) is another minimal set of functions locally representing \(\mathbb{Q}\). * Let \(f_{1}(X)=\frac{f_{1,1}(X)}{f_{1,2}(X)}\) and \(f_{2}(X)=\frac{f_{2,1}(X)}{f_{2,2}(X)}\), where \[f_{1,1}(X):=(77X^{3}+10989X^{2}+129816X+496368)^{3}(77X^{2}+2376X+15472),\] \[f_{1,2}(X):=(11X^{2}-1296)^{4}(11X^{2}+143X+621),\] \[f_{2,1}(X):=7^{3}\cdot 5^{10}\cdot(11X^{2}+66X+91)^{3}(44X^{3}-33 X^{2}-570X+951),\] \[f_{2,2}(X):=11\cdot(11X^{2}+22X-89)^{4}(22X^{2}+110X+113)^{2}.\] These are two rational functions of degree \(11\) and \(12\) such that \(f_{1}(X)-t\) and \(f_{2}(X)-t\) have the same Galois closure with Galois group \(M_{11}\), the smallest Mathieu group; see [11]. Since every element of \(M_{11}\) lies in a point stabilizer in one of these two actions, these two rational functions yield another example. * Let \[f_{1}(X)=\frac{(X^{3}+16X^{2}+160X+384)^{3}}{X^{2}+13X+128}\] and \[f_{2}(X)=\frac{(X^{9}+11X^{8}+4X^{7}-868X^{6}+6174X^{5}-43974X^{4}+37492X^{3}-2 8852X^{2}-2967X+211)^{3}(X-5)}{-2^{7}\cdot(X^{3}-X^{2}-9X+1)^{7}}.\] This is taken from [10] and corresponds to two genus-\(0\) actions of \(P\Gamma L_{2}(8)\) of degrees \(9\) and \(28\). ### Functions with symmetric monodromy group We give examples of rational functions of degrees \(2,3,4\) and \(6\) which are part of a set of rational functions with \(S_{n}\)-monodromy and forming a minimal set in the sense of Question 1. One such example is the example of three quadratic functions given already in the introduction. We do not aim at classifying all further such sets and content ourselves with examples showing together that all the degrees do in fact occur. 
**Lemma 4.1**.: _The following are examples of minimal sets of rational functions with monodromy group \(S_{n}\) locally representing \(\mathbb{Q}\)._ * \(f_{1}(X)=X^{4}+aX^{2}+bX\) _and_ \(f_{2}(X)=-\frac{X^{3}+2aX^{2}+a^{2}X-b^{2}}{4X}\)_, for any_ \(a,b\in\mathbb{Q}\) _such that_ \(f_{1}(X)-t\) _has Galois group_ \(S_{4}\)_._ * \(f_{1}(X)=X^{5}(5X-6)\)_,_ \(f_{2}(X)=\frac{6^{6}X}{(X-1)^{3}(X-16)^{2}(X-25)}\)_,_ \(f_{3}(X)=5X^{2}-1\)_._ Proof.: In a), \(f_{1}(X)-t\) is a generic quartic polynomial (with constant coefficient taken as \(t\)) and \(f_{2}\) its cubic resolvent. In other words, \(\mathrm{Gal}((f_{1}(X)-t)(f_{2}(X)-t)/\mathbb{Q}(t))\) is \(S_{4}\), and the subgroups fixing a root of \(f_{1}\) resp., of \(f_{2}\) are \(S_{3}\) and \(D_{4}\leq S_{4}\). Since the conjugates of these two subgroups cover all of \(S_{4}\), the assertion follows. In b), \(f_{1}(X)=t\) gives a cover with Galois group \(S_{6}\), ramified at \(t=0\), \(t=-1\) and \(t=\infty\), with inertia groups generated by a 5-cycle, a transposition and a 6-cycle respectively. We use the fact that \(S_{6}\) has two different conjugacy classes of subgroups \(U,V\) both isomorphic to \(S_{5}\), and that the conjugates of \(U\) and \(V\), together with \(A_{6}\), cover all of \(S_{6}\). It then remains to quickly verify that the fixed fields of all these subgroups in the splitting field of \(f_{1}(X)-t\) are rational function fields; for the subgroup \(A_{6}\), this is easy and follows quickly from the shape of the discriminant of \(f_{1}(X)-t\). For the second class of \(S_{5}\)-subgroups, one verifies that the inertia group generators above become a 5-cycle, triple transposition and an element of cycle structure (3.2.1) in this coset action, and it is then again straightforward to compute that the underlying rational function is given by \(f_{2}\). 
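For part a), the resolvent relation can be made explicit: since the roots \(x_{1},\ldots,x_{4}\) of \(f_{1}(X)-t\) sum to zero, a short computation shows that the three roots of \(f_{2}(Y)=t\) are the squares \((x_{i}+x_{j})^{2}\) of the root-pair sums. A numeric sanity check of this (sample values \(a=b=1\), \(t=2\); a small Durand–Kerner iteration finds the quartic's roots without external dependencies):

```python
# Check numerically that the roots of f2(Y) = t are (x_i + x_j)^2 for the
# three pair-partitions of the roots x_1,...,x_4 of f1(X) = t.
a, b, t = 1.0, 1.0, 2.0  # sample parameters (assumed to give Galois group S_4)

def peval(coeffs, x):
    # Horner evaluation, leading coefficient first
    r = 0j
    for c in coeffs:
        r = r * x + c
    return r

def poly_roots(coeffs, iters=200):
    # Durand-Kerner iteration for a monic polynomial with simple roots
    n = len(coeffs) - 1
    rs = [complex(0.4, 0.9) ** k for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            d = 1 + 0j
            for j in range(n):
                if j != i:
                    d *= rs[i] - rs[j]
            rs[i] -= peval(coeffs, rs[i]) / d
    return rs

xs = poly_roots([1, 0, a, b, -t])           # roots of X^4 + aX^2 + bX - t

def f2(y):
    return -(y**3 + 2*a*y**2 + a**2*y - b**2) / (4*y)

for j in (1, 2, 3):                          # pairings {x0, xj} | {rest}
    y = (xs[0] + xs[j]) ** 2
    assert abs(f2(y) - t) < 1e-6
print("cubic resolvent relation verified")
```

Equivalently, \(Y=(x_{i}+x_{j})^{2}\) is \((x_{i}x_{j}+x_{k}x_{l})-a\), a shift of the classical resolvent-cubic roots.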
### Examples involving monomials and Chebyshev polynomials Rational functions \(f(X)\in\mathbb{C}(X)\) such that the splitting field of \(f(X)-t\) has genus 0 are well-understood since the days of Klein. In particular, the only infinite series are given by degree-\(n\) functions totally ramified at two places (and hence, geometric monodromy group \(\mathrm{Gal}(f(X)-t/\mathbb{C}(t))\cong C_{n}\)), and by degree-\(n\) functions with one totally ramified place and two further ramified places with inertia group generated by an involution (and hence, geometric monodromy group the dihedral group \(D_{n}\)). Up to fractional linear transformations \(f\mapsto\lambda\circ f\circ\mu\) (with \(\lambda,\mu\in\mathbb{C}(X)\) of degree 1), the former are equal to monomials \(X^{n}\) and the latter to the Chebyshev polynomials \(T_{n}(X)\). We show how to realize these as part of a pair or triple of rational functions locally representing \(\mathbb{Q}\). **Lemma 4.2**.: _Let \(p\) be an odd prime, \(f_{1}(X)=X^{p}\) and \(f_{2}(X)=T_{p}(X)\). There exists a quadratic rational function \(f_{3}(X)\in\mathbb{Q}(X)\) such that \(\{f_{1},f_{2},f_{3}\}\) is a minimal set locally representing \(\mathbb{Q}\)._ Proof.: Both \(f_{1}\) and \(f_{2}\) have monodromy group \(AGL_{1}(p)=C_{p}\rtimes C_{p-1}\) over \(\mathbb{Q}\), whence \(\mathrm{Gal}((f_{1}(X)-t)(f_{2}(X)-t)/\mathbb{Q}(t))\) is a subgroup of \(AGL_{1}(p)^{2}\). Furthermore, the splitting fields of both \(f_{1}(X)-t\) and \(f_{2}(X)-t\) contain the degree \(\frac{p-1}{2}\) extension \(\mathbb{Q}(\cos(2\pi/p))/\mathbb{Q}\) (e.g., as a consequence of the so-called "branch cycle lemma", although at least for \(f_{1}\), the claim is obvious). Next, consider the two splitting fields of \(f_{1}(X)-t\) and of \(f_{2}(X)-t\) over the common constant extension \(\mathbb{Q}(\cos(2\pi/p))(t)\). 
Both have Galois group \(D_{p}\), and must furthermore be linearly disjoint, since otherwise their intersection would be a normal extension of \(\mathbb{Q}(\cos(2\pi/p))(t)\), and would thus contain the quadratic subextension, which is impossible because for \(f_{1}\), this quadratic extension is still a constant extension, whereas for \(f_{2}\) it is non-constant (ramified at \(t=\pm 1\)). In total, we have obtained that \(G:=\mathrm{Gal}((f_{1}(X)-t)(f_{2}(X)-t)/\mathbb{Q}(t))\cong\{(g,h)\in AGL_{1}(p)^{2}\mid gh^{-1}\in D_{p}\}\). Now set \(U:=\{(g,h)\in G\mid gh^{-1}\in C_{p}\}\). This is an index-2 normal subgroup of \(G\). Let \(F\supset\mathbb{Q}(t)\) be the fixed field of \(U\) inside the splitting field of \((f_{1}-t)(f_{2}-t)\). Since the splitting field of \(f_{2}-t\) is ramified at \(t=\pm 1\) of ramification index 2, whereas the splitting field of \(f_{1}-t\) is unramified at these points, the inertia group generators at \(t=\pm 1\) in the joint Galois closure are of the form \((1,\sigma)\in G\) with \(\sigma\in D_{p}\setminus C_{p}\). In particular, these are not contained in \(U\), whence the quadratic extension \(F/\mathbb{Q}(t)\) is ramified at \(t=\pm 1\) (and only there). \(F\) is thus of the form \(F=\mathbb{Q}(\sqrt{c\frac{t+1}{t-1}})\) for some constant \(c\), and hence contains the roots of \(f_{3}(X)-t\) with \(f_{3}(X):=\frac{X^{2}+c}{X^{2}-c}\). By construction, an element of \(G\) not fixing a root of \(f_{3}-t\) must be of the form \((g,h)\) where \(g\) and \(h\) are not both in \(C_{p}\). But then one of them is an element of \(AGL_{1}(p)\leq S_{p}\) fixing a point, i.e., a root of \(f_{i}-t\) for some \(i\in\{1,2\}\). 
Finally minimality follows because an element \((g,h)\) where both \(g\) and \(h\) are \(p\)-cycles fixes no root of \(f_{i}-t\) (\(i=1,2\)), and an element of the form \((g,h)\) where \(g\) is a \(p\)-cycle and \(h\) is an involution in \(D_{p}\) fixes no root of \(f_{1}-t\) or \(f_{3}-t\) (and same for \(f_{2}\) and \(f_{3}\)). **Lemma 4.3**.: _Let \(p\equiv 3\) mod \(4\) be a prime, and let \(p_{1},\ldots,p_{r}\in\mathbb{Z}\setminus\{0,1\}\) be such that \(p,p_{1},\ldots,p_{r}\) are pairwise coprime (\(r\geq 0\)). Then there exist rational functions \(f_{1}(X),\ldots f_{r}(X)\), all linearly equivalent to \(X^{p}\) over \(\mathbb{C}\), such that \(\{f_{1},\ldots,f_{r},f_{r+1},f_{r+2}\}\) is a minimal set locally representing \(\mathbb{Q}\), where \(f_{r+1}:=\frac{T_{p}(X)-1}{T_{p}(X)+1}\) and \(f_{r+2}:=-p\cdot p_{1}\cdots p_{r}X^{2}\)._ Proof.: We construct \(f_{i}\) (\(i=1,\ldots,r\)) in the following way: Begin with \(f_{r+1}(X)=\frac{T_{p}(X)-1}{T_{p}(X)+1}\), i.e., simply the \(p\)-th Chebyshev polynomial, up to fractional linear transformation. The transformations have been applied for convenience to move the critical values \(\pm 1\) of \(T_{p}\) to the critical values \(0\) and \(\infty\) of \(f_{r+1}(X)\). In particular, \(f_{r+1}(X)=t\) is ramified over \(t=0\) and \(t=\infty\) with inertia group generated by an involution in \(D_{p}\), and has exactly one more branch point (with inertia group generated by a \(p\)-cycle). The discriminant of \(f_{r+1}(X)-t\) is thus equivalent, up to squares, to \(ct\) for a constant \(c\), and precise calculation gives \(c=-p\).6 Now set \(t=-pp_{i}s^{2}\). 
This corresponds to pulling back the splitting field of \(f(X)-t\) by the quadratic extension \(\mathbb{Q}(\sqrt{-pp_{i}t})/\mathbb{Q}(t)\), and since the latter is ramified at \(t=0\) and \(t=\infty\), Abhyankar's lemma yields that the pulled-back extension is unramified at \(s=0\) and \(s=\infty\), and instead ramified (with inertia group generated by a \(p\)-cycle) only at the two preimages of the third branch point of \(f(X)-t\). Hence \(f(X)=-pp_{i}s^{2}\) yields a rational genus-0 curve, corresponding to the degree-\(p\) cover \(\mathbb{P}^{1}_{\mathbb{Q}}\to\mathbb{P}^{1}_{\mathbb{Q}}\) with the base line parameterized by \(s\). It is thus given by a rational function \(s=f_{i}(X)\), and the ramification type shows that this function is linearly equivalent, over \(\mathbb{C}\), to \(X^{p}\). Footnote 6: Here, we have used that \(p\equiv 3\) mod \(4\). Now let \(G:=\operatorname{Gal}(\prod_{i=1}^{r+1}(f_{i}(X)-t))\leq AGL_{1}(p)^{r+1}\). By construction, the unique quadratic subextension (i.e., the field \(\mathbb{Q}(t)(\sqrt{\Delta_{i}})\), where \(\Delta_{i}\) is the discriminant) of the splitting field of \(f_{i}-t\) equals \(\mathbb{Q}(\sqrt{p_{i}})(t)/\mathbb{Q}(t)\) for \(i=1,\ldots,r\); and \(\mathbb{Q}(\sqrt{-pt})\) for \(i=r+1\). Thus, the splitting field \(\mathbb{Q}(\sqrt{-pp_{1}\cdots p_{r}t})\) of \(f_{r+2}-t\) is contained in the joint splitting field of the others, and moreover equals the fixed field of \(U:=\{g=(g_{1},\ldots,g_{r+1})\in G\leq AGL_{1}(p)^{r+1}\mid sgn(g_{1}\cdots g_{ r+1})=+1\}\). On the other hand, an element of \(G\) fixing no root of any \(f_{i}-t\), \(i=1,\ldots,r+1\), must act as a \(p\)-cycle on each set of roots, and hence certainly belong to \(U\). We have thus obtained that \(\{f_{1},\ldots,f_{r+2}\}\) locally represents \(\mathbb{Q}\). 
Minimality now follows easily; e.g., an element of order \(p\) in \(G\) fixes no root of \(f_{i}-t\) for any \(i\leq r+1\); and an element fixing exactly \(\sqrt{\Delta_{j}}\), where \(\Delta_{j}\) is the discriminant of \(f_{j}-t\), but no other \(\sqrt{\Delta_{i}}\) for \(i\leq r+1\) does not fix a root of \(f_{r+2}-t\). That such an element exists is due to the pairwise coprimeness of \(p,p_{1},\ldots,p_{r}\). ### Functions with Galois closure of genus \(1\) Here we show that (geometrically indecomposable) rational functions \(f(X)\in\mathbb{Q}(X)\) such that the splitting field of \(f-t\) is of genus \(1\) occur as part of a set locally representing \(\mathbb{Q}\), as long as their degree is sufficiently large. Such functions arise from isogenies or endomorphisms of elliptic curves, as outlined in detail, e.g., in [7, Section 6]. **Theorem 4.4**.: _Let \(f(X)\in\mathbb{Q}(X)\) be a geometrically indecomposable rational function of degree \(>163\) such that the splitting field of \(f(X)-t\) is of genus \(1\). Then \(f\) is part of a minimal set of (at most four) rational functions locally representing \(\mathbb{Q}\)._ Proof.: As detailed in [7, Section 6], such a function \(f\in\mathbb{Q}(X)\) is necessarily of degree \(p^{2}\) for a prime \(p\), with exactly three branch points, of ramification indices \((3,3,3)\), \((2,4,4)\) or \((2,3,6)\).7 Indeed, any other such \(f\) would have to be of prime degree (cf., e.g., [14, Table 9.1]). As detailed in [7, Section 6], the only such cases which yield functions defined over \(\mathbb{Q}\) are those with four branch points, all of ramification index \(2\) (in which case the geometric monodromy group equals \(D_{p}\), and the arithmetic one lies between \(D_{p}\) and \(AGL_{1}(p)\)). Such functions, however, arise from elliptic curves over \(\mathbb{Q}\) with a \(p\)-isogeny, and Mazur's theorem [13] yields that this can happen only for an explicitly known list of \(p\leq 163\). 
Footnote 7: Only when dropping the condition of _geometric_ indecomposability does a fourth possible ramification type \((2,2,2,2)\) appear. Now, again as detailed in [7, Section 6], for each individual ramification type \((3,3,3)\), \((2,4,4)\) or \((2,3,6)\), the corresponding degree-\(p^{2}\) function \(f\) has geometric monodromy group \(G:=\operatorname{Gal}(f(X)-t/\overline{\mathbb{Q}}(t))\) isomorphic to \((C_{p}\times C_{p})\rtimes C_{d}\leq AGL_{1}(\mathbb{F}_{p^{2}})\) with \(d\in\{3,4,6\}\). The geometric indecomposability assumption furthermore enforces \(p\equiv-1\) mod \(d\), since indeed \(p\equiv 1\) mod \(d\) would force the point stabilizer \(C_{d}\) in \(G\) to normalize some subgroup \(C_{p}\), in which case it would no longer be a maximal subgroup. Under these conditions, the arithmetic monodromy group \(A\) is furthermore isomorphic to \((C_{p}\times C_{p})\rtimes(C_{p^{2}-1}\rtimes C_{2})=A\Gamma L_{1}(\mathbb{F}_{p^{2}})\). Since \(\frac{|A|}{|G|}=\frac{2(p^{2}-1)}{d}\) is necessarily divisible by \(4\), \(A/G\) has a quotient \(C_{2}\times C_{2}\), i.e., the splitting field of \(f(X)-t\) contains a biquadratic extension \(\mathbb{Q}(\sqrt{\alpha},\sqrt{\beta})/\mathbb{Q}\). Now let us look at the fixed points of elements in \(A\). Since \(AGL_{1}(\mathbb{F}_{p^{2}})\cong(C_{p}\times C_{p})\rtimes C_{p^{2}-1}\leq S_{p^{2}}\) is a Frobenius group, all elements outside of the socle \(C_{p}\times C_{p}\) have a fixed point. In particular, the only fixed-point free elements of \(A\) are the non-identity elements of \(C_{p}\times C_{p}\), and (certain) elements outside of the index-\(2\) normal subgroup \(N:=AGL_{1}(\mathbb{F}_{p^{2}})\). Say that (the constant extension) \(\mathbb{Q}(\sqrt{\alpha})(t)\) is the fixed field of \(N\). We next construct functions \(f_{1},f_{2},f_{3}\in\mathbb{Q}(X)\) fulfilling all of the following: 1. 
\(f_{1}\) is linearly equivalent over \(\mathbb{C}\) to \(X^{3}\) and its splitting field contains \(\mathbb{Q}(\sqrt{\alpha})\). 2. \(\deg(f_{2})\) is a prime \(q\equiv 3\) mod \(4\), and \(f_{3}(X)\in\mathbb{Q}(X)\) is a quadratic rational function such that \(\operatorname{Gal}((f_{2}(X)-t)(f_{3}(X)-t)/\mathbb{Q}(t))=AGL_{1}(q)\times C _{2}\), and the joint splitting field contains \(\sqrt{\beta}\), but not \(\sqrt{\alpha}\). Constructing \(f_{1}\) is easy and amounts to a special case of (one of) the functions \(f_{1},\dots f_{r}\) constructed in the proof of Lemma 4.3. In the same way, one may choose \(r=1\) in Lemma 4.3 and take \(f_{2}\) and \(f_{3}\) from there; as can be seen in the proof of that result, these are constructed such that the joint splitting field has a unique quadratic constant extension, fixed by the diagonal subgroup \(U:=\{(g,h)\in AGL_{1}(q)\times C_{2}\mid ord(g)\equiv ord(h)\bmod 2\}\), and suitable choice of parameters allows us to take this quadratic extension to be \(\mathbb{Q}(\sqrt{\beta})\). Consider now the splitting field \(\Omega/\mathbb{Q}(t)\) of all \(h(X)-t\) with \(h\in\{f,f_{1},f_{2},f_{3}\}\). We know already that all \(\sigma\in\operatorname{Gal}(\Omega/\mathbb{Q}(t))\) which fix \(\sqrt{\alpha}\), but not \(\sqrt{\beta}\) must fix a root of \(f(X)-t\). On the other hand, all \(\sigma\) which do not fix \(\sqrt{\alpha}\) project to transpositions in \(\operatorname{Mon}(f_{1})=S_{3}\), hence fixing a root of \(f_{1}(X)-t\); and all \(\sigma\) which fix \(\sqrt{\beta}\) project to an element of \(AGL_{1}(q)\times C_{2}\) which is either trivial on the second component (in which case it fixes a root of \(f_{3}-t\)) or of even order on the first component (in which case it fixes a root of \(f_{2}-t\)). It remains to be seen that \(f\) cannot be dropped from the locally representing set \(\{f,f_{1},f_{2},f_{3}\}\). 
But by construction, the splitting fields of \(f_{1}-t\) and of \((f_{2}-t)(f_{3}-t)\) are linearly disjoint whence there exists an element acting as a full cycle on the set of roots of \(f_{i}-t\) for each \(i\in\{1,2,3\}\) simultaneously. ### Proof of Theorem 1.3 The examples collected in the previous sections are already enough to derive Theorem 1.3. We summarize the reasoning behind this. Firstly, a geometrically indecomposable rational function \(f(X)\in\mathbb{C}(X)\) whose splitting field is still of genus \(0\) is known to be linearly equivalent (over \(\mathbb{C}\)) to \(X^{p}\), \(T_{p}(X)\) (for some prime \(p\)) or to a subcover of the tetrahedral, octahedral or icosahedral Galois covers \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) (with geometric monodromy group \(A_{4}\), \(S_{4}\) and \(A_{5}\), and three branch points of ramification indices \((2,3,3)\), \((2,3,4)\) and \((2,3,5)\) respectively); cf., e.g., [12, Theorem I.6.2]. In fact, the icosahedral cases are covered by Example 1 and the octahedral ones by Lemma 4.1a); the interested reader may want to verify that the tetrahedral case can be handled similarly, so that indeed the cases with splitting field of genus \(0\) occur (up to \(\mathbb{C}\)-linear equivalence) without exception. Combine this with Theorem 4.4 to readily obtain Theorem 1.3. _Question 2_.: Can the restriction "up to \(\mathbb{C}\)-linear equivalence" in Theorem 1.3 be dropped? In several individual cases, this is actually true. However, notably the following case might be problematic: Let \(p\equiv 3\) mod \(4\) be a prime and \(f(X)\in\mathbb{Q}(X)\) a degree-\(p\) rational function totally ramified at exactly two algebraically conjugate points, with monodromy group \(\operatorname{Mon}(f)=C_{p}\rtimes C_{(p-1)/2}\). Such a function (called a Redei function) exists for all such \(p\) and can be constructed via pullback from the Chebyshev polynomial \(T_{p}\) similarly as the functions \(f_{i}\) in Lemma 4.3. 
Evidently, such \(f\) is linearly equivalent to \(X^{p}\) over \(\mathbb{C}\), but not over \(\mathbb{Q}\). Via choice of \(p\) in suitable residue classes, we may assume the smallest prime divisor of \(\frac{p-1}{2}\) to be arbitrarily large, and it is even expected that this value itself is a (Germain) prime infinitely often. Our approaches above, which rely crucially on the existence of certain small degree subfields in the Galois closure, do not carry over to this case; is it still possible to find an alternative construction? **Acknowledgement:** The first-named author acknowledges the support of the Austrian Science Fund (FWF): W1230.
2301.08817
Document Summarization with Text Segmentation
In this paper, we exploit the innate document segment structure for improving the extractive summarization task. We build two text segmentation models and find the most optimal strategy to introduce their output predictions in an extractive summarization model. Experimental results on a corpus of scientific articles show that extractive summarization benefits from using a highly accurate segmentation method. In particular, most of the improvement is in documents where the most relevant information is not at the beginning thus, we conclude that segmentation helps in reducing the lead bias problem.
Lesly Miculicich, Benjamin Han
2023-01-20T22:24:22Z
http://arxiv.org/abs/2301.08817v1
# Document Summarization with Text Segmentation ###### Abstract. In this paper, we exploit the innate document segment structure for improving the extractive summarization task. We build two text segmentation models and find the most optimal strategy to introduce their output predictions in an extractive summarization model. Experimental results on a corpus of scientific articles show that extractive summarization benefits from using a highly accurate segmentation method. In particular, most of the improvement is in documents where the most relevant information is not at the beginning thus, we conclude that segmentation helps in reducing the lead bias problem. text segmentation, extractive summarization
[…] meeting transcripts. Here, we adapt and optimize them for section detection in text documents. The task is defined as follows: Given a set of input sentences \(S=\{S_{1},\dots,S_{M}\}\) with an underlying segment structure, the objective is to predict a sequence \(Y=\{y_{1},\dots,y_{M}\}\), where \(y_{i}\) is a binary value indicating whether \(S_{i}\) is the beginning of a new segment. ### Unsupervised Segmentation We base our model on the unsupervised segmentation approach proposed in (Liu et al., 2017). It is a modified version of TextTiling (Chen et al., 2017) that detects topic changes with a similarity score based on BERT embeddings (Chen et al., 2017). We first compute the representations for every sentence. Then, we divide the document into overlapping windows and perform max pooling to get the window representation. We compute cosine similarity among adjacent windows and derive segment boundaries where the semantic similarity is lower than a given threshold. 
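The boundary-detection step just described can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: arbitrary vectors stand in for the BERT sentence embeddings, and `window` and `threshold` are the two tuned hyper-parameters.

```python
# Minimal sketch of TextTiling-style boundary detection: max-pool the
# embeddings of the windows before and after each candidate position and
# place a boundary where their cosine similarity drops below a threshold.
import numpy as np

def segment_boundaries(sent_embs, window=5, threshold=0.5):
    """Return indices i such that sentence i begins a new segment."""
    E = np.asarray(sent_embs, dtype=float)
    boundaries = []
    for i in range(window, len(E) - window + 1):
        left = E[i - window:i].max(axis=0)   # max-pooled window before i
        right = E[i:i + window].max(axis=0)  # max-pooled window after i
        cos = left @ right / (np.linalg.norm(left) * np.linalg.norm(right))
        if cos < threshold:                  # low similarity => topic change
            boundaries.append(i)
    return boundaries
```

On two artificially distinct topics (e.g., ten sentences embedded near one axis followed by ten near another), the detector places a single boundary between them.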
We adjust the window size and the similarity threshold using a validation set. ### Supervised Segmentation We use the supervised segmentation approach proposed in (Liu et al., 2017). We divide the document into overlapping windows, and each window is encoded with a transformer network (Liu et al., 2017). The segmentation is performed as a sequence labeling task, where each token in the sequence is assigned a binary label indicating whether it is the start of a new segment. We initialize the weights of the model with the pretrained model DeltaLM (Liu et al., 2017), and adjust the window size, stride size, and classification threshold using a validation set. ## 4. Extractive Summarization Extractive summarization is the task of finding the subset of sentences in a document that best summarize it. Following (Chen et al., 2017), we define extractive summarization as a sequence labeling task. Given a set of input sentences \(S=\{S_{1},\dots,S_{M}\}\), the objective is to assign a label \(y_{i}\in\{0,1\}\) to each \(S_{i}\), indicating whether the sentence should be included in the summary. The model is composed of two transformer encoders: word-level and sentence-level. Each document is tokenized and encoded with the word-level transformer. We introduce a special token \([CLS]\) at the end of each sentence. The output vector corresponding to this token serves as the sentence representation. All sentence representations plus their corresponding positions are input to a secondary sentence-level transformer. The word-level transformer is initialized with the pretrained model DeltaLM (Liu et al., 2017), whereas the sentence-level transformer is initialized with random values, and it is composed of only 2 layers. We use binary cross-entropy to train the model. In order to manage long input sequences, the documents are chunked into equal-size blocks. Each chunk is encoded with the word-level transformer independently. 
Then the outputs of the chunks are concatenated, and the rest of the model is unchanged. ### Integrating Segment Information We deem it adequate to integrate the segment information in the sentence-level transformer. The referred segment information can be either the segment position in the document or the segment semantic representation. Both are relevant and serve different purposes: one learns the location of the relevant information, and the other spots the relevant segment depending on its content. #### 4.1.1. Segment position encoding We use a learned positional encoding with a maximum of 10 segments. To avoid position bias, we applied normalization to the number of segments in the document as follows: \(pos_{i}=i\cdot max_{seg}/(n_{seg}+1)\), where \(pos_{i}\) is the final position, \(i\) is the segment index, \(max_{seg}=10\), and \(n_{seg}\) is the number of segments in the document. #### 4.1.2. Segment Embedding We calculate the segment embedding by applying pooling to its token embeddings. We used maximum, minimum, and mean pooling. Preliminary experiments showed that mean pooling has better results, thus we use it. #### 4.1.3. Segment Position HiStruct Following (Liu et al., 2017), we encode positions in a hierarchical manner by summing the segment position and the sentence's relative position in the corresponding segment. According to the original experiments in the paper, using learnable embeddings and summing performed the best, thus we apply the same strategy. We integrate the segment information by either adding or concatenating the segment representations. ## 5. Data and Metrics For our experiments, we use the Arxiv dataset (Chen et al., 2017). It contains scientific articles with annotations of sections and sentences. It is composed of 203,037 samples for training, 6,436 for validation, and 6,440 for testing. We use sections as segment markers. Table 1 shows the statistics of sentences per section in the training set. 
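The segment-position normalization of Section 4.1.1 can be sketched as a one-line function. Rounding the normalized value down to an integer position id is an assumption of this sketch:

```python
# Sketch of the normalization pos_i = i * max_seg / (n_seg + 1): segment
# indices of any document are mapped onto at most max_seg = 10 learned
# position ids, regardless of how many segments the document has.
def normalized_position(i, n_seg, max_seg=10):
    """Map segment index i of a document with n_seg segments to a position id."""
    return int(i * max_seg / (n_seg + 1))
```

For instance, a 60-segment document (the maximum in the corpus) still maps every segment into the range \([0,10)\).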
\begin{table} \begin{tabular}{l c} \hline \hline **Statistics** & \(\#\) \\ \hline Avg. sentences per summary & 11 \\ Avg. sections per document & 5.5 \\ Avg. sentences per document & 130 \\ Max. sentences per summary & 110 \\ Max. sections per document & 60 \\ Max. sentences per document & 1268 \\ \hline \hline \end{tabular} \end{table} Table 1. Sections and sentences statistics on the Arxiv dataset. ### Summarization We use the ROUGE score (Chen et al., 2017) for evaluating the summarization models. It measures the n-gram overlap between the predicted summary and a reference. We report the F1 score of uni-grams (R1), bi-grams (R2), and the longest matching sequence (RL). ### Text Segmentation Two standard evaluation metrics are used to evaluate text segmentation: Pk (Chen et al., 2017) and WinDiff (Liu et al., 2017). Pk represents the probability that a randomly chosen pair of words at a distance of \(k\) is inconsistently classified; that is, for one segmentation the pair lies within the same segment, while for the other the pair spans across segment boundaries. This is implemented by using a sliding window of size set to half of the average true segment length, and counting how many times the predictions differ from the reference. This probability can be further decomposed into two conditional probabilities: the miss and the false alarm probabilities. WinDiff is a modification of Pk where the algorithm slides a fixed-sized window across the text and penalizes whenever the number of predicted boundaries within the window does not match the true number of boundaries within the same window. ### Lead bias Following (Bang et al., 2017), we calculate the R1 score on three different label distributions, D-early, D-middle and D-late, which are obtained by first sorting documents by the average sentence position of the positively labeled sentences: D-early are the first 100 documents, D-middle are the middle 100 documents, and D-late are the last 100 documents. ## 6. 
Experimental Results In this section, we describe the results of both unsupervised and supervised text segmentation methods. We also compare the different methods for integrating text segmentation into the extractive summarization model based on ground-truth segments. Finally, we use the best integration method to report final results on summarization. ### Unsupervised Segmentation We tuned the hyper-parameters of the model using 15 documents from the validation set. We evaluated window sizes in the range \([1,5,10,15]\) and thresholds in \([0.4,0.5,0.6]\) (see Table 2). We picked 0.5 and 5 as the threshold and the window size respectively. We include two simple baseline models: a _Random_ method that places segment boundaries uniformly at random, and an _Even_ method that places boundaries every \(k\) sentences. The results are shown in Table 3. ### Supervised Segmentation In the supervised segmentation model, a document is processed by sliding windows. The window's size is the number of sentences to be processed in a chunk. We tested sizes in the range of \([10,40]\). Figure 1(a) shows the Pk score on the development set. The best score was obtained with a window of 20. Similarly, the stride is the number of overlapping sentences in the windows. We evaluated values in the range of \([3,12]\). Figure 1(b) shows the Pk score on the development set with a window size of 20. The best score was obtained with a stride of 7; the best value is consistent across all window sizes. Finally, the model outputs a score in the range of \([0,1]\) for each sentence. The score threshold determines whether the sentence is the start of a new segment. Figure 1(c) shows the WinDiff scores for different thresholds on the development set obtained with the best model. The optimum threshold value is 0.35. The final results are shown in Table 3.
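For reference, the Pk computation described in Section 5.2 can be sketched as follows. This is our own minimal version over segment-id sequences; function and variable names are illustrative, not taken from the papers cited.

```python
def pk_score(ref, hyp, k=None):
    """Pk: probability that a pair of sentences at distance k is classified
    inconsistently (same segment in one segmentation, different in the other).

    ref, hyp: lists assigning a segment id to each sentence.
    """
    if k is None:
        # Window size: half of the average true segment length.
        k = max(1, round(len(ref) / len(set(ref)) / 2))
    disagreements = 0
    pairs = len(ref) - k
    for i in range(pairs):
        same_ref = ref[i] == ref[i + k]
        same_hyp = hyp[i] == hyp[i + k]
        disagreements += same_ref != same_hyp
    return disagreements / pairs
```

A perfect prediction scores 0; predicting a single segment for a document with two equal halves of three sentences (with `k=2`) scores 0.5.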
### Extractive Summarization with Segmentation Table 4 shows the ROUGE scores of different methods of integrating segmentation information into the extractive summarization model (ExtSum). As the objective is to evaluate the best integration method, we use the ground-truth annotation for segments, named _Oracle_. We found that both segment position and segment embedding help to improve the model. We also found that concatenating the segment information to the sentence inputs works better than adding it. Finally, using a flat position embedding has equal or slightly better results than using a hierarchical position as proposed in (Kang et al., 2017). We compare the results of unsupervised and supervised methods with the current SOTA (Kang et al., 2017) (Table 5). We use the best model: ExtSum with concatenation of segment position and embedding. We also include the results of ExtSum with _Oracle_ segmentation to define the upper limit of this method. We measure the ROUGE scores together with the lead bias metrics _D-early_, _D-middle_ and _D-late_ described in Section 5.3. The unsupervised method actually decreases the scores for summarization. The low accuracy of the unsupervised segmentation could be adding noise instead of helping. The supervised segmentation method shows improvement across all metrics, in particular in the middle and late parts of the document as shown by _D-middle_ and _D-late_. To further analyze these results, we plot the relative position of extracted sentences in the document (Figure 2). We can see that adding segmentation information makes the ExtSum model extract fewer sentences from the middle part, and more from the early and late parts. This is consistent with the Arxiv dataset, where the introduction and conclusions contain the most relevant information for the summary. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Op.** & **R1** & **R2** & **RL** \\ \hline ExtSum & & 48.91 & 20.62 & 43.85 \\ Seg. Position & Add.
& 49.28 & 20.86 & 44.18 \\ Seg. Embedding & Add. & 49.01 & 20.68 & 43.93 \\ Seg. Pos. + Embed. & Add. & 49.25 & 20.89 & 44.14 \\ Seg. Pos. + Embed. & Concat. & **49.49** & **21.04** & **44.34** \\ Seg. Pos. HiStruct + Embed. & Concat. & 49.46 & 21.01 & 44.31 \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of methods to integrate segmentation information in the extractive summarization model. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c|}{**0.5**} & \multicolumn{2}{c}{0.6} \\ \hline **Window size** & Pk & WinDiff & Pk & WinDiff & Pk & WinDiff \\ \hline 1 & 0.491 & 0.535 & 0.469 & 0.506 & 0.494 & 0.509 \\ **5** & 0.419 & 0.483 & **0.418** & **0.461** & 0.455 & 0.482 \\ 10 & 0.469 & 0.531 & 0.476 & 0.514 & 0.492 & 0.537 \\ 15 & – & – & – & – & 0.490 & 0.516 \\ \hline \hline \end{tabular} \end{table} Table 2. Hyper-parameter tuning for the unsupervised text segmentation model. \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **Pk** & **WinDiff** \\ \hline Random & 0.544 & 0.703 \\ Even & 0.503 & 0.516 \\ Unsupervised segmentation & 0.403 & 0.437 \\ Supervised segmentation & **0.183** & **0.224** \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison of text segmentation methods. ## 7. Conclusions In this paper we evaluated two text segmentation methods to detect sections in documents: an unsupervised method that uses similarity scores between adjacent text blocks, and a supervised method that detects, sentence by sentence, when a new section starts. We combine these predictions in an extractive summarization model to boost its accuracy. We evaluated a series of strategies to combine the segment information on summarization. We show that the supervised segment predictions improve the ROUGE scores of the summarization. According to our analysis, segmentation helps to detect the sentences of the most relevant sections in the dataset: introduction and conclusions (at the beginning and end).
However, the maximum improvement we can get using this method is about 1 ROUGE point, as shown by our experiments using _Oracle_ sections. This improvement is significant but limited. Future work could include more elaborate information from the document, like hierarchical structure, section titles, and discourse information. An end-to-end method that combines both summarization and segmentation could also be explored.
2305.04502
MO-DEHB: Evolutionary-based Hyperband for Multi-Objective Optimization
Hyperparameter optimization (HPO) is a powerful technique for automating the tuning of machine learning (ML) models. However, in many real-world applications, accuracy is only one of multiple performance criteria that must be considered. Optimizing these objectives simultaneously on a complex and diverse search space remains a challenging task. In this paper, we propose MO-DEHB, an effective and flexible multi-objective (MO) optimizer that extends the recent evolutionary Hyperband method DEHB. We validate the performance of MO-DEHB using a comprehensive suite of 15 benchmarks consisting of diverse and challenging MO problems, including HPO, neural architecture search (NAS), and joint NAS and HPO, with objectives including accuracy, latency and algorithmic fairness. A comparative study against state-of-the-art MO optimizers demonstrates that MO-DEHB clearly achieves the best performance across our 15 benchmarks.
Noor Awad, Ayushi Sharma, Philipp Muller, Janek Thomas, Frank Hutter
2023-05-08T06:53:40Z
http://arxiv.org/abs/2305.04502v2
# MO-DEHB: Evolutionary-based Hyperband for Multi-Objective Optimization ###### Abstract Hyperparameter optimization (HPO) is a powerful technique for automating the tuning of machine learning (ML) models. However, in many real-world applications, accuracy is only one of multiple performance criteria that must be considered. Optimizing these objectives simultaneously on a complex and diverse search space remains a challenging task. In this paper, we propose MO-DEHB, an effective and flexible multi-objective (MO) optimizer that extends the recent evolutionary Hyperband method DEHB. We validate the performance of MO-DEHB using a comprehensive suite of 15 benchmarks consisting of diverse and challenging MO problems, including HPO, neural architecture search (NAS), and joint NAS & HPO, with objectives including accuracy, latency and algorithmic fairness. A comparative study against state-of-the-art MO optimizers demonstrates that MO-DEHB clearly achieves the best performance across our 15 benchmarks. ## 1 Introduction Designing the architecture and tuning hyperparameters for complex machine learning (ML) methods can be costly, time-consuming, and error-prone. In recent years, there has been a significant amount of research on methods such as hyperparameter optimization (HPO) [13] and Neural Architecture Search (NAS) [1] to address these challenges. Various methods have been developed to efficiently solve these tasks, however, many of them only focus on optimizing a single objective, such as predictive accuracy. In real-world applications, practitioners often need to optimize multiple objectives that may conflict with each other, such as finding a smaller model with minimal drop in predictive performance. This can be a difficult task, particularly when deploying models on resource-constrained devices. Another omnipresent example is ensuring predictive performance while guaranteeing a certain level of fairness. 
This poses a challenge, and significant trade-offs between performance and different fairness notions need to be explored [16]. Despite efforts to adapt single-objective (SO) optimizers for multi-objective (MO) settings, there have been relatively few studies that have examined the use of these methods for HPO and NAS in complex search spaces. Because HPO and NAS require a lot of computational resources when executed naively, multi-fidelity (MF) techniques have gained popularity. These methods exploit cheap approximations of the objective function, such as evaluations at fewer epochs or on smaller data subsamples, to speed up the optimization. Successive Halving (SH) [1] is a simple yet effective MF method that allocates resources to configurations that perform sufficiently well, promoting them to run for larger computational budgets, rather than evaluating poor ones for the maximum resource budget. The Hyperband (HB) [1] algorithm instantiates SH with different levels of its lowest fidelity, to guard against sub-optimal choices of fidelity levels. While SH and HB are effective in choosing the fidelity level of the configurations they evaluate, they rely on simple random search for choosing those configurations and therefore do not learn over the course of the algorithm. To address this issue, BOHB [1] introduces the use of Bayesian Optimization (BO) via Tree Parzen Estimates (TPE) [1], and DEHB [1] uses an evolutionary algorithm (EA) based on Differential Evolution (DE) [12] to further improve the search at each budget level. An SO optimizer can be extended to handle MO by using (1) a scalarization technique (i.e., optimizing a linear combination of multiple competing objectives as a single objective) or (2) non-dominated sorting (NDS) (i.e., approaches that take the geometry of the whole Pareto front into account).
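To make the SH resource allocation described above concrete, here is a small sketch of its geometric schedule. This is illustrative only; the function name is ours and the sketch abstracts away how configurations are evaluated and ranked.

```python
def sh_schedule(n_configs, b_min, b_max, eta=3):
    """Successive Halving: start many configurations on the cheapest fidelity,
    keep roughly the top 1/eta at each rung while multiplying the budget by eta."""
    rungs = []
    n, b = n_configs, b_min
    while b <= b_max:
        rungs.append((n, b))
        n = max(1, n // eta)  # promote the top 1/eta configurations
        b *= eta
    return rungs
```

For example, `sh_schedule(9, 3, 27)` yields `[(9, 3), (3, 9), (1, 27)]`, matching the \(\eta=3\), \(b_{min}=3\), \(b_{max}=27\) bracket used later in the DEHB illustration.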
Although scalarization is a simple and convenient technique in MO, it has multiple shortcomings: (1) it captures only restricted Pareto front shapes, (2) linear scalarization can cause failures for non-convex Pareto fronts, (3) it encourages single-direction exploration, (4) it fails to recover the full Pareto front if the weights are not chosen carefully and (5) it is greatly affected by re-scaling the values of competing objectives. To mitigate the aforementioned issues of scalarization methods, in this work we extend DEHB to optimize multiple objectives by the use of NDS. To summarize, our main contributions are: 1. We extend the HB component in DEHB to MO considering two variants: (1) NSGA-II and (2) EpsNet. 2. We adapt the evolutionary search component in DEHB to handle MO by incorporating NDS and dominated hypervolume contribution in the DE selection strategy to approximate the desired trade-offs between the objectives. 3. We comprehensively evaluate our two MO-DEHB variants on a wide range of 15 MO problems, and demonstrate that they establish a new state of the art in MO compared to many state-of-the-art baselines. The paper proceeds as follows: Section 2 presents related work. Section 3 provides background on MO, a summary of MO evolutionary algorithms (EAs), and an overview of DEHB. Section 4 presents our proposed method, MO-DEHB. Section 5 empirically evaluates MO-DEHB on a broad range of 15 benchmarks from three benchmark families, showing that it defines a new state-of-the-art performance. Finally, Section 6 concludes the paper. ## 2 Related Work BO and EAs are popular choices for solving MO HPO problems. MO EAs are usually based on Pareto dominance (e.g., NSGA-II [4]), decomposition (e.g., MOEA/D [13]) or indicators (e.g., SMSEMOA [1]) [1].
MO BO algorithms are usually based on scalarization (e.g., ParEGO [12]), aggregating acquisition functions (e.g., SMS-EGO [10]), multiple acquisition functions (e.g., MultiEGO [11]) or information-theoretic approaches (e.g., PESMO [13]). A recent review of these methods and their use in MO HPO can be found in [15]. However, these methods do not generally support multi-fidelity evaluations. Recent work introduces the first MO HB method by using a simple scalarization method [1] or NDS [14] to overcome the shortcomings of scalarization methods. The NDS approach has been used to jointly optimize hyperparameters and hardware configuration [21]. ## 3 Background ### Multi-objective Optimization A multi-objective optimization problem aims to minimize a function \(f:\Lambda\rightarrow\mathds{R}^{n}\) over the search space \(\Lambda\), where \(n\) is the number of objectives: \[\min_{\lambda\in\Lambda}f(\lambda)=\min_{\lambda\in\Lambda}\left(f_{1}(\lambda ),f_{2}(\lambda),\ldots,f_{n}(\lambda)\right). \tag{1}\] In our application this would represent \(n\) different performance measures of a machine learning model, configured by the hyperparameter vector \(\lambda\). One objective is the estimated generalization error and others can, among other things, be related to model size, inference time and fairness. Given two configurations \(\lambda^{(1)},\lambda^{(2)}\in\Lambda\), we say \(\lambda^{(1)}\succ\lambda^{(2)}\) if \(\lambda^{(2)}\) is dominated by \(\lambda^{(1)}\), that is, \(f_{i}(\lambda^{(2)})\geq f_{i}(\lambda^{(1)})\ \forall i\in[n]\) and \(f_{j}(\lambda^{(2)})>f_{j}(\lambda^{(1)})\) for at least one \(j\in[n]\). Configurations \(\lambda^{*}\) that are not dominated by any other candidates are called Pareto optimal, and the set of Pareto optimal configurations forms the Pareto front.
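The dominance relation defined above (for minimization) can be written directly in code; this is a generic sketch, not tied to the paper's implementation.

```python
def dominates(f_a, f_b):
    """True if objective vector f_a dominates f_b under minimization:
    f_a is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))
```

Note that two configurations with a trade-off between objectives are incomparable: neither dominates the other, so both can be Pareto optimal.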
Any MO algorithm aims to find the true Pareto set or an approximation set of non-dominated solutions. In practice, it is rarely possible to compute the true Pareto set due to the complexity of the search space \(\Lambda\), potential noise in the objective functions and the difficulty in optimizing multiple competing objectives. Hypervolume (HV) is one of the most commonly used quality indicators in MO [12, 13]. The hypervolume indicator \(\mathcal{H}(A)\) of A is the measure of the region weakly dominated by A and bounded above by a reference point \(r\): \[\mathcal{H}(A)=\Lambda(\lambda\in\mathcal{R}^{d}|\exists p\in A:p\succeq \lambda\wedge\lambda\succeq r), \tag{2}\] where \(\Lambda\) is the Lebesgue measure. HV has been shown to promote the convergence towards the Pareto front, as well as to ensure a representative distribution of points along the front. Maximizing HV is equivalent to identifying the Pareto set, and with a limited number of configurations, maximizing HV metric results in subsets of the Pareto front that are evenly distributed [1, 1]. ### Multi-objective Evolutionary Algorithms Non-dominated sorting (NDS) is a widely used method in multi-objective EAs [13]. It is used to sort candidates based on their non-domination level, which is determined by partitioning the objective space into fronts. NDS partitions an objective space set into fronts \(F_{1}\prec F_{2}\prec\cdots\prec F_{n}\) such that a configuration \(\lambda^{1}\in F_{i}\) outperforms another candidate \(\lambda^{2}\in F_{j}\) with respect to all objectives if \(i<j\). We can easily compute the partitioning by computing all non-dominated points \(F_{1}\), removing them, then computing the next layer of non-dominated points \(F_{2}\), and so on. The candidates within the same front are further sorted by their crowding distance [4], which is calculated by comparing the distance between a candidate and its two nearest neighbors for each objective function. 
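The peeling procedure described above (compute the non-dominated points \(F_1\), remove them, compute \(F_2\), and so on) can be sketched as follows. This is a straightforward, self-contained illustration with quadratic cost per front, not an optimized implementation.

```python
def dominates(f_a, f_b):
    # Minimization: no worse everywhere, strictly better somewhere.
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def non_dominated_sort(points):
    """Partition the indices of `points` into fronts F1 < F2 < ... by
    repeatedly peeling off the current non-dominated set."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For instance, for the points \([1,4], [2,2], [4,1], [3,3], [4,4]\), the first three are mutually incomparable and form \(F_1\), while \([3,3]\) and \([4,4]\) fall on the second and third fronts.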
To calculate the crowding distance, we perform the following steps for every objective \(f_{j}\): **(i)** Sort points by \(f_{j}\). **(ii)** Normalize scores to [0,1]. **(iii)** The boundary solutions (solutions with the smallest and largest function values) are assigned an infinite distance value. Therefore, they are always selected. **(iv)** Each remaining point is assigned a distance score, which is the distance between its two nearest neighbors w.r.t. the sorting of \(f_{j}\). The overall crowding-distance value is calculated as the sum of the individual distance values corresponding to each objective. Another approach to sort candidates within the same front is the EpsNet exploration strategy [21]. This strategy starts by removing the first point from the "front" set (which represents all points on the front that we aim to sort) and placing it in an empty set, referred to as the "ranked" set. Subsequently, the remaining points in the "front" set are iteratively selected based on their Euclidean distance from the points already in the "ranked" set. Specifically, the strategy selects the point that has the highest Euclidean distance with respect to its closest point in the "ranked" set. The selected point is then removed from the "front" set and placed into the "ranked" set. This process continues iteratively until all points have been ranked. By selecting the point that has the highest distance to its closest point in the "ranked" set, the strategy ensures that the selected candidates are as dissimilar as possible. This can be useful for exploring different regions of the solution space and for identifying trade-offs between different objectives.
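The EpsNet ranking just described can be sketched as a greedy max-min distance selection. This is our own minimal rendering of the strategy; the actual implementation in [21] may differ in details such as the choice of the starting point.

```python
import math

def epsnet_order(points):
    """Greedily order the points of one front: repeatedly pick the point
    whose Euclidean distance to its closest already-ranked point is largest."""
    ranked = [0]  # start with the first point of the front
    remaining = list(range(1, len(points)))
    while remaining:
        best = max(remaining,
                   key=lambda i: min(math.dist(points[i], points[j])
                                     for j in ranked))
        ranked.append(best)
        remaining.remove(best)
    return ranked
```

For example, given the collinear points \([0,0], [1,0], [10,0]\), the far point \([10,0]\) is ranked before the nearby \([1,0]\), spreading the selected candidates across the front.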
DEHB fulfills the desiderata of a good HPO optimizer, being: (1) conceptually simple, (2) computationally cheap, (3) strong in final and anytime performance and (4) effective in taking advantage of parallel resources. In addition, DEHB is very effective in optimizing high-dimensional and discrete search spaces. A key design principle of DEHB is to share information across the runs it executes at various fidelities. Unlike HB, which only promotes the top-performing configurations from a lower fidelity to a higher one, DEHB maintains a sub-population for each fidelity and uses an evolutionary search based on DE. Each sub-population is evolved and updated using mutation, crossover and selection operations. Figure 1 illustrates the framework of a DEHB iteration where \(\eta=3\), _minimum fidelity_ \(b_{min}=3\) and _maximum fidelity_ \(b_{max}=27\). The first SH bracket of a DEHB iteration initializes all the sub-populations randomly and follows the SH subroutine as in vanilla HB. Starting from the second iteration, DEHB performs a modified SH using DE evolution. For each SH iteration (top to bottom), DEHB collects the top-performing configurations for each fidelity level in a parent pool, which is used to select the parents for the mutation instead of using the current sub-population itself. By doing so, the DE evolution can still incorporate information from the good-performing region w.r.t. the lower fidelity. When the size of a sub-population is small1, which usually occurs at the full fidelity level, a global population that consists of all the configurations from all the sub-populations is used to select additional parents to perform mutation. Footnote 1: if the population size is less than 3, i.e., the minimum number of parents needed to perform the used _rand1_ mutation strategy in DE The crossover then combines each individual and its corresponding mutant vector to generate the final offspring.
Next, the selection operation updates the sub-population on the current fidelity if the new offspring is better than its parent. The sub-populations (right to left) are forwarded and evolved from one SH iteration to the next. The final evolved sub-populations in each SH are then forwarded to start a new DEHB iteration. ## 4 MO-DEHB: Algorithm Description Multi-objective EAs usually navigate the objective space and optimize the Pareto front using NDS, as discussed in Section 3.2. To leverage the suitability and flexibility of DEHB for the MO domain, we follow a similar approach and apply two changes. First, we follow an approach similar to that proposed in [13] to adapt the HB component for selecting the top-performing configurations in the MO domain using NDS, with the use of NSGA-II or EpsNet to rank the Pareto fronts, resulting in two variants. Second, we extend the evolutionary search using DE to MO based on NDS and dominated HV. The following subsections explain these changes in more detail. ### Multi-objective Hyperband The single-objective HB starts by instantiating an SH subroutine by making use of the input parameters (i.e., minimum fidelity, maximum fidelity and \(\eta\)) that are used to trade off the number of configurations and fidelity. The first iteration of SH samples \(N\) random configurations with a small initial budget, evaluates them and forwards the top \(1/\eta\) configurations to the next higher budget. This process is repeated until the highest budget is reached. Subsequent SH iterations promote the top \(1/\eta\) configurations from the lower fidelity to the parent pool, which is further used in DE evolution. To select the top-performing configurations, we adapt HB for the MO domain by employing a methodology similar to that presented in [13]. This approach is based on NDS to rank the configurations in the lowest budget by taking into account the global geometry of the Pareto front approximations in two stages.
Referring to Figure 1, for each SH bracket we first rank the population into multiple fronts using NDS. Then, to rank the candidates in the same front (i.e., to choose a subset of the population from a specific front) we employ one of two strategies: The first strategy is based on NSGA-II (we dub it MO-DEHB\({}_{NSGA-II}\)) which uses the crowding distance to select the candidates with the lower density in the objective space. The second strategy, EpsNet (giving rise to MO-DEHB\({}_{EPSNET}\)), aims to iteratively select candidates by maximizing the distance between them and the previously chosen configurations in the objective space. Both of these strategies aim to select a sparse set of candidates that effectively represents the front, with the goal of promoting diversity of the parent pool when we later use them to perform the mutation in DE search to find good solutions. Figure 1: DEHB iteration. For example, in the \(SH_{1}\) bracket in Figure 1, we rank the population, \(pop_{c_{1}}^{b_{1}}\), into multiple fronts using NDS. Let us assume \(pop_{c_{1}}^{b_{1}}\) is divided into three fronts \(F_{1}\), \(F_{2}\) and \(F_{3}\). Next, assuming the number of candidates to be promoted to the higher fidelity (which is determined by HB) is greater than the number of candidates in \(F_{1}\) but smaller than in \(F_{1}\cup F_{2}\), we select all the candidates from \(F_{1}\) and select the remaining ones from \(F_{2}\) by either crowding distance or EpsNet. ### Multi-objective DE In DEHB, the configurations at each fidelity level form a sub-population and are evolved using an evolutionary search based on the following DE operations: mutation, crossover and selection. For each SH iteration (top to bottom in Figure 1), the parent pool is used to select three random parents to perform the mutation operation (i.e., _rand1_ mutation strategy [11]) and generate a new mutant configuration.
We then apply the binomial crossover between the generated mutant and the target configuration to generate the final offspring. Next, the offspring is evaluated and compared to its target configuration, and the better of the two is kept. To adapt DE to MO, we extend the selection operation to handle multiple objectives, as we present in Algorithm 1. We utilize the global population in DEHB that contains the most recent sub-populations that have evolved across all fidelities. We incorporate the new offspring into it and sort it according to NDS, resulting in multiple fronts. The front rankings of both the target and the offspring are then determined, which helps in positioning them in the objective space with their respective trade-offs. If the offspring is on a lower-ranked (i.e., better) front than the target, it replaces the target in the sub-population. If the target is on a lower-ranked front, the sub-population is not updated. If they are on the same front, then the one with the least dominated hypervolume in the last front is chosen to be replaced by the offspring, as it has the least contribution in approximating the desired trade-offs.
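The selection rule just described might be sketched in Python as follows. This is a simplified, self-contained illustration with our own names, not DEHB's implementation; in particular, the hypervolume-based tie-break on the last front is only indicated by a comment.

```python
def dominates(f_a, f_b):
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def mo_selection(parent, offspring, global_pop):
    """Return the survivor between parent and offspring by comparing their
    NDS front ranks over the global population (objective vectors)."""
    pool = global_pop + [parent, offspring]
    fronts = non_dominated_sort(pool)
    rank = {i: r for r, f in enumerate(fronts) for i in f}
    r_parent, r_offspring = rank[len(pool) - 2], rank[len(pool) - 1]
    if r_offspring < r_parent:
        return offspring
    if r_parent < r_offspring:
        return parent
    # Tie: the paper instead replaces the member of the last front with the
    # least hypervolume contribution; we keep the offspring for simplicity.
    return offspring
```

For example, with a global population of one point \([2,2]\), an offspring \([1,1]\) that dominates the parent \([3,3]\) survives, and vice versa.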
```
Input:  ρ ← parent_pool      // parent pool from the lower fidelity
        pop ← population     // population for the current fidelity
        global_pop           // population across all fidelities
Output: evolved population
for parent in pop do
    mutant ← generate_mutant(ρ)
    offspring ← crossover(mutant, parent)
    f ← evaluate(offspring)
    C ← MO-selection(global_pop, parent, offspring)
    update_population(C, offspring, global_pop)
end for
return pop
Function MO-selection(global_pop, parent, offspring):
    [F1, ..., Fm] ← NDS(global_pop)
    R_parent, R_offspring ← get_front_ranking(parent, offspring, [F1, ..., Fm])
    if R_parent > R_offspring then
        return parent
    else if R_parent < R_offspring then
        return offspring
    else
        least_hv_contributor ← minimum_HV(Fm)
        return least_hv_contributor
```
**Algorithm 1** MO-DE ## 5 Experiments We evaluated MO-DEHB on a broad collection of MO problems with a total of \(15\) benchmarks from diverse domains: (i) NAS (9 benchmarks), (ii) joint NAS & HPO (5 benchmarks) and (iii) algorithmic fairness (1 benchmark). In the following subsections we explain our benchmarks, experimental setup and results in detail. ### Benchmarks #### Neural Architecture Search The complexity of ML models has led to a widespread use of NAS to find efficient architectures [10]. However, NAS methods often focus solely on achieving the optimal performance and do not take into account other important objectives such as latency. To address this, we conduct experiments on NAS-Bench-101 [21], NAS-Bench-1shot1 [11] and NAS-Bench-201 [4], which together comprise 9 tabular benchmarks. Our experiments aim to optimize for both validation accuracy and model size. For more information about these benchmarks, refer to the HPOBench suite [10].
#### Joint NAS & HPO While there has been extensive research on optimizing architecture and model hyperparameters separately, few studies have investigated optimizing them together. In this experiment, we use two raw benchmarks. The first one, proposed by [11], involves tuning Convolutional Neural Networks (CNNs) composed of a variable number of fully-connected layers and a 15D mixed and hierarchical search space. These networks are trained on the Oxford-Flowers dataset [12] and Fashion-MNIST [20]. For these two datasets, we optimize validation accuracy and model size. For more information on the datasets and search spaces, refer to [11]. The second benchmark uses three surrogate benchmarks [11] from the recently introduced JAHS-Bench-201 suite [1] (CIFAR-10, Colorectal-Histology and Fashion-MNIST), and we optimize validation accuracy and latency. #### Algorithmic Fairness We adopt the experimental setup from [10] and train a multi-layer perceptron (MLP) to optimize both predictive accuracy and fairness. Specifically, we use the fairness metric of difference in statistical parity (DSP), which measures the absolute difference in predicted outcome between two subgroups distinguished by a protected attribute (i.e., sex). The MLP is trained on the Adult dataset [12]. For more information about this raw benchmark, refer to [10]. We provide more details about the benchmarks in Appendix B. ### Experimental Setup **Evaluation protocol** We ran our experiments for \(10\) repetitions for each benchmark using different seeds. We set the wallclock runtime limit to \(24\)h for raw benchmarks (Flower, Fashion and Adult) and \(4\)h for surrogate and tabular benchmarks. For all benchmarks, we further introduce a limit on the number of target algorithm executions, determined by \(\left\lceil 20+80\times\sqrt{\text{\#HPs}}\right\rceil\), similar to [14], where #HPs denotes the number of hyperparameters.
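The evaluation limit above is simple enough to state in one line of code (a sketch; the function name is ours):

```python
import math

def evaluation_limit(n_hps):
    """Limit on target algorithm executions: ceil(20 + 80 * sqrt(#HPs))."""
    return math.ceil(20 + 80 * math.sqrt(n_hps))
```

For instance, a 4-dimensional search space is allotted ceil(20 + 160) = 180 evaluations.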
We report the difference in log HV from the best possible Pareto set to the one currently observed by an optimizer: \(\text{LogHVDiff}(\text{HV})=\log_{10}(\text{HV}-\text{HV}_{emp})\), where \(\text{HV}_{emp}\) is the normalized HV of all observed configurations over all the runs. **Hardware** For all surrogate and tabular benchmarks, we run experiments on a compute cluster equipped with Intel(R) Xeon(R) Gold 6242 CPUs @ 2.80GHz. For the raw benchmarks, we ran all jobs on a compute cluster equipped with NVIDIA GeForce RTX 2080Ti. **Baselines** We compare our two proposed variants (MO-DEHB\({}_{NSGA-II}\) and MO-DEHB\({}_{EPSNET}\)) against state-of-the-art baselines. We use two multi-objective BO optimizers: QNPAREGO [1] and SMAC [1], and three multi-objective EAs: NSGA-III [1], AGE-MOEA [1] and MOEA/D [13]. We further add random search (RS) as a baseline. We run these baselines with their default settings and report those, together with more information about the baselines, in Appendix A. ### Results In this section, due to space limitations, we present the results for 6 representative benchmarks: 2 from NAS (Imagenet and Cifar100 from NAS-Bench-201), 3 from joint NAS & HPO (Flowers, JAHS-Fashion and JAHS-Colorectal-Histology) and algorithmic fairness on the Adult dataset. The qualitatively similar plots for all the benchmarks can be found in Appendix C. Figure 2 presents the difference in dominated HV of the Pareto front approximations with respect to the combined front approximation over time on the 6 selected benchmarks for the different baseline methods. Overall, the MO-DEHB variants clearly perform best. As seen in Figure 2, both MO-DEHB variants outperform all other methods in benchmarks such as JAHS-Fashion and fairness-Adult, showing exceptional performance throughout the optimization process.
However, in benchmarks such as NAS-201-Cifar100 and JAHS-Colorectal-Histology, the MO-DEHB variants performed poorly initially but later became competitive with other methods. Interestingly, RS performed well on the NAS-201-Imagenet benchmark, similar to QNPAREGO, and remained competitive until the end of optimization. Additionally, SMAC performed well during the first phase of exploration on this benchmark. The performance of the MO-DEHB variants varied in some benchmarks, such as on the Flower dataset, where MO-DEHB\({}_{EPSNET}\) performed well initially, but later MO-DEHB\({}_{NSGA-II}\) performed better, although it was outperformed by AGE-MOEA. Figure 3 visualizes a summary of attainment surfaces to assess the methods' ability to cover the entire Pareto front [12]. We show the first, median and ninth attainment surfaces to visualize the distribution of Pareto fronts achieved by the methods; this simplifies visually inspecting the differences compared to plotting all 10 attainment surfaces. In Figure 3, for the Adult dataset, it can be seen that MO-DEHB performs consistently, as indicated by the small difference between the upper and lower bound. While SMAC performs better than the MO-DEHB variants on the Fashion and Colorectal-Histology benchmarks from JAHS-Bench, the MO-DEHB variants still exhibit consistent and good performance. On the Imagenet benchmark, all methods perform similarly. This may be due to the high evaluation limits and the tabular nature of the benchmark. ### Results Summary We now compare the MO-DEHB variants and the other baseline methods across all the benchmarks used. We compute the average rank by hypervolume over time for each method in Figure 4. Both MO-DEHB variants show exceptional performance and perform very well early on. Later during the optimization process, and for a short period of time, RS, SMAC and QNPAREGO perform competitively. At the end, both MO-DEHB variants show superior performance.
We also report the mean and standard deviation of the log HV difference over 10 repetitions across all the benchmarks for all the compared algorithms in Table 1. ## 6 Conclusion In this paper, we extended DEHB, a very effective evolutionary variant of Hyperband, to multi-objective optimization in the context of HPO. We use techniques that optimize the Pareto geometry based on non-dominated sorting and propose two variants: (1) MO-DEHB\({}_{NSGA-II}\) and (2) MO-DEHB\({}_{EPSNET}\). We evaluate the performance of MO-DEHB using comprehensive benchmark suites consisting of diverse and challenging MO problems from three benchmark families: NAS, NAS+HPO and algorithmic fairness. We demonstrated that MO-DEHB outperforms state-of-the-art MO optimizers on most of the benchmark problems. These results suggest that MO-DEHB is a promising new MO optimizer that is well-suited to a wide range of real-world optimization problems. For future work, we aim to propose an asynchronous version of MO-DEHB that takes advantage of parallel resources as an improved version of the original parallel settings in DEHB. Our reference implementation of MO-DEHB is available at [https://anonymous.4open.science/r/MODEHB-F38E](https://anonymous.4open.science/r/MODEHB-F38E). ## Ethical Statement There are no ethical issues.
2306.02423
Tachyon constant-roll inflation in Randall-Sundrum II cosmology
We study inflation in a model with constant second slow-roll parameter $\eta$. In this case, the Hubble expansion rate equation has analytical solutions describing four possible, nontrivial inflation scenarios. The evolution of the inflaton governed by a tachyon field is studied in the framework of the standard and Randall-Sundrum II cosmology. The attractor behavior of the solution is briefly demonstrated. Finally, the calculated values of the parameters $n_{\rm s}$ and $r$ are compared with observational data.
Marko Stojanovic, Neven Bilic, Dragoljub D. Dimitrijevic, Goran S. Djordjevic, Milan Milosevic
2023-06-04T17:55:05Z
http://arxiv.org/abs/2306.02423v2
# Tachyon constant-roll inflation in Randall-Sundrum II cosmology ###### Abstract We study inflation in a model with constant second slow-roll parameter \(\eta\). In this case, the Hubble expansion rate equation has analytical solutions describing four possible, nontrivial inflation scenarios. The evolution of the inflaton governed by a tachyon field is studied in the framework of the standard and Randall-Sundrum II cosmology. The attractor behavior of the solution is briefly demonstrated. Finally, the calculated values of the parameters \(n_{\rm s}\) and \(r\) are compared with observational data. ## 1 Introduction The theory of cosmological inflation is the leading theory in describing the early universe. Inflation solves the flatness, horizon, and other problems in standard cosmology [1]. Besides, the quantum fluctuations of the field which governs inflation seed the large-scale structure of the universe. The physical mechanism that drives inflation is not entirely known and subject to speculation. However, it is widely accepted that at least one scalar field (inflaton) can describe the inflation mechanism. One of the candidates for inflaton is a tachyon field, whose origin is related to the instability of the perturbative vacuum of string theory [2]. This paper aims to analyze the constant-roll inflation with the slow-roll parameter \(\eta\) being constant. The tachyon field describes the dynamics of inflation in the framework of the braneworld cosmology, based on the second Randall-Sundrum model (RSII) [3]. This work is motivated by the model introduced in Ref. 4, where the idea of constant roll was introduced for the canonical scalar field in the framework of RSII cosmology. As a novelty, we apply this approach to the RSII cosmology with a tachyon field. It is known that a tachyon field and the corresponding inflationary expansion can be studied within RSII cosmology [5, 6]. Therefore, as a logical step, we extend the study to include constant-roll inflation. 
In this setup, the second slow-roll parameter \(\eta\) is chosen to be constant as it is usually done in the literature [7, 8, 9]. However, this choice is applicable only to a canonical scalar field. In contrast, if applied to the tachyon field inflation, the same choice does not lead to a correct analytical solution because the inflation stage never ends. Instead, we propose a model-independent definition of \(\eta\) in terms of the Hubble expansion rate and its derivatives. With this choice, the Hubble expansion rate can be found analytically. In the following, we calculate the inflation parameters and compare the predictions of the model with Planck results [10]. This paper is organized as follows. In Section 2, we introduce the slow-roll parameters. In Section 3, we calculate the parameters \(n_{\rm s}\) and \(r\) for different solutions of the equation for the Hubble rate. Section 4 is devoted to the constant-roll tachyon inflation in standard and RSII cosmology. Finally, the paper is concluded in Section 5. ## 2 Slow-roll parameters The slow-roll approximation is a common way to analyze the inflation models. In order to carry out this procedure, the slow-roll parameters have been introduced [11]. The Hubble slow-roll parameters are defined as \[\epsilon=-\frac{\dot{H}}{H^{2}}, \tag{1}\] \[\eta=-\frac{\ddot{H}}{2H\dot{H}}. \tag{2}\] In the slow-roll regime, the inflaton field is changing according to the slow-roll assumption \[\epsilon\ll 1\quad\mbox{and}\quad\eta\ll 1. \tag{3}\] The parameter \(\eta\) in the model with canonical scalar field \(\phi\) takes the form [7] \[\eta=-\frac{\ddot{\phi}}{H\dot{\phi}}. \tag{4}\] It is helpful to define the slow-roll parameter hierarchically [12] \[\varepsilon_{0} \equiv H_{*}/H, \tag{5}\] \[\varepsilon_{i+1} \equiv \frac{d\ln|\varepsilon_{i}|}{dN},\quad i\geq 0. \tag{6}\] These parameters satisfy the recurrence relation \[\dot{\varepsilon}_{i}=H\varepsilon_{i}\varepsilon_{i+1}. 
\tag{7}\] Here \(H_{*}\) is the Hubble parameter at some chosen time, and \(N\) is the number of e-folds in the exponential expansion of the universe, defined as \[N=\int_{t_{\rm i}}^{t_{\rm f}}Hdt. \tag{8}\] Here \(t_{\rm i}\) and \(t_{\rm f}\) denote the times of the beginning and the end of inflation, respectively. The slow-roll parameters defined by (5) and (6) are independent of the field driving inflation. Hence, we will express the parameter \(\eta\) through \(\varepsilon_{i}\). Combining the definitions of the first two slow-roll parameters \[\varepsilon_{1} = -\frac{\dot{H}}{H^{2}}, \tag{9}\] \[\varepsilon_{2} = \frac{\dot{\varepsilon}_{1}}{\varepsilon_{1}H}, \tag{10}\] with (1) and (2), one finds [13] \[\epsilon\equiv\varepsilon_{1}, \tag{11}\] \[\eta=\varepsilon_{1}-\frac{1}{2}\varepsilon_{2}. \tag{12}\] The expression (12) is model-independent and allows us to analyze and compare various inflation models. ## 3 Constant-roll inflation Equations for inflationary dynamics are simplified in the slow-roll approximation, and in some cases can be solved analytically. However, there are cases when this approach cannot be applied. When a potential possesses an extremely flat region or an inflection point, the slow-roll conditions are violated, and inflation passes through the ultra-slow-roll regime [14]. In this case, the second slow-roll parameter \(\eta=-\ddot{\phi}/(H\dot{\phi})=3\) becomes constant. The assumption that the parameter \(\eta\) can have an arbitrary constant value corresponds to constant-roll inflation [15, 16]. The idea of a constant parameter \(\eta\) during inflation has been generalized to other slow-roll parameters being constant [17]. This paper aims to study inflation with the constant parameter \(\eta\). Using expression (7) one finds \[\varepsilon_{2i}=2\varepsilon_{1}-2\eta,\quad\varepsilon_{2i+1}=2\varepsilon_{1},\quad i\geq 1, \tag{13}\] which can be proved by induction. 
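Since the relations (13) are used throughout, it is worth spelling out the induction; a short sketch (our own derivation, using only (6), (7) and (12) with \(\eta\) constant):

```latex
% Base case: (12) with constant \eta gives \varepsilon_2 = 2\varepsilon_1 - 2\eta.
% Differentiating it and using (7), \dot{\varepsilon}_1 = H\varepsilon_1\varepsilon_2:
\dot{\varepsilon}_2 = 2\dot{\varepsilon}_1 = 2H\varepsilon_1\varepsilon_2
\quad\Longrightarrow\quad
\varepsilon_3 = \frac{\dot{\varepsilon}_2}{H\varepsilon_2} = 2\varepsilon_1 .
% Inductive step: if \varepsilon_{2i+1} = 2\varepsilon_1, then
\varepsilon_{2i+2}
  = \frac{\dot{\varepsilon}_{2i+1}}{H\varepsilon_{2i+1}}
  = \frac{\dot{\varepsilon}_1}{H\varepsilon_1}
  = \varepsilon_2
  = 2\varepsilon_1 - 2\eta ,
% and applying (6) once more returns \varepsilon_{2i+3} = 2\varepsilon_1 .
```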
From expression (2) one finds a differential equation \[\ddot{H}+2\eta H\dot{H}=0. \tag{14}\] It is interesting to note that (14) is invariant under scaling \[H\rightarrow\ell H,\quad t\to t/\ell, \tag{15}\] where \(\ell\) has the dimension of length. From now on it is understood that the dimensionful quantities are measured in units of \(\ell\) to some power. For example, \(H\) is measured in units of \(\ell^{-1}\) and \(t\) in units of \(\ell\). A straightforward solution to (14) is provided by \[\dot{H}=-\eta H^{2}, \tag{16}\] yielding \[H=\frac{1}{\eta t+c}. \tag{17}\] Here \(c\) is an integration constant. It follows from (16) that \(\varepsilon_{1}=\eta=\mbox{const}\) and \(\varepsilon_{2}=0\), so this solution is trivial. In addition, one finds four nontrivial solutions to (14) [18] \[H_{1}(t) = -\frac{\beta}{\eta}\tan(\beta t+\gamma), \tag{18}\] \[H_{2}(t) = \frac{\beta}{\eta}\cot(\beta t+\gamma),\] (19) \[H_{3}(t) = \frac{\beta}{\eta}\tanh(\beta t+\gamma),\] (20) \[H_{4}(t) = \frac{\beta}{\eta}\coth(\beta t+\gamma), \tag{21}\] where \(\beta\) and \(\gamma\) are integration constants. Using the \(H_{1}\) solution, a straightforward integration of (8) gives \[N(t)=\frac{1}{\eta}\log\cos(\beta t+\gamma)+C, \tag{22}\] where \(C\) is another integration constant. Using (9) and (10), from (18) we find \[\varepsilon_{1}(t)=\frac{\eta}{\sin^{2}(\beta t+\gamma)}, \tag{23}\] \[\varepsilon_{2}(t)=2\eta\cot^{2}(\beta t+\gamma). \tag{24}\] Then, combining (22) with (23) and (24) we obtain \[\varepsilon_{1}(N)=\frac{\eta}{1-e^{2\eta(N-C)}}, \tag{25}\] \[\varepsilon_{2}(N)=\frac{2\eta e^{2\eta(N-C)}}{1-e^{2\eta(N-C)}}. \tag{26}\] From expressions (25) and (26), it is clear that the parameter \(\eta\) must be positive. Inflation ends approximately at the end of the slow-roll regime, i.e., when the first few horizon-flow parameters \(\varepsilon_{i}\) are close to 1. 
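As a quick numerical sanity check (ours, not part of the original analysis), one can verify by finite differences that \(H_{1}\) and \(H_{3}\) indeed solve (14); the values of \(\beta\), \(\gamma\) and the magnitudes of \(\eta\) below are illustrative choices:

```python
import math

def residual(H, eta, t, h=1e-4):
    """Finite-difference residual of eq. (14), Hdd + 2*eta*H*Hd, at time t."""
    Hd = (H(t + h) - H(t - h)) / (2 * h)
    Hdd = (H(t + h) - 2 * H(t) + H(t - h)) / h**2
    return Hdd + 2 * eta * H(t) * Hd

beta, gamma = 0.2, 0.3
eta_pos, eta_neg = 0.013, -0.013

def H1(t):
    # Eq. (18), relevant for eta > 0.
    return -beta / eta_pos * math.tan(beta * t + gamma)

def H3(t):
    # Eq. (20), relevant for eta < 0.
    return beta / eta_neg * math.tanh(beta * t + gamma)
```

The residuals vanish up to finite-difference error for any \(t\) at which the solutions are regular.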
It is natural to assume that inflation ends at the point \(t_{\rm f}\) where \(\varepsilon_{1}(t_{\rm f})=1\) and \(N=N_{\rm f}\). To fix the initial values \(\varepsilon_{1{\rm i}}\) and \(\varepsilon_{2{\rm i}}\), we assume \(N=0\) at the beginning of inflation (\(t_{\rm i}=0\)). From this condition, the constant \(C\) can be calculated using (25) \[C=N_{\rm f}-\frac{1}{2\eta}\log(1-\eta), \tag{27}\] and expressions for \(\varepsilon_{1}(N)\) and \(\varepsilon_{2}(N)\) become \[\varepsilon_{1}(N)=\frac{\eta}{1-(1-\eta)e^{2\eta(N-N_{\rm f})}}, \tag{28}\] \[\varepsilon_{2}(N)=\frac{2\eta(1-\eta)e^{2\eta(N-N_{\rm f})}}{1-(1-\eta)e^{2\eta(N-N_{\rm f})}}. \tag{29}\] In Fig. 1, we plot the evolution of the slow-roll parameters. Note that the \(H_{2}\) solution is obtained from \(H_{1}\) by replacing \(\gamma\rightarrow\gamma+\pi/2\). Then, we find \[\varepsilon_{1}(t)=\frac{\eta}{\cos^{2}(\beta t+\gamma)}, \tag{30}\] \[\varepsilon_{2}(t)=2\eta\tan^{2}(\beta t+\gamma), \tag{31}\] \[N(t)=\frac{1}{\eta}\log\sin(\beta t+\gamma)+C, \tag{32}\] which leads to the same expressions for parameters \(\varepsilon_{1}(N)\) and \(\varepsilon_{2}(N)\) as in the previous case, for \(H_{1}\). Applying the same procedure for \(H_{3}\) we obtain \[\varepsilon_{1}(t)=-\frac{\eta}{\sinh^{2}(\beta t+\gamma)}, \tag{33}\] \[\varepsilon_{2}(t)=-2\eta\coth^{2}(\beta t+\gamma), \tag{34}\] \[N(t)=\frac{1}{\eta}\log\cosh(\beta t+\gamma)+C. \tag{35}\] In this case, we find the same expressions for \(\varepsilon_{1}(N)\) and \(\varepsilon_{2}(N)\) as in equations (28) and (29). The solution \(H_{4}\) cannot provide a valid model for inflation because the parameters \[\varepsilon_{1}(t)=\frac{\eta}{\cosh^{2}(\beta t+\gamma)}, \tag{36}\] \[\varepsilon_{2}(t)=-2\eta\tanh^{2}(\beta t+\gamma), \tag{37}\] cannot be simultaneously positive. 
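A quick numerical check (ours) of (28): with the boundary conditions above, \(\varepsilon_{1}(N_{\rm f})=1\) by construction, and for the illustrative values \(\eta=-0.013\), \(N_{\rm f}=70\) the initial value comes out near \(0.0025\), the number later quoted in Section 4:

```python
import math

def eps1(N, eta=-0.013, Nf=70):
    """Eq. (28): first slow-roll parameter as a function of the e-fold number."""
    return eta / (1 - (1 - eta) * math.exp(2 * eta * (N - Nf)))
```

Evaluating `eps1(70)` returns 1 (inflation ends there), while `eps1(0)` gives roughly 0.0025.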
In order to compare the prediction of our inflationary model with observational data, we need to calculate the observational parameters, such as the scalar spectral index \(n_{\rm s}\) and the tensor-to-scalar ratio \(r\). At the lowest order in the slow-roll parameters, these parameters are given by[12] \[n_{\rm s}\simeq 1-2\varepsilon_{\rm 1i}-\varepsilon_{\rm 2i}, \tag{38}\] \[r\simeq 16\varepsilon_{\rm 1i}, \tag{39}\] where \(\varepsilon_{\rm 1i}=\varepsilon_{\rm 1}(t_{\rm i})\) and \(\varepsilon_{\rm 2i}=\varepsilon_{\rm 2}(t_{\rm i})\). In Fig. 2, we depict the results in the \(n_{\rm s}\)-\(r\) plane. Clearly, a better agreement with observations is obtained with the solutions in which the values of \(\eta\) are negative. The dependence of the parameters \(n_{\rm s}\) and \(r\) on \(\eta\) is shown in Fig. 3, from which one can conclude that the agreement with observations is better for negative and small \(\eta\). As the solutions in this section are model-independent, they could provide a feasible inflationary scenario in any model that satisfies the condition (12) with constant \(\eta\). We demonstrate this in the following section.

Figure 1: The slow-roll parameters \(\varepsilon_{1}\) (solid blue line) and \(\varepsilon_{2}\) (dashed red line) as a function of the e-fold number \(N\), for \(\eta=-0.013\) and \(N_{\rm f}=70\).

Figure 2: \(r\) versus \(n_{\rm s}\) diagram with observational constraints from the Planck mission [10]. Dashed line - prediction for models with \(\eta>0\) (the solutions \(H_{1}\) and \(H_{2}\)), \(0<\eta<0.01\); solid line - prediction for the model with \(\eta<0\) (the solution \(H_{3}\)), \(-0.03<\eta<0\); for \(N_{\rm f}=60\) (blue line) and \(N_{\rm f}=70\) (red line).

Figure 3: \(n_{\rm s}\) (left panel) and \(r\) (right panel) versus \(\eta\).

## 4 Constant-roll inflation with tachyon matter Here we analyze the constant-roll inflation with constant \(\eta\), with dynamics described by the tachyon field, in the framework of the RSII cosmology and standard cosmology. Our model is based on a braneworld scenario in which our universe can be considered as a brane, i.e., a four-dimensional hypersurface embedded in a higher-dimensional spacetime. The RSII model [3, 19] describes a universe containing two branes with opposite tensions, separated in the fifth dimension, where only gravity can propagate. Observers reside on the positive tension brane, and the negative tension brane is pushed off to infinity. ### The standard and the RSII cosmology with tachyon matter In a flat FLRW universe in standard cosmology, the Friedmann equations are of the form [12] \[H^{2}=\frac{8\pi}{3M_{4}^{2}}\rho, \tag{40}\] \[\dot{H}=-\frac{4\pi}{M_{4}^{2}}(\rho+p). \tag{41}\] On the other hand, the Universe expansion in RSII cosmology [3] is described by modified Friedmann equations [4, 20] \[H^{2}=\frac{8\pi}{3M_{4}^{2}}\rho(1+\frac{\rho}{2\lambda}), \tag{42}\] \[\dot{H}=-\frac{4\pi}{M_{4}^{2}}(1+\frac{\rho}{\lambda})(\rho+p), \tag{43}\] which apparently differ from the standard Friedmann equations. The parameter \(\lambda\) denotes the brane tension, which is related to the five-dimensional and four-dimensional Planck masses, \(M_{5}\) and \(M_{4}\), respectively: \[\lambda=\frac{3}{4\pi}\left(\frac{M_{5}^{3}}{M_{4}}\right)^{2}. \tag{44}\] Following Ref. [4], we consider the case when the energy density is much larger than the tension of the brane, i.e., \(\rho\gg\lambda\). As a consequence, equations (42) and (43) are simplified: \[H^{2}\simeq\frac{4\pi}{3M_{4}^{2}}\frac{\rho^{2}}{\lambda}, \tag{45}\] \[\dot{H}\simeq-\frac{4\pi}{M_{4}^{2}}\frac{\rho}{\lambda}(\rho+p). \tag{46}\] The dynamics of a tachyon field \(\theta\) is described by a Lagrangian of the Dirac-Born-Infeld (DBI) form [21]. 
In a homogeneous and isotropic background, the Lagrangian can be put in the form \[\mathcal{L}=-V(\theta)\sqrt{1-\dot{\theta}^{2}}. \tag{47}\] The energy density and pressure of the tachyon field are given by [12] \[\rho=\frac{V}{\sqrt{1-\dot{\theta}^{2}}}, \tag{48}\] \[p=-V\sqrt{1-\dot{\theta}^{2}}. \tag{49}\] Using the Hamilton-Jacobi formalism, we express the Hubble expansion rate as a function of the tachyon field \(H=H(\theta)\), and the time derivative of \(H\) via \(\dot{H}=H_{,\theta}\dot{\theta}\), where \(H_{,\theta}\) denotes the derivative of \(H\) with respect to \(\theta\). Then, combining the Friedmann equations, (40) and (41), or (45) and (46), with (48) and (49) one obtains \[\dot{\theta}=-\frac{n}{3}\frac{H_{,\theta}}{H^{2}}. \tag{50}\] Here, the integer \(n\) can take two values: \(n=1\) for RSII cosmology and \(n=2\) for standard cosmology. As expected, the expression for \(\dot{\theta}\) differs from the expression in a model with a canonical scalar field [4]\(\phi\) \[\dot{\phi}=-\frac{M_{5}^{3}}{4\pi}\frac{H_{,\phi}}{H}, \tag{51}\] suggesting that the model with a tachyon field may give a different prediction for the observational parameters. ### The constant-roll inflation with a tachyon field Next, we calculate the observational parameters \(n_{\rm s}\) and \(r\). Equation (14), using (50), can be transformed to a differential equation with respect to the tachyon field \(\theta\) \[H_{,\theta\theta}H-H_{,\theta}^{2}-3\frac{\eta}{n}H^{4}=0, \tag{52}\] with the solution of the form \[H(\theta)=\frac{2nC_{1}e^{\sqrt{C_{1}}(\theta+C_{2})}}{e^{2\sqrt{C_{1}}(\theta+C_{2})}-3\bar{\eta}C_{1}}, \tag{53}\] where \(\bar{\eta}=n\eta\). The integration constants \(C_{1}\) and \(C_{2}\) in expression (53) can be absorbed by rescaling \(H\), \(\theta\) and \(\bar{\eta}\). It is easy to check that, without loss of generality, we can set \(C_{1}=1\) and \(C_{2}=0\), yielding \[H=\frac{2ne^{\theta}}{e^{2\theta}-3\bar{\eta}}. 
\tag{54}\] One can combine (50) and (54) to find the time dependence of \(\theta\) and \(H\). First, one finds a simple expression \[\dot{\theta}=\frac{1}{6}(e^{\theta}+3\bar{\eta}e^{-\theta}). \tag{55}\] This may easily be integrated, yielding \[e^{\theta}=\sqrt{3\bar{\eta}}\,\tan\left(\sqrt{\bar{\eta}/12}\, t+C_{3}\right),\quad\bar{\eta}>0, \tag{56}\] \[e^{\theta}=-\sqrt{3|\bar{\eta}|}\,\tanh\left(\sqrt{|\bar{\eta}|/12}\,t+C_{3}\right),\quad\bar{\eta}<0. \tag{57}\] Plugging (56) and (57) in (54) one finds \[H(t) = -\frac{n}{\sqrt{3\bar{\eta}}}\tan(\sqrt{\bar{\eta}/3}\,t+2C_{3}),\quad\bar{\eta}>0, \tag{58}\] \[H(t) = -\frac{n}{\sqrt{3|\bar{\eta}|}}\tanh(\sqrt{|\bar{\eta}|/3}\,t+2C_{3}),\quad\bar{\eta}<0. \tag{59}\] Using these expressions for \(H\), the slow-roll parameters become \[\varepsilon_{1}(t) = \frac{\bar{\eta}}{n\sin^{2}\left(\sqrt{\bar{\eta}/3}\,t+2C_{3}\right)},\quad\bar{\eta}>0, \tag{60}\] \[\varepsilon_{1}(t) = \frac{|\bar{\eta}|}{n\sinh^{2}\left(\sqrt{|\bar{\eta}|/3}\,t+2C_{3}\right)},\quad\bar{\eta}<0. \tag{61}\] The integration constant \(C_{3}\) is fixed by the initial value \(\varepsilon_{1{\rm i}}\). According to (28), \(\varepsilon_{1{\rm i}}\) depends on the values of \(\bar{\eta}\) and \(N_{\rm f}\). Obviously, equations (58) and (59) agree with the more general solutions, (18) and (20) respectively, with \(\beta=\sqrt{|\bar{\eta}|/3}\) (and \(\gamma=2C_{3}\)). This confirms that the results (58)-(61) are independent of the chosen values of the constants \(C_{1}\) and \(C_{2}\). Integrating \(H(t)\), we obtain the time evolution of the scale factor \[a(t) \propto \left[\cos\left(\sqrt{\bar{\eta}/3}\,t+2C_{3}\right)\right]^{\frac{n}{\bar{\eta}}},\quad\bar{\eta}>0, \tag{62}\] \[a(t) \propto \left[\cosh\left(\sqrt{|\bar{\eta}|/3}\,t+2C_{3}\right)\right]^{-\frac{n}{|\bar{\eta}|}},\quad\bar{\eta}<0, \tag{63}\] which is consistent with the results of Ref. 18. 
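Relations (50), (54) and (55) can also be cross-checked numerically (our own check; \(n=1\) and \(\bar{\eta}=-0.013\) are the illustrative values used in the text, and \(H_{,\theta}\) is approximated by a central difference):

```python
import math

def H_of_theta(theta, n=1, etab=-0.013):
    """Eq. (54): Hubble rate as a function of the tachyon field."""
    return 2 * n * math.exp(theta) / (math.exp(2 * theta) - 3 * etab)

def theta_dot_hj(theta, n=1, etab=-0.013, h=1e-6):
    """Eq. (50): -(n/3) H_theta / H^2, with H_theta by central difference."""
    dH = (H_of_theta(theta + h, n, etab) - H_of_theta(theta - h, n, etab)) / (2 * h)
    return -(n / 3) * dH / H_of_theta(theta, n, etab) ** 2

def theta_dot_closed(theta, etab=-0.013):
    """Eq. (55): closed-form field velocity (note that n cancels out)."""
    return (math.exp(theta) + 3 * etab * math.exp(-theta)) / 6
```

The two expressions for \(\dot{\theta}\) agree to numerical precision over the inflationary range of the field.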
Plugging (54) in (9) and (10) and utilizing (50) we obtain the expressions for the slow-roll parameters \[\varepsilon_{1}(\theta) = -\frac{\dot{H}}{H^{2}}=\frac{1}{3}\left(\frac{H_{,\theta}}{H^{2}}\right)^{2}=\frac{1}{12}e^{-2\theta}\left(e^{2\theta}+3\bar{\eta}\right)^{2}, \tag{64}\] \[\varepsilon_{2}(\theta) = \frac{1}{6}e^{-2\theta}\left(e^{2\theta}-3\bar{\eta}\right)^{2}. \tag{65}\] In Fig. 4 we plot \(\varepsilon_{1}\) and \(\varepsilon_{2}\) as functions of \(\theta\). The parameter \(\bar{\eta}\) can be positive or negative. It is obvious from equation (54) that \(H>0\) when \(\bar{\eta}<0\). To prove that \(H>0\) when \(\bar{\eta}>0\) too, note that \(\varepsilon_{1}>\bar{\eta}\). Solving (64) as a quadratic equation for \(e^{\theta}\) we obtain two solutions, \(f_{1}\) and \(f_{2}\), as functions of \(\bar{\eta}\). The solution \(f_{1}\) (\(f_{1}>f_{2}\)) satisfies the inequality \(e^{2\theta}>3\bar{\eta}\), so \(H>0\). The other solution (\(f_{2}\)) leads to an unphysical Hubble expansion rate. Let us now consider our model in RSII cosmology for \(N_{\rm f}=70\) and \(\bar{\eta}=-0.013\); in this case \(H>0\) for any value of \(\theta\). From (28) we obtain \(\varepsilon_{1{\rm i}}=0.0025\). Then, solving (64) as a quadratic equation for \(e^{\theta}\) we find the initial value \(\theta_{\rm i}=-1.2\). Using (54), we find the corresponding value of the Hubble expansion rate \(H_{\rm i}=4.64\). Apparently, the field is negative at the beginning of inflation while the Hubble expansion rate is always greater than zero (see Fig. 5). The end-value of the field \(\theta_{\rm f}\) is obtained from the condition for the end of inflation \(\varepsilon_{1}(t_{\rm f})=1\). By making use of (64) we find \[\theta_{\rm f}=\ln\left(\sqrt{3}\sqrt{2+2\sqrt{1-\eta}-\eta}\right). \tag{66}\] For the model with \(\bar{\eta}=-0.013\), inflation ends at \(t_{\rm f}=22.07\) (see Fig. 5) with \(H_{\rm f}=0.57\), when the field reaches the value \(\theta_{\rm f}=1.24\). 
The observational parameters for the inflation model, driven by the tachyon field, have already been calculated in standard cosmology\({}^{12}\) and RSII cosmology.\({}^{22}\) The expressions for the scalar spectral index and the tensor-to-scalar ratio, at second order in the slow-roll parameters, read \[n_{\rm s}=1-2\varepsilon_{1i}-\varepsilon_{2i}-\left(2\varepsilon_{1i}^{2}+(2C^{\prime}+3-2\alpha)\varepsilon_{1i}\varepsilon_{2i}+C^{\prime}\varepsilon_{2i}\varepsilon_{3i}\right), \tag{67}\] \[r=16\varepsilon_{1i}\left(1+C^{\prime}\varepsilon_{2i}-2\alpha\varepsilon_{1i}\right). \tag{68}\] The value of the parameter \(\alpha\) differs in standard (\(\alpha=1/6\)) and RSII cosmology (\(\alpha=1/12\)), while \(C^{\prime}=-0.72\). The distinctive feature of constant-roll inflation is also reflected in the value of the third slow-roll parameter \(\varepsilon_{3}\). According to (13), \(\varepsilon_{3i}=2\varepsilon_{1i}\), which holds only for constant-roll inflation, i.e., for the model with \(\eta\) constant. Figure 4: The slow-roll parameters \(\varepsilon_{1}\) (solid blue line) and \(\varepsilon_{2}\) (dashed red line) versus \(\theta\) during inflation in RSII cosmology for \(\bar{\eta}=-0.013\). Figure 5: The Hubble parameter versus the tachyon field (left panel) and the behavior of the slow-roll parameters \(\varepsilon_{1}\) (solid blue line) and \(\varepsilon_{2}\) (dashed red line) versus the tachyon field during inflation (right panel), for \(\bar{\eta}=-0.013\). As in the standard cosmology, the RSII inflationary model with a tachyon field is fully analytical. In Fig. 6, we present the results for the \(n_{\rm s}\) and \(r\) parameters, superimposed on the observational constraints. A better agreement of analytical and observational results is evident for somewhat higher values of \(N\), e.g., \(N_{\rm f}=70\). It may be noted that the influence of the second order in the slow-roll parameters is insignificant. 
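The smallness of the second-order corrections can be confirmed with a short calculation (ours, using the illustrative values \(\bar{\eta}=-0.013\) and \(N_{\rm f}=70\) quoted in the text, \(\alpha=1/12\) for RSII and \(C^{\prime}=-0.72\)):

```python
import math

eta, Nf, Cp, alpha = -0.013, 70, -0.72, 1 / 12   # alpha = 1/12 for RSII

e = math.exp(2 * eta * (0 - Nf))                       # evaluated at N = 0
eps1 = eta / (1 - (1 - eta) * e)                       # eq. (28)
eps2 = 2 * eta * (1 - eta) * e / (1 - (1 - eta) * e)   # eq. (29)
eps3 = 2 * eps1                                        # eq. (13), constant roll

ns1 = 1 - 2 * eps1 - eps2                              # first order, eq. (38)
r1 = 16 * eps1                                         # first order, eq. (39)
ns2 = 1 - 2 * eps1 - eps2 - (2 * eps1**2
      + (2 * Cp + 3 - 2 * alpha) * eps1 * eps2
      + Cp * eps2 * eps3)                              # second order, eq. (67)
r2 = 16 * eps1 * (1 + Cp * eps2 - 2 * alpha * eps1)    # second order, eq. (68)
```

With these numbers \(n_{\rm s}\approx 0.964\), and the second-order shifts in both \(n_{\rm s}\) and \(r\) stay below \(10^{-3}\), consistent with the remark above; the code also recovers the constant-roll identity \(\varepsilon_{2}=2\varepsilon_{1}-2\eta\).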
Finally, note that the difference between results in the standard tachyon inflationary model and the RSII inflationary model in the constant-roll inflation approach is small. The attractor behavior of the solution is a necessary condition for a successful inflation model. Our model possesses a good attractor behavior, as presented in Fig. 7. Details will be presented elsewhere. Figure 6: \(r\) versus \(n_{\rm s}\) diagram with observational constraints from the Planck mission [10]. The analytical results are depicted for fixed \(N_{\rm f}=60\) (blue lines) and \(N_{\rm f}=70\) (red lines). The solid and dashed lines are obtained from the expressions for \(n_{\rm s}\) and \(r\) up to the first and the second order in the slow-roll parameters in RSII cosmology, respectively. The parameter \(\eta\) varies along the lines in the interval \(-0.03<\eta<0.03\). ## 5 Summary and conclusions We have studied the constant-roll inflation in the framework of the RSII cosmology with a tachyon field. The slow-roll parameter \(\eta\) with a fixed constant value leads to a differential equation for the Hubble expansion rate. This equation has exact analytical solutions. We have calculated the Hubble slow-roll parameters \(\varepsilon_{i}\) as a function of \(\eta\) for all four nontrivial solutions \(H(t)\). It has been shown that three of the four solutions for \(H(t)\) provide a consistent inflationary model. Furthermore, all solutions lead to the same functions \(\varepsilon_{1}(N)\) and \(\varepsilon_{2}(N)\). We have calculated the values of the scalar spectral index \(n_{\rm s}\) and the tensor-to-scalar ratio \(r\). Besides, we have compared these values with the observational data. Using this comparison, we have estimated the value of the parameter \(\eta\). A better agreement is achieved for negative and small values of \(\eta\). In addition, we have calculated the observational parameters for standard and RSII cosmology at second order in the slow-roll parameters. 
No significant difference is obtained in these two cases. As a straightforward extension of this work, it would be of interest to apply the formalism of the constant-roll inflation to the holographic RSII model with a tachyon field.[23] ## Acknowledgments This work has been supported by the ICTP-SEENET-MTP project NT-03 Cosmology-Classical and Quantum Challenges and the COST Action CA18108 "Quantum gravity phenomenology in the multi-messenger approach". M. Stojanovic acknowledges the support provided by the Serbian Ministry for Education, Science, and Technological Development under contract 451-03-47/2023-01/2000113. D. D. Dimitrijevic, G. S. Djordjevic, and M. Milosevic acknowledge the support provided by the Serbian Ministry for Education, Science, and Technological Development under contract 451-03-47/2023-01/2000124. In addition, G. S. Djordjevic acknowledges the support of the CEEPUS Program RS-1514-03-2223 "Gravitation and Cosmology" and the hospitality of the colleagues at the University of Banja Luka.
2305.14711
Gender Biases in Automatic Evaluation Metrics for Image Captioning
Model-based evaluation metrics (e.g., CLIPScore and GPTScore) have demonstrated decent correlations with human judgments in various language generation tasks. However, their impact on fairness remains largely unexplored. It is widely recognized that pretrained models can inadvertently encode societal biases, thus employing these models for evaluation purposes may inadvertently perpetuate and amplify biases. For example, an evaluation metric may favor the caption "a woman is calculating an account book" over "a man is calculating an account book," even if the image only shows male accountants. In this paper, we conduct a systematic study of gender biases in model-based automatic evaluation metrics for image captioning tasks. We start by curating a dataset comprising profession, activity, and object concepts associated with stereotypical gender associations. Then, we demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations, as well as the propagation of biases to generation models through reinforcement learning. Finally, we present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments. Our dataset and framework lay the foundation for understanding the potential harm of model-based evaluation metrics, and facilitate future works to develop more inclusive evaluation metrics.
Haoyi Qiu, Zi-Yi Dou, Tianlu Wang, Asli Celikyilmaz, Nanyun Peng
2023-05-24T04:27:40Z
http://arxiv.org/abs/2305.14711v3
# Gender Biases in Automatic Evaluation Metrics for Image Captioning ###### Abstract Pretrained model-based evaluation metrics have demonstrated strong performance with high correlations with human judgments in various natural language generation tasks such as image captioning. Despite the impressive results, their impact on fairness is under-explored - it is widely acknowledged that pretrained models can encode societal biases, and utilizing them for evaluation purposes may inadvertently manifest and potentially amplify biases. In this paper, we conduct a systematic study of gender biases in model-based evaluation metrics with a focus on image captioning tasks. Specifically, we first identify and quantify gender biases in different evaluation metrics regarding profession, activity, and object concepts. Then, we demonstrate the negative consequences of using these biased metrics, such as favoring biased generation models in deployment and propagating the biases to generation models through reinforcement learning. We also present a simple but effective alternative to reduce gender biases by combining _n_-gram matching-based and pretrained model-based evaluation metrics. ## 1 Introduction Pretrained model-based evaluation metrics such as BERTScore Zhang et al. (2019), CLIPScore Hessel et al. (2021), and UniEval Zhong et al. (2022) have shown promising performance, achieving stronger correlations with human judgments than _n_-gram matching-based evaluation metrics such as BLEU Papineni et al. (2002), ROUGE Lin (2004), and CIDEr Vedantam et al. (2015) across various generation tasks. In contrast to merely measuring the surface-level overlap between references and generation outputs, pretrained models can capture similarities on the semantic level and thus provide more accurate estimations of the model quality. While promising, it is a well-known problem that pretrained models encode _societal biases_, including but not limited to gender, racial, and religious biases Kurita et al. 
(2019); Sheng et al. (2019); Agarwal et al. (2021); Nangia et al. (2020); Barikeri et al. (2021); Cho et al. (2022); Zhang et al. (2022). Consequently, adopting pretrained models for evaluating generative models may result in _fairness_ problems. With the growing utilization of _cross-modal_ pretrained model-based evaluation metrics such as CLIPScore, the _robustness_ or _efficiency_ of the metrics, especially the potential _societal bias_ issues, have been under-investigated. For example, biased generative models may be rewarded and selected because specific sensitive attributes (_e.g._, gender) are favored by biased cross-modal pretrained model-based evaluation metrics. Most of the prior works focused on the robustness and efficiency of _language-only_ evaluation metrics Hanna and Bojar (2021); Pu et al. (2021). Among this line of research, Sun et al. (2022) constructed a dataset based on WinoBias Zhao et al. (2018) and systematically investigated different metrics. However, they focused on synthetic model generations and failed to analyze the _implications and harms_ of biased metrics on real-world models. Besides, they only focus on _language-only_ metrics and leave out studies of biases encoded in _cross-modal_ evaluation metrics such as CLIPScore. We target this pressing issue and perform a systematic study of _gender biases_ in _cross-modal_ evaluation metrics on image captioning tasks. Following previous research Hendricks et al. (2018), we classify gender expression (_i.e._, how individuals express their identity through clothing, hair length, mannerisms, and makeup) instead of biological sex or gender identity (i.e., how individuals experience their own gender Dev et al. (2021)). We limit our analysis to two genders (_man_ and _woman_) in this study, but it is important to note that gender is non-binary. We acknowledge this limitation and refer readers to the ethics statement section for a more in-depth discussion on this topic. 
For the study, we first collect a large-scale dataset, PAO-EvalBias, consisting of 98,213 images of people of 90 professions, in 52 activities, and with 39 objects. Then, we use the proposed dataset to analyze potential gender biases in automatic evaluation metrics, and how biased evaluation metrics can affect generation models. We also propose a simple fix that combines model-based and \(n\)-gram matching-based evaluation metrics to reduce gender biases while maintaining high correlations with human judgments for generation quality. Our contributions and findings include: * Pretrained model-based evaluation metrics cannot distinguish between biased and unbiased outputs, underperforming the statistical metrics in this regard; * The biases encoded in the model-based metrics can be propagated to image captioning models through reinforcement learning; * A simple and effective hybrid similarity evaluation metric by linearly combining \(n\)-gram matching-based and pretrained model-based metrics, which can effectively reduce gender biases while maintaining a strong correlation with human judgments. ## 2 Bias Evaluation For Evaluation Metrics We aim to identify and quantify potential gender biases in evaluation metrics for language generation models. To do this, we first gather a dataset in Section 2.1. We then formally define _gender biases_ and conduct a comprehensive analysis of image captioning evaluation metrics on our dataset in Section 2.2. ### Dataset Construction Using the lexicons created by previous work (Zhang et al., 2022; Cho et al., 2022; Bansal et al., 2022), we collect images of people with various **p**rofessions, **a**ctivities, and **o**bjects (PAO-EvalBias).1 For each word in the lexicons, we use templates to construct one _reference_ sentence as well as two _candidates_ containing the correct and incorrect gender, denoted as the _good_ and _bad_ captions respectively. The specific caption patterns are described in Table 1. 
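The good/bad construction can be sketched as follows (the template string and helper below are hypothetical, for illustration only; the paper's actual caption patterns are those of its Table 1):

```python
# Hypothetical gender-swap helper: the good caption uses the correct gender
# word, the bad caption swaps it while keeping the concept fixed.
GENDER_SWAP = {"man": "woman", "woman": "man"}

def caption_pair(concept, gender, template="a {g} is {c}"):
    good = template.format(g=gender, c=concept)
    bad = template.format(g=GENDER_SWAP[gender], c=concept)
    return good, bad
```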
Footnote 1: The data described here was accessed, collected, and used only by the co-authors at UCLA. We use the good captions to retrieve images from the web using both the Bing and Google Image Search APIs. 250 images for each lexicon were retrieved, and irrelevant images were manually filtered following the criteria discussed in the next paragraph. We carefully follow Creative Commons licenses and retrieve images without watermark protection, sourced from image collection websites instead of social media, and used non-commercially. Besides, we integrate the VL-Bias dataset from Zhang et al. (2022) to enrich our data collection, especially for the activity category. More specifically, we use the annotations in VL-Bias to split the dataset into two genders (man and woman) among two categories (profession and activity). There are 13 professions and 52 activities in VL-Bias.

**Data Cleaning.** After collecting all the candidate images, we use the following filtering criteria to _remove_ an image if: (1) the content of the image does not reflect the good caption; (2) it already exists in the dataset. Two annotators were employed for the manual filtering process. Specifically, the annotators first filtered the same 100 images randomly selected from the dataset, where the agreement reached Cohen's \(\kappa=0.917\). Based on this, each remaining image was examined by a single annotator, who filtered out the irrelevant images.

**Statistics.** In the end, we collect 98,213 images for PAO-EvalBias including 90 professions, 52 activities, and 39 objects. We list the detailed statistics of each profession, activity, and object concept in Appendix Tables 8, 9, and 10. We observe that most concepts contain over 200 images, ensuring that our analysis results are reliable, and we believe the dataset can be a valuable resource for future research.
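To make the construction concrete, the good/bad caption pairing can be sketched as follows; the template strings here are illustrative placeholders, not the exact patterns listed in Table 1:

```python
# Sketch of the good/bad caption construction. The template strings below
# are illustrative placeholders, not the exact patterns from Table 1.
TEMPLATES = {
    "profession": "a photo of a {gender} who is a {word}",
    "activity": "a photo of a {gender} who is {word}",
    "object": "a photo of a {gender} with a {word}",
}
SWAP = {"man": "woman", "woman": "man"}

def caption_pair(category, word, gender):
    """Return (good, bad) captions that differ only in the gender term."""
    good = TEMPLATES[category].format(gender=gender, word=word)
    bad = TEMPLATES[category].format(gender=SWAP[gender], word=word)
    return good, bad
```

The good caption doubles as the image-search query during retrieval, so every pair differs in exactly one word: the gender term.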
### Evaluation Metrics Performance Analysis

We then evaluate five _n_-gram matching-based evaluation metrics (BLEU-4, METEOR, ROUGE, CIDEr, and SPICE), with a focus on one model-based metric (CLIPScore), on the PAO-EvalBias dataset, where CLIPScore uses the CLIP model (Radford et al., 2021) to compute the image-caption similarity and treats it as the evaluation score.

**Gender Bias Definition.** To measure the gender bias present in these evaluation metrics, we calculate the _performance discrepancy_ between different protected groups (man and woman). More specifically, we first compute the evaluation metric scores for good and bad captions for every image in the dataset and then measure the average accuracy of each metric in differentiating good and bad captions of each concept per gender:

\[\mathrm{Acc}(c,g)=\frac{1}{|D_{c,g}|}\sum_{(x,\hat{y},\tilde{y})\in D_{c,g}}\mathbb{1}\big[s(x,\hat{y})>s(x,\tilde{y})\big],\]

where \(D_{c,g}\) is the set of images of concept \(c\) and gender \(g\), \(\hat{y}\) and \(\tilde{y}\) are the good and bad captions of image \(x\), and \(s(x,y)\) is the metric score of caption \(y\) for image \(x\). We regard a metric as exhibiting gender bias on a concept when the accuracy gap between the two genders, \(\mathrm{Acc}(c,\text{man})-\mathrm{Acc}(c,\text{woman})\), differs significantly from zero (assessed with bootstrap resampling).
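The per-gender accuracy computation just defined can be sketched in a few lines; `score` stands in for any metric (e.g. CLIPScore), and the data layout is illustrative:

```python
from collections import defaultdict

def accuracy(records, score):
    """Fraction of images whose good caption outscores the bad one."""
    wins = sum(score(img, good) > score(img, bad) for img, good, bad in records)
    return wins / len(records)

def accuracy_gap(dataset, score):
    """Per-concept man/woman accuracy gap (the bias measure).

    `dataset` holds (concept, gender, image, good_caption, bad_caption)
    tuples; `score(image, caption)` can be any evaluation metric.
    """
    by_group = defaultdict(list)
    for concept, gender, img, good, bad in dataset:
        by_group[(concept, gender)].append((img, good, bad))
    return {
        c: accuracy(by_group[(c, "man")], score) - accuracy(by_group[(c, "woman")], score)
        for c in {concept for concept, _ in by_group}
    }
```

A metric with no gender bias on a concept would produce a gap close to zero; significance of a nonzero gap is then assessed with bootstrap resampling.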
\small\small\small\small\small\small\small\small\small\small\small\small\small\small\small\small\small\small\small\small \small\ Karpathy-split data (Lin et al., 2014; Karpathy and Fei-Fei, 2015). Our goal is to examine the impact of gender biases pre-encoded in evaluation metrics on generation models. Specifically, we reveal that the existing image-captioning models contain gender biases, and using biased model-based metrics will make this kind of biased model more favorable over less-bias ones in Section 3.1. Based on these findings, we further investigate whether using a biased metric as a reward may amplify biases in both the generation model and evaluation metric under the reinforcement learning setting in Section 3.2. ### Favoring Biased Models It has been pointed out that there exist societal biases in image captioning models (Hendricks et al., 2018), and we need to carefully calibrate them in real-world applications. However, using model-based metrics like CLIPScore for evaluation may make it hard to distinguish between biased and unbiased model generations and even lead to biased models being favored and applied over less-biased ones. In this section, we verify if this hypothesis is true under a controlled study. #### 3.1.1 Biases in Captioning Models We first find out if captioning models pre-encode biases in our setting. To this end, we perform inference on our PAO-EvalBias dataset with FIBER and analyze the gender prediction errors of the generated captions following Hendricks et al. (2018). Due to the captions design, we ensure that there is always one main character inside each image and thus no further labeling work is needed. Using the same analysis technique in Section 2.2, we find that FIBER makes gender prediction errors 6.3% of the time (Table 5) and exhibits significant biases over 58.6% of the words in our lexicon, including 60.0%, 57.7%, 56.4% of the profession, activity, and object words respectively. 
This result indicates that, compared to other concepts, existing stereotypes about professions across protected groups still pose a significant challenge to generation models. Visualizations are provided in Figure 2 and Appendix Figures 5, 6. We also perform the same analysis on COCO, as it has been widely used in previous image captioning work. Specifically, we use ground-truth captions to determine if an image contains a man or a woman, using the male and female lexicons in Hendricks et al. (2018). If at least one reference caption of an image contains a "female" word such as "woman" and no captions have "male" words such as "man" in them, we label the image as "woman". Similarly, we label the image as "man" using the same principle. We do not consider images where both "male" and "female" words are mentioned. After labeling, we analyze whether an image captioning model accurately predicts the gender of an image by searching for the gender-related words in the captions. To ensure the accuracy of our analysis, we also manually check each of the generations and make sure that they are indeed biased. Table 5 shows that FIBER can still make gender prediction errors on COCO, with an error rate of 1.4%.

Figure 1: Gender biases under the activity category in CLIPScore evaluation: Blue points are _man-biased_ and green points are _woman-biased_. Points in orange have _p_-value greater than 0.05 with bootstrap resampling.

Figure 2: Gender biases under the activity category of FIBER: Blue points are _man-biased_ and green points are _woman-biased_. Points in orange have _p_-value greater than 0.05 with bootstrap resampling.

#### 3.1.2 Error Correction

We use a rule-based method to correct errors in the FIBER model's gender predictions in its generated captions to obtain a debiased FIBER model _in a specific setting_ where we only consider the words "man" and "woman".
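This correction amounts to a one-word swap guarded by a lexicon check; a minimal sketch, with small illustrative subsets standing in for the Hendricks et al. (2018) gender word lists:

```python
# Illustrative subsets of the Hendricks et al. (2018) gender lexicons,
# not the full word lists.
MALE_WORDS = {"man", "boy", "gentleman"}
FEMALE_WORDS = {"woman", "girl", "lady"}

def debias_caption(caption, true_gender):
    """Swap the lone wrong gender word; return None when the rule can't apply."""
    words = caption.split()
    if true_gender == "woman" and "man" in words and not set(words) & FEMALE_WORDS:
        return " ".join("woman" if w == "man" else w for w in words)
    if true_gender == "man" and "woman" in words and not set(words) & MALE_WORDS:
        return " ".join("man" if w == "woman" else w for w in words)
    return None
```

Captions on which the rule returns `None` (e.g. mixed-gender scenes) are excluded from the analysis, as described below.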
Specifically, if an image of a woman is captioned with only the word "man" and no female-associated words from a lexicon defined by Hendricks et al. (2018), we change "man" to "woman". Similarly, we change "woman" to "man" for images of men. The cleaned captions are used as the generated captions of the debiased FIBER model. It should be noted that this rule-based method only applies in these limited scenarios, and we exclude sentences where the method cannot be applied from our analysis.

#### 3.1.3 Evaluating Models and Results

We compute the CLIPScore for both biased and debiased FIBER on PAO-EvalBias and COCO. For PAO-EvalBias, we calculate two scores: _CLIPScore-Value_ denotes the specific numerical values calculated by CLIPScore, and _CLIPScore-Win_ denotes the percentage of times a model is favored by CLIPScore over all instances. Table 3 shows the results of the experiments, and we notice that (1) the CLIPScore metric favors biased captions in 53.7% of cases, and (2) overall, CLIPScore cannot distinguish between biased and debiased model generations. This is concerning and highlights the need to debias evaluation metrics to prevent biased models from being used in real-world applications. Moreover, Table 4 shows the experiment results on COCO, which exhibit similar trends to PAO-EvalBias and thus further strengthen the claim.

### Bias Propagation through RL

As previously demonstrated, existing image-captioning models contain gender biases, and using biased model-based metrics makes such models favored over less-biased ones. We therefore investigate whether using a biased metric as a reward may _amplify_ biases in both the generation model and the evaluation metric under the _reinforcement learning_ (RL) setting. RL using evaluation metric scores as rewards can improve language generation and reduce error propagation Shen et al. (2016); Rennie et al. (2017); Paulus et al.
(2018), and optimizing towards model-based scores is more effective than _n_-gram-matching scores Wieting et al. (2019); Li et al. (2019). However, the use of a biased metric as a reward may reinforce biases in both the generation model and evaluation metric. Therefore, it is critical to investigate the impact of optimizing towards CLIPScore on _fairness_. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & BLEU-4 & METEOR & ROUGE & CIDEr & SPICE & CLIPScore & CLIPScore+CIDEr \\ \hline Biased-FIBER & 35.3 & 27.4 & 56.3 & 132.2 & 19.3 & **76.3** & 208.5 \\ Debiased-FIBER & **47.0** & **31.2** & **61.5** & **147.0** & **24.5** & 76.2 & **223.2** \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation scores of biased and debiased models on COCO. Biases in the evaluation metric can make biased and debiased models indistinguishable based on evaluation scores. However, _n_-gram matching metrics can hardly encode biases and CLIPScore+CIDEr can alleviate the bias issue. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{CLIPScore-Value} & \multicolumn{3}{c}{CLIPScore-Win} \\ \cline{2-7} & Profession & Activity & Object & All & Profession & Activity & Object & All \\ \hline Biased-FIBER & 65.3 & **67.4** & 65.4 & **66.2** & **54.8** & **55.9** & 39.7 & **53.7** \\ Debiased-FIBER & **65.4** & 66.8 & **67.8** & **66.2** & 45.2 & 44.1 & **60.3** & 46.3 \\ \hline \hline \end{tabular} \end{table} Table 3: CLIPScore evaluation of biased and debiased models on PAO-EvalBias. “CLIPScore-Value” denotes the specific numerical values calculated by CLIPScore and “CLIPScore-Win” denotes the percentage of times a model is favored by CLIPScore over all instances. CLIPScore favors the biased FIBER in 53.7% of the images in PAO-EvalBias. Overall, CLIPScore cannot distinguish between biased and debiased model generations. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{PAO-EvalBias} & \multicolumn{3}{c}{COCO} \\ \cline{2-6} & CLIPScore & Gender Err. & CIDEr & CLIPScore & Gender Err. \\ \hline MLE & 69.7 & 6.3 & 128.6 & 75.4 & 1.4 \\ RL & 72.7\({}^{*}\) & 6.8\({}^{*}\) & 130.9\({}^{*}\) & 77.6\({}^{*}\) & 1.6 \\ \hline \hline \end{tabular} \end{table} Table 5: RL can improve the model generation performance on PAO-EvalBias and COCO. However, the use of CLIPScore as the reward can lead to gender prediction errors, which increases bias in the generated output. \({}^{*}\) indicates significant differences between MLE and RL (\(p<0.05\) with bootstrap resampling).

#### 3.2.1 Setting

We optimize FIBER with RL following Dou et al. (2022) on the PAO-EvalBias and COCO-Karpathy image captioning datasets, the latter of which has been widely used in previous image captioning work. We utilize CIDEr, CLIPScore, or a linear combination of the two scores as reward functions. We finetune the MLE-trained FIBER using RL for 1 epoch on PAO-EvalBias and 3 epochs on COCO with the learning rate set to 1e-6.

#### 3.2.2 Results

Table 5 demonstrates that RL can enhance the model generation performance, as observed in the improvement of CLIPScore from 69.7 to 72.7 on PAO-EvalBias and from 75.4 to 77.6 on COCO. However, the use of CLIPScore as the reward can lead to gender prediction errors, which increases bias in the generated output. Notably, the gender prediction error rates rise _significantly_ (\(p<0.05\) with bootstrap resampling) from 6.3% to 6.8% on PAO-EvalBias and from 1.4% to 1.6% on COCO. Furthermore, the optimized model exhibits biases on 61.3% of the words on PAO-EvalBias, an increase from 58.6% prior to RL. These findings highlight that using biased metrics for model evaluation can _propagate_ gender biases to generation models, leading to negative outcomes. Moreover, Table 6 illustrates that reinforcement learning can generally enhance the model generation performance on COCO.
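The reward optimization in Section 3.2.1 follows the usual self-critical sequence training recipe (Rennie et al., 2017); the per-caption loss can be sketched schematically, with any of CIDEr, CLIPScore, or their sum as the reward (this is a generic sketch, not the exact FIBER training code):

```python
def scst_loss(sample_logprob, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss for a single caption.

    sample_logprob: total log-probability of a caption sampled from the model.
    sample_reward:  metric score of the sampled caption (the RL reward).
    greedy_reward:  metric score of the greedily decoded caption (baseline).

    Minimizing this loss raises the probability of samples that beat the
    greedy baseline under the chosen metric.
    """
    advantage = sample_reward - greedy_reward
    return -advantage * sample_logprob
```

Because the gradient pushes the model toward whatever the reward metric prefers, any bias encoded in the metric is transferred directly into the policy, which is the propagation mechanism studied here.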
It is worth noting that while using CIDEr as the reward does not result in increased bias, the same cannot be said for CLIPScore, which has the potential to introduce more bias to the model. Specifically, the gender prediction error rates _increase_ from 1.4% to 1.6% using CLIPScore as the reward. On the other hand, the gender prediction error rates _decrease_ from 1.4% to 1.2% using CIDEr as the reward. The advantage of using CIDEr scores as rewards is that it motivates the model to make accurate predictions on a word-by-word basis, leading to improvements in gender-related predictions. Conversely, since CLIPScore emphasizes the overall similarity between images and text, biases in the evaluation metric can be carried over to generation models through the optimization process. As a result, utilizing biased metrics for language generation models may propagate biases, which is a potential drawback.

## 4 A Hybrid Similarity Metric

While model-based metrics contain biases, \(n\)-gram matching-based metrics can hardly encode gender biases. Therefore, it is natural to combine \(n\)-gram matching-based metrics with pretrained model-based metrics to alleviate gender biases. Motivated by this, we investigate whether linearly combining the similarity of the generated text to a reference text with its similarity to a reference image, _i.e._, adding CLIPScore and CIDEr together without normalization for model evaluation (denoted as CLIPScore+CIDEr), can combine the best of both worlds. We mainly focus on CIDEr because it is the most popular \(n\)-gram matching-based metric in image captioning tasks, although our method is compatible with other \(n\)-gram matching-based metrics as well.

### Bias Evaluation

In this part, we experiment with the hybrid metric following the setting in Section 2.2. From Table 2, we notice that CLIPScore+CIDEr does not encode gender biases on the PAO-EvalBias dataset, suggesting this method can successfully reduce the metric bias.
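Both the hybrid score and the CLIPScore-Win comparison of Section 3.1.3 are elementary to compute; a minimal sketch:

```python
def hybrid_scores(clip_scores, cider_scores):
    """CLIPScore+CIDEr: per-instance unnormalized sum of the two metrics."""
    return [c + d for c, d in zip(clip_scores, cider_scores)]

def win_rate(scores_a, scores_b):
    """CLIPScore-Win style comparison: fraction of instances where model A's
    caption scores strictly higher than model B's."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    return wins / len(scores_a)
```

For example, the corpus-level CLIPScore+CIDEr values in Table 4 (76.3 + 132.2 = 208.5 for the biased model vs. 76.2 + 147.0 = 223.2 for the debiased one) are exactly this unnormalized sum, and the larger gap makes the two models distinguishable again.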
Furthermore, we evaluate the human correlations of each evaluation metric on Flickr8K-Expert Hodosh et al. (2015); as shown in Table 7, CLIPScore+CIDEr achieves an improved correlation with human judgments compared to CLIPScore and CIDEr, indicating that it maintains its capability for model evaluation. To verify that our method is compatible with other statistical metrics, we also test CLIPScore+BLEU4 and CLIPScore+SPICE, resulting in 51.260 and 55.051 \(\tau_{c}\), respectively, which further strengthens our argument.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & BLEU-4 (\(\uparrow\)) & METEOR (\(\uparrow\)) & ROUGE (\(\uparrow\)) & CIDEr (\(\uparrow\)) & SPICE (\(\uparrow\)) & CLIPScore (\(\uparrow\)) & Gender Error (\(\downarrow\)) \\ \hline MLE & 38.9 & 30.4 & 59.3 & 128.6 & 23.2 & 75.4 & 1.4 \\ RL-CLIPScore & 39.4 & 30.4 & 59.4 & 130.9 & 23.8 & 77.6 & 1.6 \\ RL-CIDEr & 42.7 & 30.9 & 61.4 & 142.2 & 24.1 & 75.3 & 1.2 \\ RL-CLIPScore+CIDEr & 43.2 & 31.3 & 61.7 & 143.4 & 24.6 & 76.6 & 1.3 \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation results of MLE and RL models on COCO. Using a biased metric as a reward can amplify the gender biases encoded in the evaluation metric within the generation model under the RL setting. Combining CLIPScore and CIDEr can alleviate the negative outcome while maintaining good generation performance.

To conclude, \(n\)-gram matching-based evaluation metrics calculate similarity based on the _lexical overlaps_ between reference and candidate captions, so they are less biased: they always favor more overlaps with the reference, which has the correct gender terms. Although CLIPScore focuses on the overall similarity between images and texts and therefore has better human correlations, the existing biases encoded in CLIPScore will make it prefer biased models.
Adding an \(n\)-gram matching-based metric onto a pretrained model-based metric will encourage the model to generate captions that are lexically close to unbiased reference sentences while capturing more correlations between images and captions, combining cross-modal similarity of the generated text against both a reference text and a reference image.

### Impact on Generation Models

Following the setting in Section 3.2, we also perform the same experiments with the hybrid metric. As we can see from the experiment results in Table 4, CLIPScore+CIDEr can alleviate the biases. Specifically, we find that (1) biases in the evaluation metric can make biased models favored on COCO, although the difference is not significant; (2) \(n\)-gram matching metrics can hardly encode biases, and CLIPScore+CIDEr can alleviate the bias issue (biased: 208.5 vs. debiased: 223.2 on linear combination scores). In addition, as shown in Table 6, we observe that the linear combination of CIDEr and CLIPScore as rewards can enhance the model performance across various metrics, as evidenced by the increase in CLIPScore from 75.4 to 76.6. Moreover, this combination approach can mitigate the bias problem of CLIPScore, as indicated by the reduction in gender prediction errors from 1.6% to 1.3%. These findings corroborate our assertion and demonstrate the effectiveness of the hybrid metric.

## 5 Related Work

In this section, we overview two lines of related work: _evaluation metrics_ and _societal biases in pretrained models_.

**Evaluation Metrics.**\(N\)-gram matching metrics such as BLEU Papineni et al. (2002), ROUGE Lin (2004), and CIDEr Vedantam et al. (2015) have been dominant in evaluating text generation models. However, these metrics typically compare references and candidates on the lexical level and fail to consider semantic similarities. To address this issue, various approaches have been proposed Banerjee and Lavie (2005); Anderson et al.
(2016) and models pretrained on large corpora have been leveraged because of their capacity to capture high-level semantic information of inputs Zhao et al. (2019); Zhang et al. (2019); Thompson and Post (2020); Rei et al. (2020); Sellam et al. (2020); Yuan et al. (2021). In image captioning, the CLIP model Radford et al. (2021), pretrained on millions of image-text pairs, has been used and achieves strong performance. Based on the CLIP model, Hessel et al. (2021) proposed a reference-free metric, CLIPScore, that achieves the highest correlation with human judgments.

**Societal Biases in Pretrained Models.** It has been pointed out Bolukbasi et al. (2016); Zhao et al. (2017); Bender et al. (2021) that there are societal biases encoded in the model training data, and models pretrained on these data can amplify the biases and potentially harm marginalized populations. While there are several works investigating the bias issue of pretrained models Kurita et al. (2019); Sheng et al. (2019); Agarwal et al. (2021); Cho et al. (2022); Zhang et al. (2022), biases in pretrained model-based evaluation metrics have received less attention. Among them, Sun et al. (2022) construct a dataset based on WinoBias Zhao et al. (2018) and perform a systematic investigation on different types of metrics. However, the paper does not study evaluation metrics in the vision-language domain and fails to analyze the implications of the metric biases for real-world models.

\begin{table} \begin{tabular}{l c} \hline \hline & \(\tau_{c}\) \\ \hline BLEU-4 & 30.776 \\ METEOR & 41.822 \\ ROUGE & 32.314 \\ CIDEr & 43.891 \\ SPICE & 44.888 \\ CLIPScore & 51.482 \\ \hline CLIPScore+BLEU-4 & 51.260 \\ CLIPScore+CIDEr & 53.768 \\ CLIPScore+SPICE & 55.051 \\ \hline \hline \end{tabular} \end{table} Table 7: Correlations (measured with \(\tau_{c}\)) with human judgment on Flickr8K-Expert. All metrics use 4-5 ground truth references, except for CLIPScore which does not use any references. Several combinations of \(n\)-gram matching-based and pretrained model-based evaluation metrics have been evaluated. Combining CLIPScore with CIDEr or SPICE can improve the correlation over both \(n\)-gram matching-based evaluation metrics and CLIPScore.

## 6 Conclusion

We analyze the gender bias issue of model-based evaluation metrics on image captioning tasks and investigate its potential impact on image captioning generation models. To do this, we create our own dataset and conduct a thorough analysis of the gender bias present in various evaluation metrics across multiple concepts. We also discuss the consequences of these biases in real-world applications and propose a hybrid metric as a solution to mitigate the issue. Experiments show that using biased model-based evaluation metrics leads to biased generation models being favored and amplifies the model-encoded gender biases through reinforcement learning. The proposed cross-modal similarity evaluation metric can significantly reduce gender biases while maintaining a stronger correlation with human judgments than existing metrics. In the future, we plan to expand our analysis to include other protected attributes such as race and ethnicity, as well as other language generation tasks. Additionally, we aim to continue developing more effective methods for removing bias from generation evaluation metrics.

## Limitations

We only consider two genders (_man_ and _woman_) in our paper and classify gender expressions instead of biological sex or gender identity in our setting; it is important to note that gender is non-binary, and a detailed discussion can be found in the ethics statement section. Also, we mainly focus on gender biases in our paper, but there are other types of biases, such as racial and religious biases, where equal representation is desired.
In addition, we only experiment with the image captioning task, while other multimodal generation tasks are worth investigating as well.

## Ethics Statement

Our research aims to investigate the gender biases present in image captioning evaluation metrics using the PAO-EvalBias dataset. We focus on selected concepts such as profession, activity, and object within the gender axis, although other attributes such as race also require equal representation. Our goal is to assist practitioners and the community in evaluating existing pretrained model-based evaluation metrics from different perspectives. We are aware that gender is a complex and multi-faceted concept, and although there are many different groups within gender, in this study we limit our analysis to classifying individuals as either "man" or "woman" based on their gender expression, which refers to how individuals express their identity through clothing, hair length, mannerisms, and makeup. We make a conscious decision not to evaluate an individual's gender identity or biological sex, as it is not possible to infer this information based on appearance alone, and our goal is to focus on the perceptual biases and gender assumptions of the human annotators. We acknowledge that the use of binary categories may be offensive to underrepresented groups, but it is important to note that our research aims to provide a starting point for further discussion and research in this area. Our research also aims to review the existing pretrained model-based evaluation metrics along further dimensions, including fairness and bias. By doing so, we hope to help practitioners and the community understand the limitations and potential harms of these metrics, and to develop better and more inclusive evaluation metrics.
2306.02345
Configuration spaces as commutative monoids
After 1-point compactification, the collection of all unordered configuration spaces of a manifold admits a commutative multiplication by superposition of configurations. We explain a simple (derived) presentation for this commutative monoid object. Using this presentation, one can quickly deduce Knudsen's formula for the rational cohomology of configuration spaces, prove rational homological stability, and understand how automorphisms of the manifold act on the cohomology of configuration spaces. Similar considerations reproduce the work of Farb--Wolfson--Wood on homological densities.
Oscar Randal-Williams
2023-06-04T12:38:09Z
http://arxiv.org/abs/2306.02345v2
# Configuration spaces as commutative monoids ###### Abstract. After \(1\)-point compactification, the collection of all unordered configuration spaces of a manifold admits a commutative multiplication by superposition of configurations. We explain a simple (derived) presentation for this commutative monoid object. Using this presentation, one can quickly deduce Knudsen's formula for the rational cohomology of configuration spaces, prove rational homological stability, and understand how automorphisms of the manifold act on the cohomology of configuration spaces. Similar considerations reproduce the work of Farb-Wolfson-Wood on homological densities. ## 1. Introduction Let \(M\) be the interior of a connected compact manifold with boundary. The \(1\)-point compactification of the space \(C_{n}(M)\) of unordered configurations in \(M\) may be written as \[C_{n}(M)^{+}=\left[\frac{(M^{+})^{\wedge n}}{\text{locus where two points coincide}}\right]_{\mathfrak{S}_{n}},\] the quotient formed in pointed spaces. Not-necessarily-disjoint union of unordered configurations defines a _superposition product_ \[C_{n}(M)^{+}\wedge C_{n^{\prime}}(M)^{+}\longrightarrow C_{n+n^{\prime}}(M)^{+}\] which is associative, commutative, and unital. This gives a unital commutative monoid object in the symmetric monoidal category \(\mathsf{Top}_{*}^{\mathbb{N}}\) of \(\mathbb{N}\)-graded pointed spaces: \[\mathbf{C}(M):n\longmapsto C_{n}(M)^{+}.\] The goal of this note is to explain and exploit this algebraic structure. More generally, let \(\pi:L\to M\) be a vector bundle, and let \[C_{n}(M;L)^{+}=\left[\frac{(L^{+})^{\wedge n}}{\text{locus where two points have the same projection in }M}\right]_{\mathfrak{S}_{n}}.\] These assemble in the same way to a unital commutative monoid object \(\mathbf{C}(M;L)\). (Of course more general spaces of labels can be implemented too, but the above suffices for us.) 
In the following we write \(X[n]\) for the \(\mathbb{N}\)-graded pointed space which consists of \(X\) in grading \(n\) and the point in all other gradings, and \(\mathbf{Com}(X)\) for the free commutative monoid on \(X\in\mathsf{Top}_{*}^{\mathbb{N}}\). **Theorem 1.1**.: _There is a pushout square_ \[\begin{CD}\mathbf{Com}([(L\oplus L)^{+}]_{\mathfrak{S}_{2}}[2])@>{\epsilon}>{}>S ^{0}[0]\\ @V{}V{\Delta}V@V{}V{}V\\ \mathbf{Com}(L^{+}[1])@>{}>{}>\mathbf{C}(M;L)\end{CD}\] _of unital commutative monoids in \(\mathsf{Top}_{*}^{\mathbb{N}}\), where \(\epsilon\) is the augmentation and \(\Delta\) is induced by the inclusion \([(L\oplus L)^{+}]_{\mathfrak{S}_{2}}\to[L^{+}\wedge L^{+}]_{\mathfrak{S}_{2}}= \mathbf{Com}(L^{+}[1])(2)\). Furthermore, this square is a homotopy pushout, i.e. there is an induced equivalence_ \[\mathbf{Com}(L^{+}[1])\otimes_{\mathbf{Com}([(L\oplus L)^{+}]_{\mathfrak{S}_ {2}}[2])}^{\mathbb{L}}S^{0}[0]\stackrel{{\sim}}{{\longrightarrow }}\mathbf{C}(M;L).\] Recall that the derived relative tensor product may be computed by the two-sided bar construction, formed in \(\mathsf{Top}^{\mathbb{N}}_{*}\), so the conclusion can equivalently be stated as an equivalence \[B(\mathbf{Com}(L^{+}[1]),\mathbf{Com}([(L\oplus L)^{+}]_{\mathfrak{S}_{2}}[2]),S ^{0}[0])\stackrel{{\sim}}{{\longrightarrow}}\mathbf{C}(M;L). \tag{1.1}\] As one application of this result we will show how to recover Knudsen's [16] formula for \(H^{*}(C_{n}(M);\mathbb{Q})\) in terms of the compactly-supported \(\mathbb{Q}\)-cohomology of \(M\) and its cup-product map, which in particular quickly implies homological stability. As another application we will show that the action on \(H^{*}(C_{n}(M);\mathbb{Q})\) of the group of proper homotopy self-equivalences of \(M\) factors over a surprisingly small group. Finally, in an appendix written with Quoc P. Ho, we show how similar considerations reproduces the work of Farb-Wolfson-Wood on homological densities. 
This note was inspired by a speculative remark of Banerjee [1, Remark 1.2], and is my attempt to make sense of it in a specific case. It also has much to do with the work of Ho [14, 15], Petersen [20], Knudsen [16], Getzler [21, 22], Bodigheimer-Cohen-Milgram [1], and Segal [23]. ## 2. Applications ### Homology of configuration spaces The space \(C_{n}(M;L)^{+}\) is the \(1\)-point compactification of the \(n\cdot(\dim(M)+\dim(L))\)-dimensional manifold \[C_{n}(M;L):=[L^{n}\setminus\{(l_{1},\ldots,l_{n})\,|\,\pi(l_{i})=\pi(l_{j})\text { and }i\neq j\}]_{\mathfrak{S}_{n}}\,.\] This is a vector bundle over \(C_{n}(M)\), but is a manifold itself and is orientable if and only if the manifold \(L\) is orientable and even-dimensional. To arrange this, we can take the vector bundle \(W\) given by the orientation line of \(M\) plus \((\dim(M)-1)\) trivial line bundles. Thus if \(M\) is \(d\)-dimensional then by Poincare duality we have \[H^{*}(C_{n}(M);\,\Bbbk)\cong H^{*}(C_{n}(M;W);\Bbbk)\cong\widetilde{H}_{2dn -*}(C_{n}(M;W)^{+};\Bbbk).\] In view of this, the bar construction description (1.1) can be used, in combination with the homology of free commutative monoids (see [13]), to investigate \(H^{*}(C_{n}(M);\Bbbk)\). We do not pursue this in general here, but rather focus on the case \(\Bbbk=\mathbb{Q}\), where a complete answer is possible, and reproduces a formula of Knudsen. ### Revisiting Knudsen's formula For an \(\mathbb{N}\)-graded pointed space we write \(H_{n,d}(X)=\widetilde{H}_{d}(X(n))\), and similarly for chains. Write \(S^{*}(V)\) for the free graded-commutative algebra on a graded vector space \(V\). If \(V\) is equipped with additional \(\mathbb{N}\)-grading, this is inherited by \(S^{*}(V)\) (but there is no Koszul sign rule associated to the \(\mathbb{N}\)-grading, only to the homological grading). We consider \(\mathbf{C}(M;W)\). 
There is a map \(\widetilde{C}_{*}(W^{+};\mathbb{Q})[1]\to\widetilde{C}_{*}(\mathbf{Com}(W^{+ }[1]);\mathbb{Q})\) and, using the Eilenberg-Zilber maps, it extends to a map of cdga's \[S^{*}(\widetilde{C}_{*}(W^{+};\mathbb{Q})[1])\longrightarrow C_{*,*}( \mathbf{Com}(W^{+}[1]);\mathbb{Q}),\] which is an equivalence (since the maps \([(W^{+})^{\wedge n}]_{\mathfrak{h}\mathfrak{S}_{n}}\to[(W^{+})^{\wedge n}]_{ \mathfrak{S}_{n}}\) are rational homology isomorphisms). Similarly, there is an equivalence of cdga's \[S^{*}(\widetilde{C}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q})[2]) \longrightarrow C_{*,*}(\mathbf{Com}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}}[2] );\mathbb{Q}).\] Furthermore, one may choose formality equivalences \[\widetilde{H}_{*}(W^{+};\mathbb{Q}) \longrightarrow\widetilde{C}_{*}(W^{+};\mathbb{Q})\] \[\widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q}) \longrightarrow\widetilde{C}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}}; \mathbb{Q}),\] i.e. chain maps inducing the identity on homology, and hence obtain equivalences \[S^{*}(\widetilde{H}_{*}(W^{+};\mathbb{Q})[1]) \longrightarrow S^{*}(\widetilde{C}_{*}(W^{+};\mathbb{Q})[1])\] \[S^{*}(\widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}}; \mathbb{Q})[2]) \longrightarrow S^{*}(\widetilde{C}_{*}([(W\oplus W)^{+}]_{ \mathfrak{S}_{2}};\mathbb{Q})[2])\] of cdga's. With such choices the square need not commute, but does commute up to homotopy in the category of cdga's because the two chain maps \(\widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q})\to \widetilde{C}_{*}([(W^{+})^{\wedge 2}]_{\mathfrak{S}_{2}};\mathbb{Q})\) induce the same map on homology, so are chain homotopic. 
The bar construction description then gives an identification \[\operatorname{Tor}_{*}^{S^{*}(\widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q})[2])}(S^{*}(\widetilde{H}_{*}(W^{+};\mathbb{Q})[1]),\mathbb{Q} [0])\cong H_{*,*}(\mathbf{C}(M;W);\mathbb{Q}).\] The Koszul complex for calculating these Tor groups takes the form \[\left(S^{*}\big{(}\widetilde{H}_{*}(W^{+};\mathbb{Q})[1]\oplus\Sigma \widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q})[2]\big{)}, \partial\right)\] with differential given by \(\partial(\Sigma x)=\Delta_{*}(x)\in S^{2}(\widetilde{H}_{*}(W^{+};\mathbb{Q})[1])\) when \(x\in\widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q})[2]\), and extended by the Leibniz rule. This can be simplified as follows. If \(M\) is \(d\)-dimensional then the Thom isomorphism gives \(\widetilde{H}_{*}(W^{+};\mathbb{Q})=\Sigma^{d}\widetilde{H}_{*}(M^{+}; \mathbb{Q}^{w_{1}})\), where \(\mathbb{Q}^{w_{1}}\) is the orientation local system of \(M\). It also gives \(\widetilde{H}_{*}((W\oplus W)^{+};\mathbb{Q})=\Sigma^{2d}\widetilde{H}_{*}(M^{ +};\mathbb{Q})\).
The involution swapping the two \(W\) factors acts as \((-1)^{d}\) on the Thom class, so as the map \([(W\oplus W)^{+}]_{h\mathfrak{S}_{2}}\to[(W\oplus W)^{+}]_{\mathfrak{S}_{2}}\) is a rational equivalence we find \[\widetilde{H}_{*}([(W\oplus W)^{+}]_{\mathfrak{S}_{2}};\mathbb{Q})=\begin{cases} \Sigma^{2d}\widetilde{H}_{*}(M^{+};\mathbb{Q})&d\text{ even}\\ 0&d\text{ odd.}\end{cases}\] This lets us write the complex as \[\left(S^{*}\big{(}\Sigma^{d}\widetilde{H}_{*}(M^{+};\mathbb{Q}^{w_{1}})[1] \oplus\begin{cases}\Sigma^{2d+1}\widetilde{H}_{*}(M^{+};\mathbb{Q})&d\text{ even}\\ 0&d\text{ odd}\end{cases}[2]\big{)},\partial\right), \tag{2.1}\] where the differential is dual to the map \(S^{2}(H_{c}^{*}(M;\mathbb{Q}^{w_{1}}))\to H_{c}^{*}(M;\mathbb{Q})\) induced by cup product, so following Knudsen we can recognise this complex as the Chevalley-Eilenberg complex for the bigraded Lie algebra \(H_{c}^{*}(M;\operatorname{Lie}(\Sigma^{d-1}\mathbb{Q}^{w_{1}}[1]))\). Thus \[H^{2nd-*}(C_{n}(M);\mathbb{Q})\cong\widetilde{H}_{*}(C_{n}(M;W)^{+};\mathbb{Q} )\cong H_{\operatorname{Lie}}^{*}(H_{c}^{*}(M;\operatorname{Lie}(\Sigma^{d-1} \mathbb{Q}^{w_{1}}[1])))(n).\] After appropriate dualisations and reindexings, this agrees with Knudsen's formula.

### Homological stability

Knudsen has explained [14, Section 5.3] how his formula implies (co)homological stability for the spaces \(C_{n}(M)\). Let us briefly review this from the point of view taken here.
There is a canonical element \([M]\in\widetilde{H}_{d}(M^{+};\mathbb{Q}^{w_{1}})\), a cycle representing which defines a map \[\Sigma^{2d}\mathbb{Q}[1]\overset{\sigma}{\longrightarrow}C_{*,*}(\mathbf{ Com}(W^{+}[1]);\mathbb{Q})\longrightarrow C_{*,*}(\mathbf{C}(M;W);\mathbb{Q}).\] Multiplication by this element defines a map \[(\sigma\cdot-)_{*}:\widetilde{H}_{n-1,2d(n-1)-i}(\mathbf{C}(M;W);\mathbb{Q}) \longrightarrow\widetilde{H}_{n,2dn-i}(\mathbf{C}(M;W);\mathbb{Q})\] which under Poincare duality gives a map \(H^{i}(C_{n-1}(M);\mathbb{Q})\to H^{i}(C_{n}(M);\mathbb{Q})\); this can be checked to be the transfer map which sums over all ways of forgetting one of the \(n\) points, see [14, Section 5.2][21, Section 2.6]. Writing \(C_{*,*}(\mathbf{C}(M;W);\mathbb{Q})/\sigma\) for the mapping cone of left multiplication by \(\sigma\), the discussion above shows that its homology is calculated by a complex \[\left(S^{*}\big{(}\Sigma^{d}\frac{\widetilde{H}_{*}(M^{+};\mathbb{Q}^{w_{1}}) }{\langle[M]\rangle}[1]\oplus\begin{cases}\Sigma^{2d+1}\widetilde{H}_{*}(M^{+} ;\mathbb{Q})&d\text{ even}\\ 0&d\text{ odd}\end{cases}[2]\big{)},\partial\right).\] As \(M\) is connected, if we assume that \(d\geq 3\) then the bigraded vector spaces \(\Sigma^{d}\frac{\widetilde{H}_{*}(M^{+};\mathbb{Q}^{w_{1}})}{\langle[M] \rangle}[1]\) and \(\Sigma^{2d+1}\widetilde{H}_{*}(M^{+};\mathbb{Q})[2]\) both vanish in bidegrees \((n,2dn-i)\) with \(2dn-i>(2d-1)n\), and hence so does the free graded-commutative algebra on them. This translates to \(H^{i}(C_{n-1}(M);\mathbb{Q})\to H^{i}(C_{n}(M);\mathbb{Q})\) being surjective for \(i<n\) and an isomorphism for \(i<n-1\). For \(d=2\) the same considerations give surjectivity for \(i<\frac{1}{2}n\) and so on. These ranges can be improved by closer inspection of the structure of (2.1). 
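As a quick sanity check of the formula and the stability ranges above — my own illustration, not taken from the text — take \(M=\mathbb{R}^{2}\), so \(d=2\) and \(M^{+}=S^{2}\). All cup products on \(H^{*}_{c}(\mathbb{R}^{2};\mathbb{Q})\) vanish for degree reasons, so the differential in (2.1) is zero and the complex is the free graded-commutative algebra on a polynomial generator of (weight, degree) \((1,4)\) and an exterior generator of (weight, degree) \((2,7)\). Reading off \(H^{4n-*}\) recovers the classical answer that \(H^{*}(C_{n}(\mathbb{R}^{2});\mathbb{Q})\) is \(\mathbb{Q}\) in degrees \(0\) and \(1\) for \(n\geq 2\), so the stabilisation maps are isomorphisms from the start:

```python
# Bookkeeping for the complex (2.1) with M = R^2 (d = 2), where the
# differential vanishes: a free bigraded-commutative algebra on a
# polynomial generator a of (weight, degree) = (1, 4) and an exterior
# generator b of (weight, degree) = (2, 7).

def config_space_cohomology(n):
    """dict {k: dim H^k(C_n(R^2); Q)}, using Poincare duality k = 4n - degree."""
    dims = {}
    for j in (0, 1):            # exponent of the exterior generator b
        i = n - 2 * j           # weight constraint: i + 2j = n
        if i < 0:
            continue
        k = 4 * n - (4 * i + 7 * j)
        dims[k] = dims.get(k, 0) + 1
    return dims

for n in range(5):
    print(n, config_space_cohomology(n))  # n >= 2 gives {0: 1, 1: 1}
```

For \(n\geq 2\) this matches Arnold's computation of the rational cohomology of braid groups.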
### The action of automorphisms on unordered configurations Using Knudsen's formula it is possible to mislead yourself into thinking that homeomorphisms of \(M\) (or indeed pointed homotopy self-equivalences of \(M^{+}\)) act on \(H_{*}(C_{n}(M);\mathbb{Q})\) via their action on \(H_{*}(M;\mathbb{Q})\): in other words, that such maps which act trivially on the homology of \(M\) also act trivially on the homology of \(C_{n}(M)\). This is not true: in the case of surfaces see Bianchi [20, Section 7], Looijenga [15], and the complete analysis given by Stavrou [21]. From the point of view taken here this phenomenon can be explained as follows. For simplicity suppose that \(M\) is orientable, and first suppose that it is odd-dimensional. Then \(H^{*}(C_{n}(M);\mathbb{Q})\cong\widetilde{H}_{2dn-*}(C_{n}(M;M\times\mathbb{ R}^{d})^{+};\mathbb{Q})\) and the analysis of Section 2.2 applied to \(\mathbf{C}(M;M\times\mathbb{R}^{d})\) shows that \(\mathbf{Com}(S^{d}\wedge M^{+}[1])\to\mathbf{C}(M;M\times\mathbb{R}^{d})\) is a rational homology isomorphism. So we find: **Theorem 2.1**.: _If \(M\) is orientable and odd-dimensional, then a pointed homotopy self-equivalence of \(M^{+}\) which acts trivially on \(\widetilde{H}_{*}(M^{+};\mathbb{Q})\) also acts trivially on \(H^{*}(C_{n}(M);\mathbb{Q})\). _ The even-dimensional case is more interesting. As \(M\) is assumed orientable, in this case the twisting by \(W\) can be dispensed with. It is technically convenient here--for reasons of symmetric monoidality--to work in the category of simplicial \(\mathbb{Q}\)-modules rather than chain complexes. For a space \(X\) let us abbreviate \(\mathbb{Q}[X]:=\mathbb{Q}[\operatorname{Sing}_{\bullet}(X)]\), and if it is based then let \(\widetilde{\mathbb{Q}}[X]=\mathbb{Q}[X]/\mathbb{Q}[*]\). 
The discussion in the previous section, ignoring the formality step and translated to simplicial \(\mathbb{Q}\)-modules, shows that given the simplicial module \(\widetilde{\mathbb{Q}}[M^{+}]\) and the map \(\delta:\widetilde{\mathbb{Q}}[M^{+}]\to\big{[}\widetilde{\mathbb{Q}}[M^{+}]^ {\otimes 2}\big{]}_{\mathfrak{S}_{2}}\) induced by the diagonal \(M^{+}\to M^{+}\wedge M^{+}\), we may form the two-sided bar construction \[B(S^{*}(\widetilde{\mathbb{Q}}[M^{+}][1]),S^{*}(\widetilde{\mathbb{Q}}[M^{+} ][2]),\mathbb{Q}[0])\] whose homotopy groups are identified with the groups \(\widetilde{H}_{*}(C_{n}(M)^{+})\). The bar construction may be acted upon by the group-like simplicial monoid of homotopy automorphisms of the data \[\mathcal{Q}:=(\widetilde{\mathbb{Q}}[M^{+}],\delta:\widetilde{\mathbb{Q}}[M^{+ }]\to\big{[}\widetilde{\mathbb{Q}}[M^{+}]^{\otimes 2}\big{]}_{\mathfrak{S}_{2}}).\] This simplicial monoid has \(p\)-simplices given by--using the tensoring \(\odot\) of simplicial modules over simplicial sets--pairs of an equivalence \(\phi:\Delta^{p}\odot\widetilde{\mathbb{Q}}[M^{+}]\to\widetilde{\mathbb{Q}}[M^{ +}]\) and a homotopy \(h\) between the two maps \(\Delta^{p}\odot\widetilde{\mathbb{Q}}[M^{+}]\to\big{[}\widetilde{\mathbb{Q}}[M^{+}]^{\otimes 2}\big{]}_{\mathfrak{S}_{2}}\) obtained from \(\phi\) and \(\delta\); the simplicial structure is given by the evident functoriality in \(\Delta^{p}\). This is neither the simplicial monoid of those automorphisms which preserve \(\delta\) strictly, nor is it the simplicial monoid of those automorphisms which preserve \(\delta\) up to homotopy: rather, a choice of homotopy is built in. This has the effect that the forgetful map \[\operatorname{Aut}(\mathcal{Q})\longrightarrow\operatorname{Aut}(\widetilde{ \mathbb{Q}}[M^{+}])\] is easily seen to be a Kan fibration, with fibre over the identity given by the group-like simplicial monoid of self-homotopies of the map \(\delta\).
On homotopy groups this gives an exact sequence and so an extension \[1\to\operatorname{Hom}(\Sigma\widetilde{H}_{*}(M^{+}),S^{2}(\widetilde{H}_{*} (M^{+}))/\Delta_{*})\to\pi_{0}\operatorname{Aut}(\mathcal{Q})\to\operatorname {Aut}(\widetilde{H}_{*}(M^{+}),\Delta_{*})\to 1.\] The simplicial monoid of proper homotopy self-equivalences of \(M\) acts via \(\operatorname{Aut}(\mathcal{Q})\), by letting \(\phi\) be the induced map on \((\mathbb{Q}\)-linearised) singular simplices, and \(h\) be the trivial homotopy, because such \(\phi\) literally commute with the diagonal map \(\delta\). This implies the following. We continue to assume that \(M\) is even-dimensional and orientable. Let \(G\) denote the group of homotopy classes of pointed homotopy self-equivalences of \(M^{+}\) which act as the identity on \(\widetilde{H}_{*}(M^{+};\mathbb{Q})\). **Theorem 2.2**.: _If \(M\) is orientable and even-dimensional, then \(G\) acts on \(H^{*}(C_{n}(M);\mathbb{Q})\) via \(\operatorname{Hom}(\Sigma\widetilde{H}_{*}(M^{+};\mathbb{Q}),S^{2}(\widetilde {H}_{*}(M^{+};\mathbb{Q}))/\Delta_{*})\). \(\square\)_ _Example 2.3_.: When \(M\) is a punctured surface one has \(\widetilde{H}_{*}(M^{+})=\Sigma H_{1}(M)\oplus\Sigma^{2}\mathbb{Q}\) so the map \(\Delta_{*}:\widetilde{H}_{*}(M^{+})\to S^{2}(\widetilde{H}_{*}(M^{+}))\) has the form \[\Sigma H_{1}(M)\oplus\Sigma^{2}\mathbb{Q}\longrightarrow\Sigma^{2}\Lambda^{2 }(H_{1}(M))\oplus\Sigma^{3}H_{1}(M)\oplus\Sigma^{4}\mathbb{Q},\] which in grading \(2\) is the inclusion of the symplectic form \(\omega\in\Lambda^{2}(H_{1}(M))\) and is zero otherwise. Thus the above is \(\operatorname{Hom}(H_{1}(M),\Lambda^{2}(H_{1}(M))/\langle\omega\rangle)\oplus H _{1}(M)\). Using Poincare duality and \(\Lambda^{2}(H_{1}(M))\cong\mathbb{Q}\{\omega\}\oplus\Lambda^{2}(H_{1}(M))/ \langle\omega\rangle\), this can be identified with \(\operatorname{Hom}(H_{1}(M),\Lambda^{2}(H_{1}(M)))\). This is the target of the Johnson homomorphism, cf. [10]. 
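On dimensions the final identification in the example checks out: with \(H_{1}(M)\) of rank \(2g\), splitting \(\Lambda^{2}(H_{1}(M))\cong\mathbb{Q}\{\omega\}\oplus\Lambda^{2}(H_{1}(M))/\langle\omega\rangle\) costs exactly the \(2g\) dimensions that the extra \(H_{1}(M)\) summand (identified with \(\operatorname{Hom}(H_{1}(M),\mathbb{Q}\{\omega\})\) via Poincare duality) restores. A tiny check — my own, not from the text:

```python
from math import comb

def dims(g):
    """Compare dim Hom(H1, L2/<w>) + dim H1 with dim Hom(H1, L2), genus g >= 1."""
    h1 = 2 * g                  # rank of H_1 of a genus-g punctured surface
    lam2 = comb(h1, 2)          # dim Lambda^2(H_1)
    lhs = h1 * (lam2 - 1) + h1  # Hom(H1, Lambda^2/<omega>) (+) H1
    rhs = h1 * lam2             # Hom(H1, Lambda^2(H1))
    return lhs, rhs

print([dims(g) for g in range(1, 5)])  # each pair has equal entries
```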
_Remark 2.4_.: The results of this section should also follow from [10, Theorem 1.2] and some rational homotopy theory.

## 3. Proofs

Recall that \(X\in\operatorname{\mathsf{Top}}_{*}\) is _well-based_ if the basepoint map \(i:*\to X\) is a closed cofibration: under this condition \(X\wedge-\) preserves weak equivalences between well-based spaces, and preserves closed cofibrations. Let us say that an \(\mathbb{N}\)-graded based space \(Y\) is well-based if \(Y(n)\) is well-based for each \(n\in\mathbb{N}\). Let us write \(\mathbf{R}:=\operatorname{\mathbf{Com}}(L^{+}[1])\) and \(\mathbf{S}:=\operatorname{\mathbf{Com}}([(L\oplus L)^{+}]_{\mathfrak{S}_{2}} [2])\) to ease notation, so \(\Delta:\mathbf{S}\to\mathbf{R}\) makes \(\mathbf{R}\) into a \(\mathbf{S}\)-module. **Lemma 3.1**.: \(\mathbf{S}\) _and \(\mathbf{R}\) are well-based. The subspace of \([(L^{+})^{\wedge p}]_{\mathfrak{S}_{p}}\) of those tuples which do not have distinct \(M\) coordinates is well-based, and this inclusion is a closed cofibration._ Proof.: Recall that \(M\) is the interior of a compact manifold with boundary \(\overline{M}\). This admits a collar, showing that \(i:M\to\overline{M}\) admits a homotopy inverse, and so the vector bundle \(L\to M\) extends to a vector bundle over \(\overline{M}\), which we also call \(L\). Furthermore, choosing an inner product on this bundle we can form the closed disc bundle \(D(L)\to\overline{M}\), and consider \(L\) as lying inside it as the open disc bundle. Now \(D(L)\) is a manifold with boundary \(\partial D(L)=S(L)\cup D(L)|_{\partial\overline{M}}\), and \(L^{+}=D(L)/\partial D(L)\). Observe that \((\overline{M},\partial\overline{M})\) is a compact manifold pair so (is an ENR pair and hence) can be expressed as a retract of a pair \((|X_{\bullet}|,|\partial X_{\bullet}|)\) of the geometric realisations of a simplicial set and a subset. We may pull \(L\) back to \(|X_{\bullet}|\) using the retraction; let us call this \(L_{X}\).
Now \(D(L_{X})/S(L_{X})\cup D(L_{X})|_{|\partial X_{\bullet}|}\) can be given an evident cell-structure (by induction over the relative cells of \(|\partial X_{\bullet}|\to|X_{\bullet}|\)), and \(L^{+}=D(L)/\partial D(L)\) is a retract of it, so is well-based. More generally, for the exterior direct sum \(L_{X}^{\boxplus p}\to|X_{\bullet}^{p}|\) and writing \(\partial|X_{\bullet}^{p}|\) for the subcomplex where some factor lies in \(\partial X_{\bullet}\), there is a cell structure on \(D(L_{X}^{\boxplus p})/S(L_{X}^{\boxplus p})\cup D(L_{X}^{\boxplus p})|_{\partial |X_{\bullet}^{p}|}\) for which the group \(\mathfrak{S}_{p}\) acts cellularly, and so \([D(L_{X}^{\boxplus p})/S(L_{X}^{\boxplus p})\cup D(L_{X}^{\boxplus p})|_{\partial |X_{\bullet}^{p}|}]_{\mathfrak{S}_{p}}\) is a cell complex of which \([(L^{+})^{\wedge p}]_{\mathfrak{S}_{p}}\) is a retract, and so is well-based. This shows that \(\mathbf{R}\) is well-based, and similar reasoning shows \(\mathbf{S}\) is. For the second statement, \[inc:F:=\text{fat diagonal of }|X_{\bullet}|^{p}=|\text{fat diagonal of }X_{\bullet}^{p}|\longrightarrow|X_{\bullet}^{p}|=|X_{\bullet}|^{p}\] is the inclusion of a \(\mathfrak{S}_{p}\)-CW-subcomplex, and so has a \(\mathfrak{S}_{p}\)-equivariant open neighbourhood \(U\) which equivariantly deformation retracts to it. This may be chosen to preserve the subcomplexes where some factor lies in \(|\partial X_{\bullet}|\). Thus it lifts to a \(\mathfrak{S}_{p}\)-equivariant deformation retraction of an open neighbourhood of \(L_{X}^{\boxplus p}|_{F}\to L_{X}^{\boxplus p}\), and descends to the quotient by the subcomplexes where some factor lies in \(|\partial X_{\bullet}|\). As it is equivariant, it descends further to the \(\mathfrak{S}_{p}\)-quotient. That is, it proves the claim for \((\overline{M},\partial\overline{M},L)\) replaced by \((|X_{\bullet}|,|\partial X_{\bullet}|,L_{X})\); as the former data is a retract of the latter, the claim follows. 
**Lemma 3.2**.: \(\mathbf{R}\) _is a flat \(\mathbf{S}\)-module, in the sense that \(\mathbf{R}\otimes_{\mathbf{S}}-\) preserves weak equivalences between left \(\mathbf{S}\)-modules whose underlying objects are well-based._ Proof.: Recall that \(\mathbf{R}(n)=[(L^{+})^{\wedge n}]_{\mathfrak{S}_{n}}\). Define a filtration of \(\mathbf{R}\) by \(F_{0}\mathbf{R}=\mathbf{S}\) and \[F_{p}\mathbf{R}(n):=F_{p-1}\mathbf{R}(n)\cup\operatorname{Im}\bigl{(}(L^{+})^ {\wedge p}\wedge((L\oplus L)^{+})^{\wedge(n-p)/2}\to\mathbf{R}(n)\bigr{)},\] where the latter term is only taken when it makes sense: for \(n-p\) even. This is a filtration by right \(\mathbf{S}\)-modules. One checks that the diagram \[\begin{CD}F_{p-2}\mathbf{R}(p)[p]\otimes\mathbf{S}@>{}>{}>F_{p-1}\mathbf{R}\\ @V{}V{}V@V{}V{}V\\ \mathbf{R}(p)[p]\otimes\mathbf{S}@>{}>{}>F_{p}\mathbf{R}\end{CD}\] is a pushout (in \(\mathsf{Top}_{*}^{\mathbb{N}}\) and so in right \(\mathbf{S}\)-modules), where the horizontal maps are induced by the \(\mathbf{S}\)-module structure and the adjoints of the map \(inc:F_{p-2}\mathbf{R}(p)\to F_{p-1}\mathbf{R}(p)\), and the map \(id:\mathbf{R}(p)\to F_{p}\mathbf{R}(p)\). We prove by induction on \(p\) that \(F_{p}\mathbf{R}\) is a flat \(\mathbf{S}\)-module in the indicated sense. As \(F_{0}\mathbf{R}=\mathbf{S}\) these properties hold for \(p=0\). For \(\mathbf{M}\) a left \(\mathbf{S}\)-module whose underlying object is well-based, applying \(-\otimes_{\mathbf{S}}\mathbf{M}\) to the square above gives a pushout square (3.1) \[\begin{CD}F_{p-2}\mathbf{R}(p)[p]\otimes\mathbf{M}@>{}>{}>F_{p-1}\mathbf{R}\otimes_{\mathbf{S}}\mathbf{M}\\ @V{}V{}V@V{}V{}V\\ \mathbf{R}(p)[p]\otimes\mathbf{M}@>{}>{}>F_{p}\mathbf{R}\otimes_{\mathbf{S}}\mathbf{M}\end{CD}\] The map \(F_{p-2}\mathbf{R}(p)\to\mathbf{R}(p)\) is the inclusion of the subspace of those \(p\)-tuples of points in \(M\) labelled by \(L\) which do not have distinct \(M\) coordinates, so is a closed cofibration from a well-based space by the second part of Lemma 3.1. As \(\mathbf{M}\) is assumed well-based, the left-hand vertical map in (3.1) is a closed cofibration in each grading, and so this square is also a homotopy pushout.
A weak equivalence \(f:\mathbf{M}\xrightarrow{\sim}\mathbf{M}^{\prime}\) then induces a map of homotopy pushout squares which is a weak equivalence on all but the bottom right corner, by inductive assumption, so also induces a weak equivalence on this corner. Thus each \(F_{p}\mathbf{R}\) is flat in the indicated sense, so \(\mathbf{R}\) is too because \(F_{p}\mathbf{R}\to\mathbf{R}\) is an isomorphism when evaluated on \(n<p\), so \(F_{p}\mathbf{R}\otimes_{\mathbf{S}}\mathbf{M}\to\mathbf{R}\otimes_{\mathbf{S}} \mathbf{M}\) is too. **Lemma 3.3**.: _The induced map \(\mathbf{R}\otimes_{\mathbf{S}}S^{0}[0]\to\mathbf{C}(M;L)\) is an isomorphism._ Proof.: By definition of the relative tensor product there is a coequaliser diagram \[\mathbf{R}\otimes\mathbf{S}\overset{\alpha}{\underset{\beta}{\rightrightarrows}}\mathbf{R}\longrightarrow\mathbf{R}\otimes_{\mathbf{S}}S^{0}[0]\] in \(\mathsf{Top}_{*}^{\mathbb{N}}\), where \(\alpha\) is given by the \(\mathbf{S}\)-module structure on \(\mathbf{R}\), and \(\beta\) is induced by the augmentation \(\epsilon:\mathbf{S}\to S^{0}[0]\). The image of \(\mathbf{R}\otimes\ker(\epsilon)(n)\to\mathbf{R}(n)=[(L^{+})^{\wedge n}]_{ \mathfrak{S}_{n}}\) is precisely the image of \((L^{+})^{\wedge n-2}\wedge(L\oplus L)^{+}\to[(L^{+})^{\wedge n}]_{\mathfrak{S} _{n}}\), whose cofibre is by definition \(\mathbf{C}(M;L)\). Proof of Theorem 1.1.: Apply Lemma 3.2 to the weak equivalence \(B(\mathbf{S},\mathbf{S},S^{0}[0])\xrightarrow{\sim}S^{0}[0]\), giving an equivalence \(B(\mathbf{R},\mathbf{S},S^{0}[0])\xrightarrow{\sim}\mathbf{R}\otimes_{ \mathbf{S}}S^{0}[0]\), and the latter is isomorphic to \(\mathbf{C}(M;L)\) by Lemma 3.3. _Remark 3.4_.: It is possible to fool oneself into thinking that the above argument can be adapted to the case of ordered configuration spaces. It cannot, because the statement (in the guise of the equivalence (1.1), for example) is false in this case.
One can verify this directly in the case \(M=*\) with trivial \(0\)-dimensional Euclidean bundle, in grading \(3\).

## Appendix A Homological densities by Quoc P. Ho and Oscar Randal-Williams

### Spaces of 0-cycles

It is easy to generalise Theorem 1.1 to the following variant of configuration spaces, called "spaces of 0-cycles" by Farb-Wolfson-Wood [11]. Let \(m,k\geq 1\), and for \(n_{1},n_{2},\ldots,n_{m}\in\mathbb{N}\) let \[Z^{k}_{n_{1},\ldots,n_{m}}(M)\subset\operatorname{Sym}_{n_{1},\ldots,n_{m}}(M ):=[M^{n_{1}}]_{\mathfrak{S}_{n_{1}}}\times[M^{n_{2}}]_{\mathfrak{S}_{n_{2}}} \times\cdots\times[M^{n_{m}}]_{\mathfrak{S}_{n_{m}}}\] be the open subspace of those \((\{x_{1}^{1},\ldots,x_{n_{1}}^{1}\},\{x_{1}^{2},\ldots,x_{n_{2}}^{2}\},\ldots, \{x_{1}^{m},\ldots,x_{n_{m}}^{m}\})\) such that no \(x_{j}^{i}\) has multiplicity \(\geq k\) in all of these \(m\) multisets. That is, \(Z^{k}_{n_{1},\ldots,n_{m}}(M)\) is the configuration space of particles of \(m\) different colours, \(n_{i}\) having colour \(i\), which may collide except that no point of \(M\) may carry \(\geq k\) points of every colour. The \(1\)-point compactifications \(Z^{k}_{n_{1},\ldots,n_{m}}(M)^{+}\) again have a composition product \[Z^{k}_{n_{1},\ldots,n_{m}}(M)^{+}\wedge Z^{k}_{n_{1}^{\prime},\ldots,n_{m}^{ \prime}}(M)^{+}\longrightarrow Z^{k}_{n_{1}+n_{1}^{\prime},\ldots,n_{m}+n_{m}^ {\prime}}(M)^{+},\] giving a commutative monoid \(\mathbf{Z}^{m,k}(M)\) in \(\mathbb{N}^{m}\)-graded pointed spaces. Just as before, we can introduce labels in a vector bundle \(L\to M\), giving \(Z^{k}_{n_{1},\ldots,n_{m}}(M;L)\) and \(\mathbf{Z}^{m,k}(M;L)\).
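The multiplicity condition is easy to misread, so here is a small illustrative predicate — my own paraphrase, with points of \(M\) represented by hashable labels — deciding membership in \(Z^{k}_{n_{1},\ldots,n_{m}}(M)\): a tuple of \(m\) multisets is excluded exactly when some point occurs with multiplicity \(\geq k\) in every one of them.

```python
from collections import Counter

def in_Z(multisets, k):
    """True iff no label has multiplicity >= k in ALL of the m multisets."""
    counts = [Counter(ms) for ms in multisets]
    points = set().union(*(set(c) for c in counts))
    return not any(all(c[x] >= k for c in counts) for x in points)

# m = 2 colours, k = 2: a point may carry two particles of one colour,
# but not two of each colour.
print(in_Z([["p", "p", "q"], ["q", "r"]], 2))       # True
print(in_Z([["p", "p", "q"], ["p", "p", "r"]], 2))  # False: "p" has mult >= 2 in both
```

Note that for \(k=1\) the predicate forces the supports of the \(m\) multisets to be disjoint.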
Writing \(1_{i}=(0,\dots,0,1,0\dots,0)\in\mathbb{N}^{m}\) with the \(1\) in the \(i\)th position, there is a pushout square (A.1) \[\begin{CD}\mathbf{Com}([(L^{\oplus mk})^{+}]_{\mathfrak{S}_{k}^{m}}[k,\dots,k])@>{\epsilon}>{}>S^{0}[0,\dots,0]\\ @V{}V{\Delta}V@V{}V{}V\\ \mathbf{Com}(\bigvee_{i=1}^{m}L^{+}[1_{i}])@>{}>{}>\mathbf{Z}^{m,k}(M;L)\end{CD}\] of unital commutative monoids in \(\mathsf{Top}_{*}^{\mathbb{N}^{m}}\), where \(\Delta\) is now induced by the inclusion \([(L^{\oplus mk})^{+}]_{\mathfrak{S}_{k}^{m}}\to[(L^{+})^{\wedge k}]_{\mathfrak{ S}_{k}}\wedge\dots\wedge[(L^{+})^{\wedge k}]_{\mathfrak{S}_{k}}=\mathbf{Com}( \bigvee_{i=1}^{m}L^{+}[1_{i}])(k,\dots,k)\). The same argument as Theorem 1.1 shows that there is an equivalence (A.2) \[\mathbf{Com}(\vee_{i=1}^{m}L^{+}[1_{i}])\otimes_{\mathbf{Com}([(L^{\oplus mk}) ^{+}]_{\mathfrak{S}_{k}^{m}}[k,\dots,k])}^{\mathbb{L}}S^{0}[0,\dots,0] \stackrel{{\sim}}{{\longrightarrow}}\mathbf{Z}^{m,k}(M;L).\]

### Revisiting homological densities

This can be used to revisit the work of Farb-Wolfson-Wood [11] and Ho [12] on homological densities, and in particular to explain coincidences of homological densities at the level of topology rather than algebra, as proposed in [12, 1.5.1]. The spaces \(Z^{k}_{n_{1},\dots,n_{m}}(M;L)\) are \(\mathbb{Q}\)-homology manifolds, being open subspaces of a product of coarse moduli spaces \([L^{n}]_{\mathfrak{S}_{n}}\) of the orbifolds \(L^{n}/\mathfrak{S}_{n}\). As before, we suppose \(M\) is \(d\)-dimensional and take \(L=W\) to be given by the sum of the orientation line of \(M\) plus \((d-1)\) trivial lines: then the \(Z^{k}_{n_{1},\dots,n_{m}}(M;W)\) are orientable \(\mathbb{Q}\)-homology manifolds, of dimension \(2d\cdot\sum n_{i}\).
Again they are vector bundles over \(Z^{k}_{n_{1},\dots,n_{m}}(M)\), so Poincare duality gives \[H^{*}(Z^{k}_{n_{1},\dots,n_{m}}(M))\cong H^{*}(Z^{k}_{n_{1},\dots,n_{m}}(M;W)) \cong\widetilde{H}_{2d\sum n_{i}-*}(Z^{k}_{n_{1},\dots,n_{m}}(M;W)^{+}).\] On the other hand, the bar construction formula above together with the argument of Section 2.2 identifies the multigraded vector space \(H_{*,*}(\mathbf{Z}^{m,k}(M;W))\) with \[\mathrm{Tor}_{*}^{S^{*}(\widetilde{H}_{*}([(W^{\oplus mk})^{+}]_{\mathfrak{S} _{k}^{m}})[k,\dots,k])}(S^{*}(\bigoplus_{i=1}^{m}\widetilde{H}_{*}(W^{+})[1_{ i}]),\mathbb{Q}[0,\dots,0]).\]

#### A.2.1. Odd-dimensional manifolds

As in Section 2.2 we have \(\widetilde{H}_{*}([(W^{\oplus mk})^{+}]_{\mathfrak{S}_{k}^{m}})\cong[\Sigma^ {dmk}\widetilde{H}_{*}(M^{+})]_{\mathfrak{S}_{k}^{m}}\) by the Thom isomorphism. If \(d\) is odd then the permutation group \(\mathfrak{S}_{k}^{m}\) acts on the Thom class via \(\mathfrak{S}_{k}^{m}\leq\mathfrak{S}_{mk}\stackrel{{ sign}}{{\to}}\mathbb{Z}^{\times}\), so acts nontrivially if \(k\geq 2\) and trivially if \(k=1\). If \(k\geq 2\) this means that \(\widetilde{H}_{*}([(W^{\oplus mk})^{+}]_{\mathfrak{S}_{k}^{m}})=0\), showing that \[H_{*,*}(\mathbf{Com}(\vee_{i=1}^{m}W^{+}[1_{i}]))\stackrel{{ \sim}}{{\longrightarrow}}H_{*,*}(\mathbf{Z}^{m,k}(M;W))\] in this case. Using Poincare duality on both sides gives [11, Theorem 1.4], except that that theorem is erroneously claimed for all \(k\geq 1\). We will return to the case \(k=1\) below.

#### A.2.2. Even-dimensional manifolds
If \(d\) is even then \(\mathfrak{S}_{k}^{m}\) acts trivially on \(\Sigma^{dmk}\widetilde{H}_{*}(M^{+})\), and using the Thom isomorphism to identify \(\widetilde{H}_{*}(W^{+})\cong\Sigma^{d}\widetilde{H}_{*}(M^{+})\) too, the Koszul complex for computing the Tor-groups above is \[(S^{*}\big{(}\bigoplus_{i=1}^{m}\Sigma^{d}\widetilde{H}_{*}(M^{+};\mathbb{Q}^{ w_{1}})[1_{i}]\oplus\Sigma^{dmk+1}\widetilde{H}_{*}(M^{+};(\mathbb{Q}^{w_{1}}) ^{\otimes mk})[k,\dots,k]\big{)},\partial).\] The differential \(\partial\) is induced by the map \[\Sigma^{dmk}\widetilde{H}_{*}(M^{+};(\mathbb{Q}^{w_{1}})^{\otimes mk})\to S^{ k}(\Sigma^{d}\widetilde{H}_{*}(M^{+};\mathbb{Q}^{w_{1}}))\otimes\dots\otimes S^{ k}(\Sigma^{d}\widetilde{H}_{*}(M^{+};\mathbb{Q}^{w_{1}}))\] obtained by linearly dualising the cup product map (A.3) \[H^{*}_{c}(M;\mathbb{Q}^{w_{1}})^{\otimes mk}\longrightarrow H^{*}_{c}(M;(\mathbb{Q} ^{w_{1}})^{\otimes mk}),\] and so is trivial if (and only if) all \(mk\)-fold cup products of (\(w_{1}\)-twisted) compactly-supported cohomology classes on \(M\) vanish. When this cup product map is trivial, so \(\partial\) is trivial, the above just gives a formula for \(H_{*,*}(\mathbf{Z}^{m,k}(M;W))\). Using Poincare duality, and reindexing, to express this in terms of \(H^{*}(Z^{k}_{n_{1},\ldots,n_{m}}(M))\) and \(H^{*}(\mathrm{Sym}_{n_{1},\ldots,n_{m}}(M))\) we obtain an identity of multigraded vector spaces \[H^{*}(Z^{k}_{\bullet}(M))\cong H^{*}(\mathrm{Sym}_{\bullet}(M))\otimes S^{*}( \Sigma^{d(mk-1)-1}H^{*}(M;(\mathbb{Q}^{w_{1}})^{\otimes mk-1})[k,\ldots,k]).\] There are stabilisation maps \(\sigma_{i}:H^{*}(Z^{k}_{n_{1},\ldots,n_{m}}(M))\to H^{*}(Z^{k}_{n_{1}, \ldots,n_{i}+1,\ldots,n_{m}}(M))\) analogous to those constructed in Section 2.3, similarly for \(H^{*}(\mathrm{Sym}_{n_{1},\ldots,n_{m}}(M))\), and both stabilise as \(n_{j}\to\infty\), just as in Section 2.3: this recovers [19, Theorem 1.7].
We may take the colimit of all these stabilisations to obtain \[H^{*}(Z^{k}_{\infty,\ldots,\infty}(M))\cong H^{*}(\mathrm{Sym}_{\infty,\ldots,\infty}(M))\otimes S^{*}(\Sigma^{d(mk-1)-1}H^{*}(M;(\mathbb{Q}^{w_{1}})^{ \otimes mk-1})).\] Writing \(P_{Z^{m,k}}(t)\) and \(P_{Sym^{m}}(t)\) for the Poincare series of \(H^{*}(Z^{k}_{\infty,\ldots,\infty}(M))\) and \(H^{*}(\mathrm{Sym}_{\infty,\ldots,\infty}(M))\) respectively, this discussion identifies the _homological density_ \(P_{Z^{m,k}}(t)/P_{Sym^{m}}(t)\) with the Poincare series of \(S^{*}(\Sigma^{d(mk-1)-1}H^{*}(M;(\mathbb{Q}^{w_{1}})^{\otimes mk-1}))\). This visibly only depends on the product \(mk\), giving "coincidences between homological densities": this recovers [19, Theorem 1.2]; in fact it also recovers the stronger Theorem 3.6 of that paper.

#### A.2.3. Odd-dimensional manifolds, \(k=1\)

Just as in the even-dimensional case, if the cup product map (A.3) is zero then one gets an explicit description of \(H^{*}(Z_{\bullet}(M))\), and the homological density is given by the Poincare series of the graded vector space \(S^{*}(\Sigma^{d(m-1)-1}H^{*}(M;(\mathbb{Q}^{w_{1}})^{\otimes m-1}))\). It follows from Section A.2.1 that the homological density is \(1\) for \(k>1\), so for odd-dimensional manifolds it is _not true_ that the homological density depends only on \(mk\).

#### A.2.4. Euler characteristic

If the cup product map (A.3) is not zero, and either \(d\) is even or \(d\) is odd and \(k=1\), then there is instead a nontrivial differential on the multigraded vector space \[H^{*}(\mathrm{Sym}_{\bullet}(M))\otimes S^{*}(\Sigma^{d(mk-1)-1}H^{*}(M;( \mathbb{Q}^{w_{1}})^{\otimes mk-1})[k,\ldots,k]),\] of degree \((+1,0)\), whose homology is \(H^{*}(Z^{k}_{*,\ldots,*}(M))\). Then one would not expect \(\frac{P_{Z}(t)}{P_{Sym}(t)}\) to agree with the Poincare series of \(S^{*}(\Sigma^{d(mk-1)-1}H^{*}(M;(\mathbb{Q}^{w_{1}})^{\otimes mk-1}))\), and indeed it does not [19, Remark 1.6].
However, as Euler characteristic commutes with taking homology we have the identity \[\sum_{n_{1},\ldots,n_{m}\geq 0}\chi(Z^{k}_{n_{1},\ldots,n_{m}}(M))s_{1}^{n_{1}}\cdots s _{m}^{n_{m}}=\big{(}\prod_{i=1}^{m}(1-s_{i})\big{)}^{-\chi(M)}\cdot(1-(s_{1} \cdots s_{m})^{k})^{\chi(M,(\mathbb{Q}^{w_{1}})^{\otimes mk-1})}\] in \(\mathbb{Z}[[s_{1},\ldots,s_{m}]]\). The left-hand factor is \(\sum_{n_{1},\ldots,n_{m}\geq 0}\chi(\mathrm{Sym}_{n_{1},\ldots,n_{m}}(M))s_{1}^{n_ {1}}\cdots s_{m}^{n_{m}}\). This recovers [19, Theorem 1.9(1)].

### Spectral densities

The construction of homological densities can be promoted to the level of spectra, addressing [18, 1.5.1], as follows. Let us _assume that \(M\) is even-dimensional and orientable_: then we can dispense with twisting by the vector bundle \(W\to M\). We consider \(\mathbf{Z}^{m,k}(M)\) with its \(\mathbb{N}^{m}\)-grading reduced to an \(\mathbb{N}\)-grading via \(\operatorname{sum}:\mathbb{N}^{m}\to\mathbb{N}\). Collapsing the complement of a small neighbourhood of a point in \(M\) gives a map \(M^{+}\to S^{d}\), inducing a map of commutative monoids \[\operatorname{\mathbf{Com}}(\bigvee_{i=1}^{m}M^{+}[1])\longrightarrow \operatorname{\mathbf{Com}}(S^{d}[1]).\] If \(X\) is a left \(\operatorname{\mathbf{Com}}(S^{d}[1])\)-module, it is equipped with maps \(S^{d}\wedge X(n)\to X(n+1)\) and so we can define the spectrum \(\overline{X}:=\operatorname{hocolim}_{n\to\infty}S^{-nd}\wedge\Sigma^{\infty} X(n)\). Using these two constructions we may therefore form the spectrum \[\Delta^{m,k}:=\overline{\operatorname{\mathbf{Com}}(S^{d}[1])\otimes^{\mathbb{L}}_{ \operatorname{\mathbf{Com}}(\bigvee_{i=1}^{m}M^{+}[1])}\mathbf{Z}^{m,k}(M)}.\] By analogy with [10, Section 7.5] we propose \(\Delta^{m,k}\) as a spectral form of the stable density of \(Z^{k}_{n_{1},\dots,n_{m}}(M)\) in \(\operatorname{Sym}_{n_{1},\dots,n_{m}}(M)\). At the level of \(\mathbb{Q}\)-chains it recovers the construction from the proof of Theorem 7.5.1 of [10].
We can prove the spectral form of that theorem analogously: as \(\mathbb{N}\)-graded objects, there is an evident map from (A.1) to the analogous square for \(\mathbf{Z}^{1,mk}(M)\) which induces a map of spectra \(\Delta^{m,k}\to\Delta^{1,mk}\), and this is an equivalence by (A.2) as both are identified with \(\overline{\operatorname{\mathbf{Com}}(S^{d}[1])\otimes_{\operatorname{ \mathbf{Com}}(M^{+}[mk])}^{\mathbb{L}}S^{0}[0]}\). This may be simplified for \(mk\geq 2\) as follows. The map \(M^{+}\to[(S^{d})^{\wedge mk}]_{\mathfrak{S}_{mk}}\) with which the derived tensor product is formed factors over \((S^{d})^{\wedge mk}\) so is nullhomotopic when \(mk\geq 2\), and so \(\operatorname{\mathbf{Com}}(S^{d}[1])\otimes_{\operatorname{\mathbf{Com}}(M ^{+}[mk])}^{\mathbb{L}}S^{0}[0]\) is equivalent to \(\operatorname{\mathbf{Com}}(S^{d}[1])\otimes(S^{0}[0]\otimes_{\operatorname {\mathbf{Com}}(M^{+}[mk])}^{\mathbb{L}}S^{0}[0])\) as a left \(\operatorname{\mathbf{Com}}(S^{d}[1])\)-module. In this situation the \(\overline{(-)}\) construction gives \[\Delta^{m,k} \simeq\bigvee_{n\geq 0}S^{-nd}\wedge\Sigma^{\infty}(S^{0}[0] \otimes_{\operatorname{\mathbf{Com}}(M^{+}[mk])}^{\mathbb{L}}S^{0}[0])(n)\] \[\simeq\bigvee_{n\geq 0}S^{-nd}\wedge\Sigma^{\infty} \operatorname{\mathbf{Com}}(S^{1}\wedge M^{+}[mk])(n).\] ### Acknowledgements ORW supported by the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 756444).
2306.15639
Radiation of Optical Angular Momentum from a Dipole Source in a Magneto-birefringent disordered Environment
We investigate the radiation of optical angular momentum by a dipole gas under uniform magnetic field with an unpolarized source at its center. Conservation of angular momentum implies that the radiation of angular momentum results in a torque on both the source and the surrounding environment. Moreover, we study the spin and orbital contributions to the radiated angular momentum.
R. Le Fournis, B. A. Van Tiggelen
2023-06-27T17:30:58Z
http://arxiv.org/abs/2306.15639v2
Radiation of Optical Angular Momentum from a Dipole Source in a Magneto-birefringent disordered Environment

###### Abstract

We investigate the radiation of optical angular momentum by a dipole gas under uniform magnetic field with an unpolarized source at its center. Conservation of angular momentum implies a torque on both the source and the surrounding environment. We study the separate spin and orbital contributions to the radiated angular momentum.

## I Introduction

The emission of radiation from a dipole source, such as an atom or a quantum emitter, can be significantly affected by its environment. An important phenomenon in this context is the Purcell effect Purcell (1954). It refers to the modification of the radiative properties of a source when placed in close proximity to an environment with specific optical properties. Notably, it has been shown theoretically and experimentally that the angular momentum emitted by a dipole source is affected by its environment Purcell (1954). Previous work has shown that a dipole source at the center of a Mie sphere under a magnetic field can experience a torque that results in the emission of angular momentum into space [3]. In this work we calculate the angular momentum emitted by an unpolarized source placed at the center of a magneto-birefringent heterogeneous environment composed of small electric dipole scatterers. Due to the conservation of angular momentum, not only the source experiences a torque but also the environment. We study the decomposition of the radiated angular momentum into spin and orbital components. Our aim is to understand the influence of multiple scattering of light on both the radiated angular momentum and the torque on the medium.

## II Radiation of Optical Angular Momentum

We consider an unpolarized dipole source in the center of a spherical environment with radius \(R\) containing a gas of \(N\) point-like electric dipoles (Fig. 1).
The electric dipoles have polarizability [4]: \[\mathbf{\alpha}(\omega,\mathbf{B}_{0})=\frac{\alpha(0)\omega_{0}^{2}}{\omega_{0}^{2}-(\omega+\omega_{c}i\mathbf{\epsilon}\cdot\hat{\mathbf{B}}_{0})^{2}-i\gamma\omega} \tag{1}\] where \(\gamma\) is the radiative damping rate of the dipoles, \(\omega_{0}\) is the resonance frequency and \(\omega_{c}=eB_{0}/2mc_{0}\) is the cyclotron frequency; \(\alpha(0)\) is the static polarizability and can be related to the volume \(u\) of the dipoles as \(\alpha(0)=3u\)[5]. In a microscopic theory for a two-level atom, \(\hbar\omega_{c}\) represents the Zeeman splitting of the excited states due to the interaction of the atom with the magnetic field. The continuity equation for the angular momentum \(\mathbf{J}(r,t)\) confined in a sphere of radius \(r\) at time \(t\) around the source is given by [6]: \[\frac{d}{dt}\bigg{[}J_{k}^{\text{mech}}+J_{k}^{\text{rad}}\bigg{]}(r,t)=\frac{r^{3}}{8\pi}\epsilon_{klm}\,\text{Re}\int d\hat{\mathbf{r}}^{\prime}\,\hat{r}_{j}^{\prime}\hat{r}_{l}^{\prime}(E_{j}\bar{E}_{m}+B_{j}\bar{B}_{m})(r\hat{\mathbf{r}}^{\prime},t)\equiv M_{L,k} \tag{2}\] The right-hand side of this equation represents the optical angular momentum (AM) radiated into space. For \(r>R\) the mechanical AM is independent of the distance \(r\), and \(d\mathbf{J}^{\rm mech}/dt\) gives the torque on both source and environment.

Figure 1: Schematic picture of the geometry considered in this work. An unpolarized dipole source is located at the origin, surrounded by two shells: the first shell is empty to avoid scattering close to the source, while the second shell contains an on average homogeneous dipole gas. The external magnetic field is directed perpendicular to the plane. The electromagnetic Poynting vector exhibits an azimuthal component \(\mathbf{S}_{\phi}\) which is orthogonal to both the magnetic field and the radial vector. This component induces the emission of angular momentum into space. 
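To make the tensor structure of Eq. (1) concrete, the factor \(i\mathbf{\epsilon}\cdot\hat{\mathbf{B}}_{0}\) can be treated as a \(3\times 3\) matrix and the polarizability evaluated by matrix inversion. The sketch below is illustrative only; all parameter values are placeholders, not numbers from the paper.

```python
import numpy as np

def polarizability(omega, B_hat, alpha0=1.0, omega0=1.0, omega_c=1e-6, gamma=1e-2):
    """Tensor polarizability of Eq. (1) for a dipole in a magnetic field.

    The matrix (eps . B_hat)_ij = eps_ijk B_hat_k mixes the transverse
    polarizations and encodes the Zeeman splitting ~ omega_c.
    All default parameter values are illustrative placeholders.
    """
    eye = np.eye(3)
    eps = np.zeros((3, 3, 3))                        # Levi-Civita tensor
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    epsB = np.einsum('ijk,k->ij', eps, B_hat)
    M = omega * eye + 1j * omega_c * epsB            # omega + omega_c i eps.B_hat
    denom = omega0**2 * eye - M @ M - 1j * gamma * omega * eye
    return alpha0 * omega0**2 * np.linalg.inv(denom)
```

Setting \(\omega_{c}=0\) recovers the isotropic Lorentzian polarizability, while a nonzero \(\omega_{c}\) produces the off-diagonal, magneto-birefringent components.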
If the source has been stationary for a time much longer than it takes light to travel a distance equal to the radius \(r\) (\(t\gg r/c_{0}\)), the total electromagnetic AM inside the sphere of radius \(r\) stays constant, so that \(d\mathbf{J}^{\rm rad}(r,t)/dt=\mathbf{0}\). In that case, the leakage is independent of the distance \(r\). According to Eq. (2), matter experiences a torque equal to the emitted AM. To determine the amount of AM emitted, we can evaluate the right-hand side of Eq. (2) at a large distance from the environment (\(r\rightarrow\infty\)). The electric and magnetic fields in Eq. (2) are determined by solving the Helmholtz equation for a classical monochromatic source with frequency \(\omega=kc_{0}\)[7]. The solutions for the electric and magnetic fields can be expressed as, \[\mathbf{E}(\mathbf{r})=\mathbf{G}^{E}(\mathbf{r})\cdot(-4\pi i\omega/c_{0}^{2})\mathbf{j}_{S}\] \[\mathbf{B}(\mathbf{r})=\mathbf{G}^{B}(\mathbf{r})\cdot(-4\pi i\omega/c_{0}^{2})\mathbf{j}_{S} \tag{3}\] We define the Green's functions \(\mathbf{G}^{E}(\mathbf{r})\) and \(\mathbf{G}^{B}(\mathbf{r})\) according to \[\mathbf{G}^{E}(\mathbf{r})=\mathbf{G}_{0}(\mathbf{r})+\sum_{\alpha,\beta}\mathbf{G}_{0}(\mathbf{r}-\mathbf{R}_{\alpha})\cdot\mathbf{T}_{\alpha\beta}\cdot\mathbf{G}_{0}(\mathbf{R}_{\beta}) \tag{4}\] \[\mathbf{G}^{B}(\mathbf{r})=\mathbf{G}_{B}^{0}(\mathbf{r})+\sum_{\alpha,\beta}\mathbf{G}_{B}^{0}(\mathbf{r}-\mathbf{R}_{\alpha})\cdot\mathbf{T}_{\alpha\beta}\cdot\mathbf{G}_{0}(\mathbf{R}_{\beta}) \tag{5}\] To distinguish between the spatial components and the dipole indices of the tensors, we use Latin and Greek indices, respectively. The dot denotes the contraction of Latin indices associated with polarization. The Green's functions \(\mathbf{G}^{0}\) and \(\mathbf{G}_{B}^{0}\) describe how the electromagnetic waves generated by the electric dipole source propagate in free space and can be obtained from [6]. 
For a monochromatic electric dipole source at the origin with a dipole moment \(\mathbf{d}\), the source current is \(\mathbf{j}_{S}(\mathbf{r})=-i\omega\mathbf{d}\delta(\mathbf{r})\). The transmission matrix \(\mathbf{T}_{\alpha\beta}\), which appears in both Green's functions, is a \(3N\times 3N\) matrix. It describes how light emanating from a source and interacting with a gas of electric dipoles propagates through the environment. The factor 3 corresponds to the 3 polarization states of light. The T-matrix can be calculated as described in [5]. We will first put \(\mathbf{B}_{0}=\mathbf{0}\) and treat the external magnetic field perturbatively later on. Without external magnetic field, the T-matrix is given by: \[\mathbf{T}_{\alpha\beta}^{(0)}=t_{0}(\omega)\left[\mathbf{1}\delta_{\alpha\beta}-t_{0}(\omega)\mathbf{G}^{0}(\mathbf{R}_{\alpha}-\mathbf{R}_{\beta})(1-\delta_{\alpha\beta})\right]^{-1} \tag{6}\] where the inverse is that of the full \(3N\times 3N\) matrix. The superscript \((0)\) indicates that this is the T-matrix in the absence of an external magnetic field; \(t_{0}(\omega)\) is the T-matrix of a single electric dipole and is expressed near its resonance frequency \(\omega_{0}=c_{0}k_{0}\) as: \[\mathbf{t}_{0}(\omega)=-\mathbf{\alpha}(\omega,\mathbf{B}_{0}=\mathbf{0})k^{2}=\frac{3\pi}{k_{0}}\frac{1}{\delta+i/2}\mathbf{1} \tag{7}\] We introduced the detuning parameter \(\delta\)[5], which is the difference between the frequency \(\omega\) of the source and the resonance frequency \(\omega_{0}\) of the surrounding dipoles divided by the decay rate of the dipoles. The frequency \(\omega\) is assumed to be near the resonance and \(\gamma\ll\omega_{0}\), so that we set \(k=k_{0}\) in the Green's tensor. The T-matrix describes a non-absorbing medium, and in our numerical implementation we explicitly verified the optical theorem. The presence of an external magnetic field in the polarizability (1) modifies the T-matrix [8]. The change is small and can be treated perturbatively. 
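The linear-algebra content of Eq. (6) is a \(3N\times 3N\) matrix inversion. A minimal sketch, with the free-space Green tensor \(G^{0}\) left as a user-supplied callable (its explicit form is not reproduced here):

```python
import numpy as np

def t_matrix(positions, t0, G0):
    """Multiple-scattering T-matrix of Eq. (6) as a 3N x 3N array.

    positions : (N, 3) array of dipole positions R_alpha
    t0        : complex single-dipole t-matrix, Eq. (7)
    G0        : callable R -> (3, 3) complex free-space Green tensor
    """
    N = len(positions)
    M = np.eye(3 * N, dtype=complex)          # 1 * delta_{alpha beta}
    for a in range(N):
        for b in range(N):
            if a != b:                        # the (1 - delta_{alpha beta}) factor
                M[3*a:3*a+3, 3*b:3*b+3] = -t0 * G0(positions[a] - positions[b])
    return t0 * np.linalg.inv(M)              # T = t0 [1 - t0 G0]^{-1}
```

With the inter-dipole coupling switched off (\(G^{0}=0\)), every dipole scatters independently and the T-matrix collapses to \(t_{0}\mathbf{1}\), which is a convenient consistency check.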
To first order, the modified T-matrix is given by \[\mathbf{T}_{\alpha\beta}=\mathbf{T}_{\alpha\beta}^{(0)}+\mathbf{T}_{\alpha\gamma}^{(0)}\cdot\left[\delta_{\gamma\delta}\left(-i\frac{k_{0}}{6\pi}\mu\mathbf{\epsilon}\cdot\hat{\mathbf{B}}_{0}\right)\right]\cdot\mathbf{T}_{\delta\beta}^{(0)} \tag{8}\] The matrix element in polarization space \((\mathbf{\epsilon}\cdot\hat{\mathbf{B}}_{0})_{ij}\) is defined as \(\epsilon_{ijk}\hat{B}_{0k}\). The dimensionless parameter \(\mu\) is deduced from the expression of the polarizability (1) in the presence of the magnetic field: \[\mu=\frac{12\pi}{\alpha(0)k_{0}^{3}}\frac{\omega_{c}}{\omega_{0}} \tag{9}\] All torques calculated in this paper will be linearly proportional to \(\mu\). In this diamagnetic picture, \(\mu\sim 5\times 10^{-7}\) is small (calculated e.g. for the hydrogen atom and the \(1S\to 2P\) transition). The mechanical torque in Eq. (2) can be split into two parts: the torque on the source \(\mathbf{M}_{\rm S}\) and the torque on the environment. The torque on the source is given by [3], \[M_{{\rm S},i}=-\frac{2\pi}{3}k^{2}|\mathbf{d}|^{2}{\rm Re}\,\epsilon_{ijk}G_{kj}^{E}(\mathbf{0},\mathbf{0}) \tag{10}\] By conservation of AM, the torque on the environment equals the difference between the AM leakage and the torque on the source, \(\mathbf{M}_{\rm E}=\mathbf{M}_{\rm L}-\mathbf{M}_{\rm S}\). ## III Computation of the radiated optical angular momentum The right-hand side of Eq. (2) represents the radiation of optical AM as mentioned, and includes a front factor proportional to \(r^{3}\). This factor means that the AM emitted at infinity can be found by keeping only those terms in \(\bar{E}_{l}E_{k}\) and \(\bar{B}_{l}B_{k}\) that are proportional to \(1/r^{3}\), while the leading \(1/r^{2}\) terms vanish. We assume an unpolarized source, realized by averaging over the direction \(\hat{\mathbf{d}}\) of the source dipole moment. 
The tensor for an unpolarized source is expressed as: \[\bar{E}_{j}(\mathbf{r})E_{m}(\mathbf{r})=\frac{(4\pi)^{2}}{3}k_{0}^{4}|\mathbf{d}|^{2}\bigg{[}G^{0}_{js}(\mathbf{r})\bar{G}^{0}_{ms}(\mathbf{r})\] \[+\sum_{\alpha,\beta}G^{0}_{js}(\mathbf{r})\bar{G}^{0}_{mn}(\mathbf{r}-\mathbf{R}_{\alpha})(\bar{T}_{\alpha\beta})_{nn^{\prime}}\bar{G}^{0}_{n^{\prime}s}(\mathbf{R}_{\beta})\] \[+\sum_{\alpha,\beta}G^{0}_{jn}(\mathbf{r}-\mathbf{R}_{\alpha})(T_{\alpha\beta})_{nn^{\prime}}\bar{G}^{0}_{n^{\prime}s}(\mathbf{R}_{\beta})\bar{G}^{0}_{ms}(\mathbf{r})\] \[+\sum_{\alpha,\beta,\gamma,\delta}G^{0}_{jn}(\mathbf{r}-\mathbf{R}_{\alpha})(T_{\alpha\beta})_{nn^{\prime}}\bar{G}^{0}_{n^{\prime}s}(\mathbf{R}_{\beta})\bar{G}^{0}_{mp}(\mathbf{r}-\mathbf{R}_{\gamma})\] \[\times(\bar{T}_{\gamma\delta})_{pp^{\prime}}\bar{G}^{0}_{p^{\prime}s}(\mathbf{R}_{\delta})\bigg{]} \tag{11}\] A similar expression for the average tensor \(\bar{B}_{j}B_{m}\) is obtained by the same procedure. The first term of Eq. (11) corresponds to the unscattered radiation of the source and does not contribute to the radiated AM. The second and third terms include the interference of the unscattered electric field emitted by the source and the electric field scattered by the dipole gas. The last term accounts for the interference among all multiple scattering paths inside the dipole gas. When considering the radiated optical AM at infinity, the angular integral in Eq. (2) simplifies and can be performed analytically. The general form of the expression for the radiated AM, to linear order in the magnetic field, is given by \[M_{\mathrm{L},k}=X_{k}(\mathbf{R}_{1},...,\mathbf{R}_{N})+A_{km}(\mathbf{R}_{1},...,\mathbf{R}_{N})\cdot\hat{B}_{0,m} \tag{12}\] The vector \(\mathbf{X}\) vanishes when only one dipole is considered, but not for an arbitrary environment that is not rotationally invariant. The second term in Eq. (12) represents the torque linear in the magnetic field. 
We restore rotational invariance by averaging over the orientation of the dipole gas distribution. This means that we rotate all position vectors of the scatterers by the same angle and then calculate the average over all possible rotations. The average of \(\langle\mathbf{M}_{\mathrm{L}}\cdot\mathbf{\hat{B}}_{0}\rangle_{o}\) over the dipole gas distribution orientation is physically equivalent to an average over the direction of the magnetic field \(\hat{\mathbf{B}}_{0}\), \[\langle\mathbf{M}_{\mathrm{L}}\cdot\mathbf{\hat{B}}_{0}\rangle_{o}=\langle\mathbf{M}_{\mathrm{L}}\cdot\mathbf{\hat{B}}_{0}\rangle_{\hat{\mathbf{B}}_{0}} \tag{13}\] Averaging over the orientation of the dipole gas restores rotational symmetry. Consequently, only the direction of the external magnetic field remains in the radiated AM, which can be expressed as \[\langle\mathbf{M}_{\mathrm{L}}\rangle_{o}=\kappa_{\mathrm{L}}(\mathbf{R}_{1},...,\mathbf{R}_{N})\hat{\mathbf{B}}_{0} \tag{14}\] The scalar \(\kappa_{\mathrm{L}}\) is easily related to the second-rank tensor \(\mathbf{A}\) using Eqs. (13) and (12): \[\kappa_{\mathrm{L}}=\frac{1}{3}\mathrm{Tr}\,\mathbf{A} \tag{15}\] The same averaging of Eq. (10) leads to: \[\langle\mathbf{M}_{\mathrm{S}}\rangle_{o}=\kappa_{\mathrm{S}}\hat{\mathbf{B}}_{0} \tag{16}\] The calculation of \(\kappa_{\mathrm{L}}\) for a single dipole, whose position is averaged over the same shell, has been done analytically. We used the analytical expression to calculate \(\kappa_{\mathrm{L}}\) in the single scattering approximation (see Section V). Note that \(\kappa_{\mathrm{L}}\) and \(\kappa_{\mathrm{S}}\) can still fluctuate from one realization of dipole positions \(\{\mathbf{R}_{\alpha}\}\) to another. The full distributions will be obtained later. 
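Equation (15) follows from the identity \(\langle\hat{B}_{i}\hat{B}_{j}\rangle=\delta_{ij}/3\) for a uniformly distributed field direction. A quick Monte Carlo check with an arbitrary stand-in tensor (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))              # stand-in for the tensor A of Eq. (12)

# sample uniformly distributed magnetic-field directions B_hat
B_hat = rng.standard_normal((200_000, 3))
B_hat /= np.linalg.norm(B_hat, axis=1, keepdims=True)

# orientation average of B_hat . A . B_hat, to be compared with Tr A / 3
avg = np.einsum('ni,ij,nj->n', B_hat, A, B_hat).mean()
kappa_L = np.trace(A) / 3                    # Eq. (15)
```

The sample average converges to \(\mathrm{Tr}\,\mathbf{A}/3\) as the number of sampled directions grows, which is exactly the statement of Eq. (15).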
## IV Orbital and spin angular momentum In this section we separate the optical radiated AM into optical spin and orbital AM, and associate the total torque with the magneto-transverse component \(\mathbf{S}_{\phi}\) of the Poynting vector in the far field. To establish the latter, we assume that the radiative AM propagates with the speed \(c_{0}\) from the source so that, \[\frac{d\mathbf{J}^{\mathrm{rad}}}{dt}=\frac{d}{dt}\bigg{[}\int_{0}^{c_{0}t}dr\,r^{2}\int d\hat{\mathbf{r}}\,\mathbf{r}\times\frac{\mathbf{S}(\mathbf{r})}{c_{0}^{2}}\bigg{]}=\mathbf{M}_{\mathrm{L}} \tag{17}\] where \(\mathbf{S}(\mathbf{r})\) is the Poynting vector. This equation shows that \(4\pi r^{3}\langle\hat{\mathbf{r}}\times\mathbf{S}_{\phi}\rangle_{\hat{\mathbf{r}}}/c_{0}\) for large \(r\) is equal to \(\mathbf{M}_{\mathrm{L}}\). Hence, after averaging over the orientation of the dipole gas distribution, we get the simple expression, \[\int d\hat{\mathbf{r}}\,\hat{\mathbf{r}}\times\langle\mathbf{S}_{\phi}\rangle_{o}(\mathbf{r})=\frac{\kappa_{\mathrm{L}}c_{0}}{r^{3}}\hat{\mathbf{B}}_{0} \tag{18}\] We can separate the radiated AM into spin and orbital AM using [9], \[\mathbf{M}_{\mathrm{L}}=-\frac{r^{2}}{8\pi k}\mathrm{Im}\int d\hat{\mathbf{r}}\left[\mathbf{E}\times\mathbf{\bar{E}}+E_{m}(\mathbf{r}\times\mathbf{\nabla})\bar{E}_{m}\right]=\mathbf{M}_{\mathrm{L}}^{\mathrm{spin}}+\mathbf{M}_{\mathrm{L}}^{\mathrm{orbit}} \tag{19}\] The angular integral in Eq. (19) must be evaluated on an arbitrary surface outside the environment. We evaluate it at infinity (\(r\rightarrow\infty\)), which allows us to perform the angular integral analytically. When evaluating the surface integral at infinity, only terms in the integrand that are proportional to \(1/r^{2}\) need to be considered. 
In conclusion, the scalar \(\kappa_{\mathrm{L}}\), associated with the total leakage of AM, can be separated either into its spin and orbital components, or into torques exerted on environment and source, \[\kappa_{\mathrm{L}}=\kappa_{\mathrm{Sp}}+\kappa_{\mathrm{O}}=\kappa_{\mathrm{S}}+\kappa_{\mathrm{E}} \tag{20}\] ## V Numerical results In the numerical simulations, we calculate the dimensionless ratios \(\kappa\omega/P_{\mathrm{S}}\mu\). Here \(P_{\mathrm{S}}\) is the power emitted by the dipole source, which is also affected by the surrounding environment (Purcell effect [1]); \(\kappa\) refers to either \(\kappa_{\rm L}\), \(\kappa_{\rm Sp}\), etc. This ratio normalizes the torque to the amount of radiated energy and quantifies the amount of emitted AM in units of \(\hbar\) per emitted photon. The power emitted by the source is given by [3]: \[P_{\rm S}=-\frac{2\pi}{3}k_{0}^{3}c_{0}|\mathbf{d}|^{2}{\rm Im}G_{ii}^{E}(\mathbf{0},\mathbf{0}) \tag{21}\] The different ratios \(\kappa\omega/P_{\rm S}\mu\) are computed as a function of two parameters \(\eta\) and \(\tau_{0}\) defined as, \[\eta=\frac{4\pi n}{k_{0}^{3}}\qquad\tau_{0}=\frac{R}{\ell_{0}}=Rn\sigma_{sc} \tag{22}\] The parameter \(\eta\) is the number of dipoles per cubic wavelength, and \(\tau_{0}\) is (assuming independent scattering) the optical thickness, which is the ratio of the radius \(R\) of the sphere to the mean free path \(\ell_{0}=(n\sigma_{sc})^{-1}\); \(n\) is the scatterer density and \(\sigma_{sc}\) is the scattering cross section for one dipole. For \(\eta\ll 1\), the contribution of recurrent scattering and interference to multiple scattering is expected to be negligible, but as \(\eta\) approaches unity, recurrent scattering can no longer be neglected. The neglect of recurrent scattering and interference is referred to as the Independent Scattering Approximation (ISA) [10], so that \(\ell_{0}\) is the mean free path in the independent scattering approximation. 
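The two control parameters of Eq. (22) are straightforward to evaluate; the sketch below uses illustrative inputs only:

```python
import numpy as np

def control_parameters(n, k0, R, sigma_sc):
    """Dipole density per cubic wavelength and ISA optical thickness, Eq. (22).

    n        : scatterer number density
    k0       : resonance wave number
    R        : radius of the spherical environment
    sigma_sc : scattering cross section of one dipole
    """
    eta = 4 * np.pi * n / k0**3       # dipoles per cubic wavelength
    ell0 = 1.0 / (n * sigma_sc)       # ISA mean free path
    tau0 = R / ell0                   # optical thickness tau0 = R n sigma_sc
    return eta, tau0
```

Note that, as the text explains, \(\tau_{0}\) is only the ISA estimate; near \(\eta\sim 1\) the true mean free path exceeds \(\ell_{0}\).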
For \(\eta\sim 1\), the mean free path is usually larger than \(\ell_{0}\)[5]. We will ignore this change of the mean free path in the optical thickness \(\tau_{0}\) and use \(\ell_{0}\) to calculate the optical thickness. In our numerical simulations, we have excluded a small spherical region with a radius of \(\lambda/4\) around the dipole source to avoid divergences due to the singular behavior of the Green's function at small \(\mathbf{r}\) (see Fig. 1). We accounted for this small exclusion volume to obtain the values of \(\tau_{0}\) and \(\eta\). However, we did not impose any constraints on the distance between the dipoles that would induce spatial correlations. We did not encounter any problems due to close dipoles in the simulations. To ensure the reliability of our results, we verified that the medium was on average homogeneous. We computed the full statistics of the various \(\kappa\)'s over at least \(10^{3}\) independent realizations for each data point, to be discussed later in Fig. 6. We now present the numerical results. Figure 2 shows that for \(\eta=0.03\) the radiated AM is proportional to \(\tau_{0}^{2}\) and is directed opposite to the external magnetic field. This behavior is also expected to hold for smaller values of \(\eta\) that are more difficult to access numerically. For larger \(\eta\), however, the \(\tau_{0}^{2}\) scaling disappears as \(\tau_{0}\) increases, and the radiated AM decreases as \(\eta\) increases at constant optical thickness. This figure illustrates the influence of recurrent scattering and interference on the radiated AM. A larger optical thickness increases the amount of radiated optical AM per photon, but larger values of \(\eta\) reduce it. The small leak of AM for large \(\eta\) may imply that interference phenomena suppress the propagation of AM inside the sphere. 
The result obtained for \(\eta=0.3\) and \(\tau_{0}\sim 1.25\) is quite remarkable, since the leak of AM actually vanishes, although the torque on the source still has a significant value (Fig. 4). This indicates that a direct transfer of AM exists from the source to the environment. For these parameters, the radiated spin and orbital AM are exactly opposite, \(\kappa_{\rm Sp}+\kappa_{\rm O}=0\), with no net leak. In Fig. 3 we show the numerical results for the spin component of the radiated AM. Surprisingly, we can see that the spin component represents only a small part of the total radiated AM, showing that the radiated AM is dominated by orbital AM. Only for \(\eta=0.3\) does the spin part of the total AM become significant.

Figure 2: Numerical results for the total leakage \(\kappa_{\rm L}\omega/P_{\rm S}\mu\) per emitted photon for different values of the parameters \(\eta\) and \(\tau_{0}\). The bars indicate the standard deviation of the full statistics of \(\kappa_{\rm L}\) values, and are not error bars. The dashed line denotes the level zero.

Figure 3: Results of numerical simulations for the spin leakage \(\kappa_{\rm Sp}\omega/P_{\rm S}\mu\) per emitted photon as a function of the optical thickness for various values of the parameter \(\eta\).

In Fig. 4 we have plotted the torque acting on the source, given by formula (10). It is strictly positive in the applied parameter range. Upon comparing Figs. 4 and 2, it can be seen that for small values of \(\eta\), the torque exerted on the source is relatively insignificant compared to the torque exerted on the environment. This implies that it is primarily the environment that acquires AM. However, as \(\eta\) increases, this is no longer the case. For \(\eta=0.3\), the torque on the source becomes comparable to the torque on the environment, though with opposite sign, indicating that they acquire opposite AM. It is worth noting that the curve for \(\eta=0.3\) exhibits oscillations. 
The cause of these oscillations is not yet fully understood, and it is possible that they are the result of an imperfect average over the dipole gas distribution. The aim of Fig. 5 is to compare the roles of multiple and single scattering. We compare the radiated AM calculated with multiple scattering (MS) and the radiated angular momentum calculated from the \(N\) single-dipole formula: \[\kappa_{\rm L}^{\rm SS}=\sum_{\alpha}\kappa_{\rm L}(\mathbf{R}_{\alpha}) \tag{23}\] where \(\kappa_{\rm L}(\mathbf{R}_{\alpha})\) is the radiated angular momentum for a single dipole. The single-dipole formula obviously neglects any kind of multiple scattering. As expected, for \(\tau_{0}>1\) the multiple scattering result deviates significantly from the single-dipole formula. The full statistics of the different \(\kappa\) are shown in Fig. 6 for \(\eta=0.03\), \(\tau_{0}=2.074\) and \(\delta=0.1\). The standard deviations shown earlier in Fig. 2 correspond to the widths of the probability distribution function for \(\kappa_{\rm L}\) for the different values of \(\eta\) and \(\tau_{0}\) and are given by the usual formula: \[\sigma(\tau_{0},\eta)=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\kappa_{i}-\langle\kappa\rangle)^{2}} \tag{24}\] where \(N\) is the number of realizations, \(\kappa_{i}\) is the value of \(\kappa\) for realization \(i\) and \(\langle\kappa\rangle\) is the mean value of \(\kappa\) over the \(N\) realizations. It can be seen from Fig. 2 that as either \(\eta\) or \(\tau_{0}\) increases, the width of the distributions of the radiated AM increases. Thus, both the increase in recurrent scattering and the increase in multiple scattering tend to broaden the distributions. The distributions for the spin and orbital AM are seen to be quite different, again revealing the dominant role of orbital radiated AM. Contrary to \(\kappa_{\rm O}\), the average of \(\kappa_{\rm Sp}\) is small because it takes both positive and negative values. 
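Equations (23) and (24) reduce to a sum over single-dipole contributions and a population standard deviation over realizations; as a sketch:

```python
import numpy as np

def kappa_single_scattering(kappa_1, positions):
    """Eq. (23): sum of single-dipole contributions kappa_L(R_alpha).

    kappa_1   : callable R -> float, single-dipole radiated AM
    positions : iterable of dipole positions R_alpha
    """
    return sum(kappa_1(R) for R in positions)

def kappa_spread(kappas):
    """Eq. (24): population standard deviation over the N realizations."""
    kappas = np.asarray(kappas, dtype=float)
    return np.sqrt(np.mean((kappas - kappas.mean())**2))
```

`kappa_spread` coincides with `np.std` with its default `ddof=0`, i.e. the \(1/N\) (population) normalization used in Eq. (24).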
In Figure 7, we show the effect of taking small opposite detunings on the radiated AM for \(\eta=0.03\) and \(\eta=0.1\). For small values of \(\tau_{0}\), the radiated angular momentum has opposite signs for opposite detunings. This is expected from the single-dipole formula, proportional to \(\text{Im}\,t_{0}^{2}\). As \(\tau_{0}\) approaches unity and beyond, this is no longer the case, and the curves for opposite detunings show similar behavior, with only a slight shift with respect to each other. In Fig. 8 we have made an attempt to find a single-parameter scaling of the AM leak. To this end we multiplied all curves by \(\eta/0.03\). There is a good agreement for the curves \(\eta=0.06\) and \(\eta=0.1\) but the curve \(\eta=0.3\) clearly deviates. This figure shows that for values of \(\eta\) up to \(0.1\), the total radiated angular momentum is proportional to the mean free path \(\ell_{0}\) for \(\tau_{0}\) constant. When \(\eta\sim 1\), the mean free path is no longer given by the ISA formula (22) and the true value of the optical thickness \(\tau\) is smaller than \(\tau_{0}\).

Figure 4: Numerical results showing the torque on the source per emitted photon as a function of the parameters \(\eta\) and \(\tau_{0}\).

Figure 5: Effect of multiple scattering on the radiated optical AM. The multiple scattering curves are labelled 'MS' and the single scattering approximation curves are labelled 'SS'.

## VI Conclusion and Outlooks In this paper we have investigated the transfer of angular momentum radiated by an electric dipole source into a magneto-birefringent environment. Two parameters have been considered for this study: \(\eta\), the density of scatterers per cubic wavelength, and \(\tau_{0}\), the optical thickness in the independent scattering approximation (ISA). We have shown that for \(\eta\ll 1\) the total radiated AM is proportional to \(\tau_{0}^{2}\) and opposite to the direction of the magnetic field. 
However, it is observed that as \(\eta\) approaches unity, the \(\tau_{0}^{2}\) behavior disappears as \(\tau_{0}\) increases. For constant \(\tau_{0}\), the radiated AM is proportional to the mean free path \(\ell_{0}\) as long as the density is small enough (\(\eta<0.1\)). The torque on the medium was separated into the torque on the source and the torque on the environment. Previous work [5] showed that the torque on a homogeneous environment vanishes.

Figure 7: Impact of opposite detunings on the total AM leakage for the curves \(\eta=0.03\) and \(\eta=0.1\). The bars indicate the standard deviation of the distribution of \(\kappa_{\mathrm{L}}\omega/P_{S}\mu\) values.

Figure 8: Single parameter scaling of the normalized angular momentum leak for \(\eta=0.03\), \(\eta=0.06\), \(\eta=0.1\) and \(\eta=0.3\) obtained by multiplying them by \(\eta/0.03\). For \(\eta=0.3\), the scaling starts to break down. This figure shows that for values of \(\eta\) up to \(0.1\), the total radiated angular momentum is proportional to the mean free path \(\ell_{0}\) times \(f(\tau_{0})\sim\tau_{0}^{2}\) for \(\tau_{0}\leq 2\).

For small values of \(\eta\), the torque on the source is negligible and the torque on the environment dominates. As \(\eta\) increases, the torque on the source increases while the total radiated AM decreases. We conclude that there is more and more transfer of AM from the source directly to the environment. For \(\tau_{0}\sim 1.25\) and \(\eta=0.3\) we see a special case with no radiated AM at all: the torque on the source is fully compensated by the torque on the environment. Finally, the impact of small opposite detunings \(\delta\) on the total radiated AM is studied. We have also separated the total radiated angular momentum into spin and orbital components. Surprisingly, the radiated angular momentum turns out to be dominated in most cases by orbital AM. 
This work can be extended to consider a magnetic dipole field rather than a uniform magnetic field, without much qualitative difference. This could be relevant for astrophysical phenomena such as AM transport in stars. ## Acknowledgements This work was funded by the Agence Nationale de la Recherche (Grant No. ANR-20-CE30-0003 LOLITOP). The structure of this text has been improved with the help of DeepL write.
2304.03622
Neutrino spin and flavor oscillations in gravitational fields
We study spin and flavor oscillations of astrophysical neutrinos under the influence of external fields in curved spacetime. First, we consider spin oscillations in case of neutrinos gravitationally scattered off a rotating supermassive black hole surrounded by a thin magnetized accretion disk. We find that the gravitational interaction only does not result in the spin-flip of scattered ultrarelativistic neutrinos. Realistic magnetic fields lead to the significant reduction of the observed flux of neutrinos possessing reasonable magnetic moments. Second, we study neutrino flavor oscillations in stochastic gravitational waves (GWs). We derive the effective Hamiltonian for neutrinos interacting with a plane GW having an arbitrary polarization. Then, we consider stochastic GWs with arbitrary correlators of amplitudes. The equation for the density matrix for neutrino oscillations is solved analytically and the probabilities to detect certain neutrino flavors are derived. We find that the interaction of neutrinos, emitted by a core-collapsing supernova, with the stochastic GW background results in the several percent change of the neutrino fluxes. The observability of the predicted effects is discussed.
Maxim Dvornikov
2023-04-07T12:48:31Z
http://arxiv.org/abs/2304.03622v1
# Neutrino spin and flavor oscillations in gravitational fields ###### Abstract We study spin and flavor oscillations of astrophysical neutrinos under the influence of external fields in curved spacetime. First, we consider spin oscillations in case of neutrinos gravitationally scattered off a rotating supermassive black hole surrounded by a thin magnetized accretion disk. We find that the gravitational interaction only does not result in the spin-flip of scattered ultrarelativistic neutrinos. Realistic magnetic fields lead to the significant reduction of the observed flux of neutrinos possessing reasonable magnetic moments. Second, we study neutrino flavor oscillations in stochastic gravitational waves (GWs). We derive the effective Hamiltonian for neutrinos interacting with a plane GW having an arbitrary polarization. Then, we consider stochastic GWs with arbitrary correlators of amplitudes. The equation for the density matrix for neutrino oscillations is solved analytically and the probabilities to detect certain neutrino flavors are derived. We find that the interaction of neutrinos, emitted by a core-collapsing supernova, with the stochastic GW background results in the several percent change of the neutrino fluxes. The observability of the predicted effects is discussed. The experimental confirmation of neutrino oscillations is an unambiguous indication of the validity of certain extensions of the standard model in which neutrinos are massive and mixed particles. External fields, such as the electroweak interaction with matter and electromagnetic fields, are known to modify the process of neutrino oscillations. Gravity, in spite of its weakness in comparison with other interactions, can also significantly affect neutrino oscillations. In the present work, we are interested in the neutrino propagation and oscillations in various gravitational backgrounds, like a rotating black hole (BH) and gravitational waves (GWs). 
The subject of the present research is inspired by a significant experimental success in the study of these gravitational fields (see, e.g., Refs. [1, 2]). There are two main types of neutrino oscillations. First, we mention neutrino spin oscillations, in which left-polarized active neutrinos are converted to right-polarized sterile particles. Second, there are neutrino flavor oscillations, where transitions between different neutrino generations are possible. Previously, we considered both spin and flavor oscillations in gravitational fields in Refs. [3, 4, 5, 6, 7, 8, 9, 10, 11]. In the present work, we summarize our achievements in the studies of neutrino oscillations in a curved spacetime. First, we consider ultrarelativistic neutrinos which are scattered by a rotating BH surrounded by a thin magnetized accretion disk. A neutrino is supposed to be a Dirac particle possessing a nonzero magnetic moment. Using the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\), the spacetime of a rotating BH has the following metric: \[\mathrm{d}s^{2}=\left(1-\frac{rr_{g}}{\Sigma}\right)\mathrm{d}t^{2}+2\frac{rr_{g}a\sin^{2}\theta}{\Sigma}\mathrm{d}t\mathrm{d}\phi-\frac{\Sigma}{\Delta}\mathrm{d}r^{2}-\Sigma\mathrm{d}\theta^{2}-\frac{\Xi}{\Sigma}\sin^{2}\theta\mathrm{d}\phi^{2}, \tag{1}\] where \(\Delta=r^{2}-rr_{g}+a^{2}\), \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\), and \(\Xi=\left(r^{2}+a^{2}\right)\Sigma+rr_{g}a^{2}\sin^{2}\theta\). Here we set the gravitational constant equal to one. Thus the angular momentum of the BH is \(J=Ma\) and its mass is \(M=r_{g}/2\), where \(r_{g}\) is the Schwarzschild radius. The detailed description of the motion of an ultrarelativistic test particle in the Kerr metric in Eq. (1) is given in Ref. [12]. We just recall that the trajectory and the law of motion can be found in quadratures. 
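As a sanity check on Eq. (1), the metric components can be tabulated numerically; for \(a=0\) they must reduce to the Schwarzschild line element. A minimal sketch with signature \((+,-,-,-)\) and \(G=1\):

```python
import numpy as np

def kerr_metric(r, theta, rg=1.0, a=0.5):
    """Covariant Kerr metric of Eq. (1) in Boyer-Lindquist coordinates.

    The metric is stationary and axisymmetric, so only (r, theta) enter.
    Ordering of indices: (t, r, theta, phi); rg is the Schwarzschild
    radius and a = J/M the spin parameter.
    """
    Delta = r**2 - r * rg + a**2
    Sigma = r**2 + a**2 * np.cos(theta)**2
    Xi = (r**2 + a**2) * Sigma + r * rg * a**2 * np.sin(theta)**2
    g = np.zeros((4, 4))
    g[0, 0] = 1 - r * rg / Sigma
    g[0, 3] = g[3, 0] = r * rg * a * np.sin(theta)**2 / Sigma   # frame dragging
    g[1, 1] = -Sigma / Delta
    g[2, 2] = -Sigma
    g[3, 3] = -Xi / Sigma * np.sin(theta)**2
    return g
```

For \(a=0\) one recovers \(g_{tt}=1-r_{g}/r\), \(g_{rr}=-(1-r_{g}/r)^{-1}\) and a vanishing \(\mathrm{d}t\,\mathrm{d}\phi\) cross term, as required.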
Besides the conserved energy \(E\), any trajectory can be labeled by the angular momentum \(L\) and the Carter constant \(Q\), which is positive if we study the scattering problem. The quasiclassical description of the neutrino spin is based on the formalism developed in Refs. [3, 13], where a particle spin is supposed to be parallel transported along geodesics. It is convenient to follow the neutrino spin dynamics in the locally Minkowskian frame \(x_{a}=e_{a}^{\mu}x_{\mu}\). The explicit form of the vierbein vectors \(e_{a}^{\mu}\), \(a=0,\dots,3\), is given in Ref. [4]. They satisfy the relation \(e_{a}^{\mu}e_{b}^{\nu}g_{\mu\nu}=\eta_{ab}=\mathrm{diag}(1,-1,-1,-1)\). The evolution of the invariant three-vector of the neutrino spin \(\mathbf{\zeta}\) obeys the equation \(\dot{\mathbf{\zeta}}=2(\mathbf{\zeta}\times\mathbf{\Omega})\), where \[\mathbf{\Omega}=\frac{1}{U^{t}}\left\{\frac{1}{2}\left[\mathbf{b}_{g}+\frac{1}{1+u^{0}}\left(\mathbf{e}_{g}\times\mathbf{u}\right)\right]+\mu\left[u^{0}\mathbf{b}-\frac{\mathbf{u}(\mathbf{u}\mathbf{b})}{1+u^{0}}+\left(\mathbf{e}\times\mathbf{u}\right)\right]\right\}. \tag{2}\] Here \(u^{a}=(u^{0},\mathbf{u})\) is the four-velocity, \(U^{t}\) is the component of the four-velocity in world coordinates, \(G_{ab}=(\mathbf{e}_{g},\mathbf{b}_{g})=\gamma_{abc}u^{c}\), \(\gamma_{abc}\) are the Ricci rotation coefficients, \(f_{ab}=(\mathbf{e},\mathbf{b})\) is the electromagnetic field tensor, and \(\mu\) is the diagonal neutrino magnetic moment. All quantities labeled with Latin indices \(a,b,\dots\) are given in the locally Minkowskian frame. We consider neutrino scattering off a supermassive BH (SMBH) with mass \(M=10^{8}M_{\odot}\). The incoming flux of neutrinos is taken to be parallel to the equatorial plane of BH. We neglect the electroweak interaction of neutrinos with the plasma of the accretion disk since the disk is supposed to be slim. The distribution of the magnetic field around a rotating BH is studied in Ref. 
[14]. Additionally, we take that the amplitude of the magnetic field scales with the distance to the BH center as \(B\propto B_{0}r^{-5/4}\)[15]. It guarantees that the magnetic field vanishes at the accretion disk edge. Here, \(B_{0}=3.2\times 10^{2}\,\mathrm{G}\) (for \(M=10^{8}M_{\odot}\)) is the strength of the magnetic field in the vicinity of BH at \(r\sim r_{g}\)[16]. Moreover, we take into account only the poloidal magnetic field since the accretion disk is assumed to be thin. The value of the neutrino magnetic moment is in the range \(\mu=(10^{-14}-10^{-13})\mu_{\mathrm{B}}\), where \(\mu_{\mathrm{B}}\) is the Bohr magneton. It is consistent with the model independent constraint on the Dirac neutrino magnetic moment in Ref. [17] and with the astrophysical upper bound on the neutrino magnetic moment in Ref. [18]. We suppose that incoming neutrinos are left polarized, that is valid for ultrarelativistic particles in frames of the standard model. If the neutrino polarization changes in the process of the interaction with external fields in the gravitational scattering, we observe the reduced flux \(F_{\nu}=P_{\mathrm{LL}}F_{0}\), where \(P_{\mathrm{LL}}\) is the survival probability of neutrino spin oscillations, which is found in solving the spin evolution equation along each neutrino trajectory, and \(F_{0}\) is the flux of scalar particles. We shall study the ratio \(F_{\nu}/F_{0}\) for neutrinos gravitationally scattered off a Kerr SMBH taking into account the interaction of the neutrino magnetic moment with a magnetic field in an accretion disk. First, we study the impact of gravity only on the spin evolution in the scattering of ultrarelativistic neutrinos. The neutrino flux in shown in Fig. 1 for different spins of BH. We can see that \(F_{\nu}=F_{0}\) with very high accuracy. It means that gravity does not lead to a spin-flip of ultrarelativistic neutrinos. We can see that a nonzero magnetic interaction is necessary to produce a neutrino spin-flip. 
We show \(F_{\nu}/F_{0}\) for different \(V_{\rm B}=\mu B_{0}r_{g}\) and spins of BH in Fig. 2. The flux of neutrinos is reduced by up to \((5\pm 1)\%\) for \(\mu=10^{-14}\mu_{\rm B}\) in Figs. 2(a) and 2(b), which is consistent with the results of Ref. [8]. If we take \(\mu=10^{-13}\mu_{\rm B}\), the neutrino flux becomes almost vanishing for certain scattering angles \(\phi_{\rm obs}\) and \(\theta_{\rm obs}\); cf. Figs. 2(c) and 2(d). Now, we turn to the consideration of neutrino flavor oscillations in stochastic GWs. The system of three active flavor neutrinos \(\nu_{\lambda}\), \(\lambda=e,\mu,\tau\), with the nonzero mixing, as well as under the influence of a plane GW with an arbitrary polarization, obeys the following Schrodinger equation: \({\rm i}\dot{\nu}=(H_{0}+H_{1})\nu\), where \(\nu^{\rm T}=(\nu_{e},\nu_{\mu},\nu_{\tau})\), \(H_{0}=UH_{m}^{\rm(vac)}U^{\dagger}\) is the effective Hamiltonian for vacuum oscillations in the flavor eigenstates basis, \(H_{m}^{\rm(vac)}=\frac{1}{2E}{\rm diag}\left(0,\Delta m_{21}^{2},\Delta m_{31 }^{2}\right)\) is the vacuum effective Hamiltonian for the mass eigenstates \(\psi_{a}\), \(a=1,2,3\), and \(U\) is the unitary matrix which relates flavor and mass bases: \(\nu_{\lambda}=U_{\lambda a}\psi_{a}\). The Hamiltonian \(H_{1}\), which describes the neutrino interaction with GW, has the form \(H_{1}=UH_{m}^{(g)}U^{\dagger}\)[11], where \(H_{m}^{(g)}=H_{m}^{\rm(vac)}\left(A_{c}h_{+}+A_{s}h_{\times}\right)\), and \[A_{c}=\frac{1}{2}\sin^{2}\vartheta\cos 2\varphi\cos\Phi,\quad A_{s}=\frac{1}{2 }\sin^{2}\vartheta\sin 2\varphi\sin\Phi. \tag{3}\] Here, \(h_{+,\times}\) are the amplitudes corresponding to 'plus' and 'times' polarizations of GW, \(\Phi=\omega t(1-\cos\vartheta)\), \(\omega\) is the frequency of GW, \(\vartheta\) and \(\varphi\) are the spherical angles fixing the neutrino momentum with respect to the wave vector of GW, which is supposed to propagate along the \(z\)-axis. 
Now we consider the situation when a neutrino interacts with stochastic GWs. In this case, the angles \(\vartheta\) and \(\varphi\), as well as the amplitudes \(h_{+,\times}\), are random functions of time. To study the neutrino motion in such a background, it is more convenient to deal with the density matrix \(\rho\). We can also introduce the density matrix in the interaction picture, Figure 1: The ratio \(F_{\nu}/F_{0}\) for neutrinos gravitationally scattered off BHs with different angular momenta. (a) \(a=2\times 10^{-2}M\); (b) \(a=0.98M\). Figure 2: The same as in Fig. 1 accounting for the neutrino magnetic interaction. Panels (a) and (b): \(\mu=10^{-14}\mu_{\rm B}\); panels (c) and (d): \(\mu=10^{-13}\mu_{\rm B}\). Panels (a) and (c): \(a=2\times 10^{-2}M\); Panels (b) and (d): \(a=0.98M\). \(\exp({\rm i}H_{0}t)\rho\exp(-{\rm i}H_{0}t)\). Assuming that the background of GWs is Gaussian and appropriately averaging the evolution equation for \(\rho_{\rm int}\), we get that \(\frac{{\rm d}}{{\rm d}t}\left\langle\rho_{\rm int}\right\rangle(t)=-g(t)\frac{ 3}{64}[H_{0},[H_{0},\left\langle\rho_{\rm int}\right\rangle(t)]]\), where \[g(t)=\frac{1}{2}\int_{0}^{t}{\rm d}t_{1}\left(\left\langle h_{+}(t)h_{+}(t_{1} )\right\rangle+\left\langle h_{\times}(t)h_{\times}(t_{1})\right\rangle\right). \tag{4}\] Here the correlators of the amplitudes \(\left\langle h_{+,\times}(t)h_{+,\times}(t_{1})\right\rangle\) are supposed to be arbitrary. Using the function \(\Omega(f)=\frac{f}{\rho_{c}}\frac{{\rm d}\rho_{\rm GW}}{{\rm d}f}\)[20], where \(\rho_{\rm GW}\) is the energy density of GW and \(\rho_{c}=0.53\times 10^{-5}\,{\rm Gev}\cdot{\rm cm}^{-3}\) is the closure energy density of the universe, we can rewrite \(g(t)\) in Eq. (4) in the form, \(g(t)=\frac{4G\rho_{c}}{\pi^{2}}\int_{0}^{\infty}\frac{{\rm d}f}{f^{4}}\sin(2 \pi ft)\Omega(f)\). Here \(G=6.9\times 10^{-39}\,{\rm GeV}^{-2}\) is the Newton's constant and \(f\) is the frequency measured in Hz. 
We can find the analytical solution for the density matrix and write down the probability to observe a certain neutrino flavor. If we study the propagation of supernova (SN) neutrinos, which are characterized by the ratio of the initial fluxes \(\left(F_{\nu_{e}}:F_{\nu_{\mu}}:F_{\nu_{\tau}}\right)_{\rm S}=(1:0:0)\), the deviation of the probability from the vacuum oscillations value, \(\Delta P_{\lambda}=P_{\lambda}^{(g)}-P_{\lambda}^{(\rm vac)}\), is \[\Delta P_{\lambda}(x)= 2\left[1-\exp{(-\Gamma)}\right]\left[{\rm Re}\left[U_{\lambda 2}U_{ \lambda 1}^{*}U_{e2}^{*}U_{e1}\right]\cos\left(2\pi\frac{x}{L_{21}}\right)\right.\] \[+\left.{\rm Im}\left[U_{\lambda 2}U_{\lambda 1}^{*}U_{e2}^{*}U_{e1} \right]\sin\left(2\pi\frac{x}{L_{21}}\right)\right]\!,\quad\Gamma=\frac{3\pi^{ 2}}{16L_{21}^{2}}\int_{0}^{x}g(t){\rm d}t, \tag{5}\] where \(L_{21}=\frac{4\pi E}{\Delta m_{21}^{2}}\) is the oscillations length for the solar neutrino oscillations channel and \(x\approx t\) is the distance traveled by the neutrino beam. Now we should fix the source of the GW background. We suppose that stochastic GWs are emitted by merging SMBHs. In this case [21], \(\Omega(f)=\Omega_{0}\sim 10^{-9}\) if \(f_{\rm min}<f<f_{\rm max}\), where \(f_{\rm min}\sim 10^{-10}\,{\rm Hz}\) and \(f_{\rm max}\sim 10^{-1}\,{\rm Hz}\), and \(\Omega(f)=0\) otherwise. In Fig. 3(a), we show the dependence of \(\Gamma\) in Eq. (5) versus the distance passed by neutrinos \(\tau=x/L\), where \(L=10\,{\rm kpc}\) is the typical Galaxy size. We can see that, at \(\tau\to 1\), \(\Gamma\to\Gamma_{\oplus}=8\times 10^{-2}\). In Fig. 3(b), we show the deviation of the flux of \(\nu_{e}\) from the vacuum oscillations value owing to the interaction with stochastic GWs. We can see that the interaction with GWs results in the \(\sim 3\%\) change of the observed flux. In the present work, we have studied neutrino oscillations in gravitational fields. 
First, we have considered spin oscillations of neutrinos in their gravitational scattering off rotating BH. We have started with the formulation of the main equations for the neutrino spin under the influence of external fields in curved spacetime. It is defined in the locally Minkowskian frame. Then, we have applied this result to the neutrino spin evolution in the gravitational field of a rotating SMBH surrounded by a thin magnetized accretion disk. We have supposed that the incoming flux of ultrarelativistic neutrinos is parallel to the equatorial plane of BH. However, it is both above and below this plane. Thus, we have generalized our previous results in Refs. [7, 8], where only the equatorial neutrino motion is studied. The neutrino electroweak interaction with plasma of the disk has not been taken into account since the disk is supposed to be thin. Only the poloidal component of the magnetic field in the disk was considered. The strength of the magnetic field is typical for SMBH. We have obtained that only the gravitational interaction does not result in the spin-flip of ultrarelativistic neutrinos in their scattering off BH. This our finding is in agreement with the results of Ref. [19]. Thus, the variation of the observed neutrino flux, which depends on the survival probability of neutrino oscillations, is possible only if a magnetic field is present in the system besides gravity. We have found that the neutrino flux can be significantly reduced if the neutrino magnetic moment has realistic values consistent with theoretical and astrophysical constraints. Then, we have studied neutrino flavor oscillations under the influence of stochastic GWs. We have obtained the effective Hamiltonian for flavor oscillations for massive neutrinos interacting with a plane GW [9]. We have also accounted for two independent polarizations of GW [11]. 
The obtained Hamiltonian was used to describe neutrino flavor oscillations in stochastic GWs with arbitrary correlators of amplitudes [10]. These findings were applied to analyze the impact of stochastic GWs on the flavor content of neutrinos emitted by a core-collapsing SN. In this situation, the source of neutrinos is almost point like. Moreover, we know the flavor content of emitted SN neutrinos. We have obtained that the major contribution to the observed fluxes is for the GW background created by merging SMBHs. The deviation of the fluxes caused by the neutrino interaction with stochastic GWs can be up to several percent. The maximal effect is for the electron neutrinos flux. Both spin and flavor oscillations phenomena, predicted in our work, can be observed for astrophysical neutrinos using existing and especially future neutrino telescopes (see, e.g., Ref. [22]). These facilities have sufficient sensitivity to detect the deviation of neutrino fluxes owing to the neutrino interaction with external fields in curved spacetime.
2305.14077
Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that rate-optimal benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.
Moritz Haas, David Holzmüller, Ulrike von Luxburg, Ingo Steinwart
2023-05-23T13:56:29Z
http://arxiv.org/abs/2305.14077v2
# Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension ###### Abstract The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets. ## 1 Introduction While neural networks have shown great practical success, our theoretical understanding of their generalization properties is still limited. A promising line of work considers the phenomenon of benign overfitting, where researchers try to understand when and how models that interpolate noisy training data can generalize (Zhang et al., 2021; Belkin et al., 2018, 2019). 
In the high-dimensional regime, where the dimension grows with the number of sample points, consistency of minimum-norm interpolants has been established for linear models and kernel regression (Hastie et al., 2022; Bartlett et al., 2020; Liang and Rakhlin, 2020; Bartlett et al., 2021). In fixed dimension, minimum-norm interpolation with standard kernels is inconsistent (Rakhlin and Zhai, 2019; Buchholz, 2022). In this paper, we shed a differentiated light on benign overfitting with kernels and neural networks. We argue that the dimension-dependent perspective does not capture the full picture of benign overfitting. In particular, we show that harmless interpolation with kernel methods and neural networks is possible, even in small fixed dimension, with adequately designed kernels and activation functions. The key is to properly design estimators of the form'signal+spike'. While minimum-norm criteria have widely been considered a useful inductive bias, we demonstrate that designing unusual norms can resolve the shortcomings of standard norms. For wide neural networks, harmless interpolation can be realized by adding tiny fluctuations to the activation function. Such networks do not require regularization and can simply be trained to overfit (Figure 1). On a technical level, we additionally prove that overfitting in kernel regression can only be consistent if the estimators have large derivatives. Using neural tangent kernels or neural network Gaussian process kernels, we can translate our results from kernel regression to the world of neural networks (Neal, 1996; Jacot et al., 2018). In particular, our results enable the design of activation functions that induce benign overfitting in fixed dimension: the spikes in kernels can be translated into infinitesimal fluctuations that can be added to an activation function to achieve harmless interpolation with neural networks. 
Such small high frequency oscillations can fit noisy observations without affecting the smooth component too much. Training finite neural networks with gradient descent shows that spiky-smooth activation functions can indeed achieve good generalization even when interpolating small, low-dimensional data sets (Figure 1 b,c). Thanks to new technical contributions, our inconsistency results significantly extend existing ones. We use a novel noise concentration argument (Lemma D.6) to generalize existing inconsistency results on minimum-norm interpolants to the much more realistic regime of overfitting estimators with comparable Sobolev norm scaling, which includes training via gradient flow and gradient descent with "late stopping" as well as low levels of ridge regression. Moreover, a novel connection to eigenvalue concentration results for kernel matrices (Proposition 4) allows us to relax the smoothness assumption and to treat heteroscedastic noise in Theorem 5. Lastly, our Lemma E.1 translates inconsistency results from bounded open subsets of \(\mathbb{R}^{d}\) to the sphere \(\mathbb{S}^{d}\), which leads to results for the neural tangent kernel and neural network Gaussian processes. ## 2 Setup and prerequisites **General approach.** We consider a general regression problem on \(\mathbb{R}^{d}\) with an arbitrary, fixed dimension \(d\) and analyze kernel-based approaches to solve this problem: kernel ridge regression, kernel gradient flow and gradient descent, minimum-norm interpolation, and more generally, overfitting norm-bounded estimators. We then translate our results to neural networks via the neural network Gaussian process and the neural tangent kernel. Let us now introduce the formal framework. **Notation.** We denote scalars by lowercase letters \(x\), vectors by bold lowercase letters \(x\) and matrices by bold uppercase letters \(X\). 
We denote the eigenvalues of \(A\) as \(\lambda_{1}(\mathbf{A})\geq\ldots\geq\lambda_{n}(\mathbf{A})\) and the Moore-Penrose pseudo-inverse by \(\mathbf{A}^{+}\). We say that a probability distribution \(P\) has lower and upper bounded density if its density \(p\) satisfies \(0<c<p(\mathbf{x})<C\) for suitable constants \(c,C\) and all \(x\) on a given domain. **Regression setup.** We consider a data set \(D=((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n}))\in( \mathbb{R}^{d}\times\mathbb{R})^{n}\) with i.i.d. samples \((\mathbf{x}_{i},y_{i})\sim P\), written as \(D\sim P^{n}\), where \(P\) is a probability distribution on \(\mathbb{R}^{d}\times\mathbb{R}\). We define \(\mathbf{X}\coloneqq(\mathbf{x}_{1},\ldots,\mathbf{ x}_{n})\) and \(\mathbf{y}\coloneqq(y_{1},\ldots,y_{n})^{\top}\in\mathbb{R}^{n}\). Random variables \((\mathbf{x},y)\sim P\) denote test points Figure 1: **Spiky-smooth overfitting in 2 dimensions.****a.** We plot the predicted function for ridgeless kernel regression with the Laplace kernel (blue) versus our spiky-smooth kernel (4) with Laplace components (orange) on \(\mathbb{S}^{1}\). The dashed black line shows the true regression function, black ’x’ denote noisy training points. Further details can be found in Section 6.2. **b.** The predicted function of a trained 2-layer neural network with ReLU activation (blue) versus ReLU plus shifted high-frequency sin-function (8) (orange). Using the weights learned with the spiky-smooth activation function in a ReLU network (green) disentangles the spike component from the signal component. **c.** Training error (solid lines) and test error (dashed lines) over the course of training for b. evaluated on \(10^{4}\) test points. The dotted black line shows the optimal test error. The spiky-smooth activation function does not require regularization and can simply be trained to overfit. independent of \(D\), and \(P_{X}\) denotes the probability distribution of \(\mathbf{x}\). 
The (least squares) _empirical risk_\(R_{D}\) and _population risk_\(R_{P}\) of a function \(f:\mathbb{R}^{d}\to\mathbb{R}\) are defined as \[R_{D}(f)\coloneqq\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2},\qquad R_{P}(f) \coloneqq\mathbb{E}_{\mathbf{x},y}[(y-f(\mathbf{x}))^{2}]\.\] We assume \(\operatorname{Var}(y|\mathbf{x})<\infty\) for all \(\mathbf{x}\). Then, \(R_{P}\) is minimized by the target function \(f_{P}^{*}(\mathbf{x})=\mathbb{E}[y|\mathbf{x}]\), and the _excess risk_ of a function \(f\) is given by \[R_{P}(f)-R_{P}(f_{P}^{*})=\mathbb{E}_{\mathbf{x}}(f_{P}^{*}(\mathbf{x})-f(\mathbf{x}))^{2}\.\] We call a data-dependent estimator \(f_{D}\)_consistent for \(P\)_ if its excess risk converges to \(0\) in probability, that is, for all \(\varepsilon>0\), \(\lim_{n\to\infty}P^{n}\left(D\in(\mathbb{R}^{d}\times\mathbb{R})^{n}\ |\ \ R_{P}(f_{D})-R_{P}(f_{P}^{*})\geq \varepsilon\right)=0.\) We call \(f_{D}\)_consistent in expectation for \(P\)_ if \(\lim_{n\to\infty}\mathbb{E}_{D}R_{P}(f_{D})-R_{P}(f_{P}^{*})=0\). We call \(f_{D}\)_universally consistent_ if is it consistent for all Borel probability measures \(P\) on \(\mathbb{R}^{d}\times\mathbb{R}\). **Solutions by kernel regression.** Recall that a kernel \(k\) induces a reproducing kernel Hilbert space \(\mathcal{H}_{k}\), abbreviated RKHS (more details in Appendix B). For \(f\in\mathcal{H}_{k}\), we consider the objective \[\mathcal{L}_{\rho}(f)\coloneqq\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(\mathbf{x}_{i}))^ {2}+\rho\|f\|_{\mathcal{H}_{k}}^{2}\] with regularization parameter \(\rho\geq 0\). Denote by \(f_{t,\rho}\) the solution to this problem that is obtained by optimizing on \(\mathcal{L}_{\rho}\) in \(\mathcal{H}_{k}\) with gradient flow until time \(t\in[0,\infty]\), using fixed a regularization constant \(\rho>0\), and initializing at \(f=0\in\mathcal{H}_{k}\). 
We show in Appendix C.1 that it is given as \[f_{t,\rho}(\mathbf{x})\coloneqq k(\mathbf{x},\mathbf{X})\left(\mathbf{I}_{n}-e^{-\frac{2}{n}t (k(\mathbf{X},\mathbf{X})+\rho n\mathbf{I}_{n})}\right)\left(k(\mathbf{X},\mathbf{X})+\rho n\mathbf{I} _{n}\right)^{-1}\mathbf{y}\, \tag{1}\] where \(k(\mathbf{x},\mathbf{X})\) denotes the row vector \((k(\mathbf{x},\mathbf{x}_{i}))_{i\in[n]}\) and \(k(\mathbf{X},\mathbf{X})=(k(\mathbf{x}_{i},\mathbf{x}_{j}))_{i,j\in[n]}\) the kernel matrix. \(f_{t,\rho}\) elegantly subsumes several popular kernel regression estimators as special cases: (i) classical kernel ridge regression for \(t\to\infty\), (ii) gradient flow on the unregularized objective for \(\rho\searrow 0\), and (iii) kernel "ridgeless" regression \(f_{\infty,0}(\mathbf{x})=k(\mathbf{x},\mathbf{X})k(\mathbf{X},\mathbf{X})^{+}\mathbf{y}\) in the joint limit of \(\rho\to 0\) and \(t\to\infty\). If \(k(\mathbf{X},\mathbf{X})\) is invertible, \(f_{\infty,0}\) is the interpolating function \(f\in\mathcal{H}_{k}\) with the smallest \(\mathcal{H}_{k}\)-norm. **From kernels to neural networks: the neural tangent kernel (NTK) and the neural network Gaussian process (NNGP).** Denote the output of a NN with parameters \(\mathbf{\theta}\) on input \(\mathbf{x}\) by \(f_{\mathbf{\theta}}(\mathbf{x})\). It is known that for suitable random initializations \(\mathbf{\theta}_{0}\), in the infinite-width limit the random initial function \(f_{\mathbf{\theta}_{0}}\) converges in distribution to a Gaussian Process with the so-called Neural Network Gaussian Process (NNGP) kernel (Neal, 1996; Lee et al., 2018; Matthews et al., 2018). In Bayesian inference, the posterior mean function is then of the form \(f_{\infty,\rho}\). With minor modifications (Arora et al., 2019; Zhang et al., 2020), training infinitely wide NNs with gradient flow corresponds to learning the function \(f_{t,0}\) with the neural tangent kernel (NTK) (Jacot et al., 2018; Lee et al., 2019). 
If only the last layer is trained, the NNGP kernel should be used instead (Daniely et al., 2016). For ReLU activation functions, the RKHS of the infinite-width NNGP and NTK on the sphere \(\mathbb{S}^{d}\) is typically a Sobolev space (Bietti and Bach, 2021; Chen and Xu, 2021), see Appendix B.4. ## 3 Related work We here provide a short summary of related work. A more detailed account is provided in Appendix A. **Kernel regression.** With appropriate regularization, kernel ridge regularization with typical universal kernels like the Gauss, Matern, and Laplace kernels is universally consistent (Steinwart and Christmann, 2008, Chapter 9). Optimal rates in Sobolev RKHS can also be achieved using cross-validation of the regularization \(\rho\)(Steinwart et al., 2009) or early stopping rules (Yao et al., 2007; Raskutti et al., 2014; Wei et al., 2017). In the high-dimensional regime, the class of functions that is learnable with rotation-invariant kernels is quite limited (Donhauser et al., 2021; Ghorbani et al., 2021; Liang et al., 2020). **Inconsistency results.** Besides Rakhlin and Zhai (2019) and Buchholz (2022), Beaglehole et al. (2022) derive inconsistency results for ridgeless kernel regression given assumptions on the spectral tail in the Fourier basis, and Li et al. (2023) show that polynomial convergence is impossible for common kernels including ReLU NTKs. Mallinar et al. (2022) conjecture inconsistency for interpolation with ReLU NTKs based on their semi-rigorous result, which essentially assumes that the eigenfunctions can be replaced by structureless Gaussian random variables. Lai et al. (2023) show an inconsistency-type result for overfitting two-layer ReLU NNs with \(d=1\), but for fixed inputs \(\mathbf{X}\). They also note that an earlier inconsistency result by Hu et al. (2021) relies on an unproven result. 
Mucke and Steinwart (2019) show that global minima of NNs can overfit both benignly and harmfully, but their result does not apply to gradient descent training. Overfitting with typical linear models around the interpolation peak is inconsistent (Ghosh and Belkin, 2022; Holzmuller, 2021). **Classification.** For binary classification, benign overfitting is a more generic phenomenon than for regression (Muthukumar et al., 2021; Shamir, 2022), and consistency has been shown under linear separability assumptions (Montanari et al., 2019; Chatterji and Long, 2021; Frei et al., 2022), through complexity bounds for reference classes (Cao and Gu, 2019; Chen et al., 2019) or as long as the total variation distance of the class conditionals is sufficiently large and \(f^{*}(\mathbf{x})=\mathbb{E}[y|\mathbf{x}]\) lies in the RKHS with bounded norm (Liang and Recht, 2023). Chapter 8 of Steinwart and Christmann (2008) discusses how the overlap of the two classes may influence learning rates under positive regularization. ## 4 Inconsistency of overfitting with common kernel estimators We consider a regression problem on \(\mathbb{R}^{d}\) in arbitrary, fixed dimension \(d\) that is solved by kernel regression. In this section, we derive several new results, stating that overfitting estimators with moderate Sobolev norm are inconsistent, in a variety of settings. In the next section, we establish the other direction: overfitting estimators can be consistent when we adapt the norm that is minimized. ### Beyond minimum-norm interpolants: general overfitting estimators with bounded norm Existing generalization bounds often consider the perfect minimum norm interpolant. This is a rather theoretical construction; estimators obtained by training with gradient descent algorithms merely overfit and, in the best case, approximate interpolants with small norm. 
In this section, we extend existing bounds to arbitrary overfitting estimators whose norm does not grow faster than the minimum norm that would be required to interpolate the training data. Before we can state the theorem, we need to establish some technical assumptions. Assumptions on the data generating process.The following assumptions (as in Buchholz (2022)) allow for quite general domains and distributions. They are standard in nonparametric statistics. 1. Let \(P_{X}\) be a distribution on a bounded open Lipschitz domain \(\Omega\subseteq\mathbb{R}^{d}\) with lower and upper bounded Lebesgue density. Consider data sets \(D=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\}\), where \(\mathbf{x}_{i}\sim P_{X}\) i.i.d. and \(y_{i}=f^{*}(\mathbf{x}_{i})+\varepsilon_{i}\), where \(\varepsilon_{i}\) is i.i.d. Gaussian noise with positive variance \(\sigma^{2}>0\) and \(f^{*}\in C_{c}^{\infty}(\Omega)\backslash\{0\}\) denotes a smooth function with compact support. Assumptions on the kernel.Our assumption on the kernel is that its RKHS is equivalent to a Sobolev space. For integers \(s\in\mathbb{N}\), the norm of a Sobolev space \(H^{s}(\Omega)\) can be defined as \[\|f\|_{H^{s}(\Omega)}^{2}:=\sum_{0\leq|u|\leq s}\|D^{\alpha}f\|_{L_{2}(\Omega)} ^{2},\] where \(D^{\alpha}\) denotes partial derivatives in multi-index notation for \(\alpha\). It measures the magnitude of derivatives up to some order \(s\). For general \(s>0\), \(H^{s}(\Omega)\) is (equivalent to) an RKHS if and only if \(s>d/2\). For example, Laplace and Matern kernels (Kanagawa et al., 2018, Example 2.6) have Sobolev RKHSs. The RKHS of the Gaussian kernel \(\mathcal{H}^{\mathrm{Gauss}}\) is contained in every Sobolev space, \(\mathcal{H}^{\mathrm{Gauss}}\subsetneq H^{s}\) for all \(s\geq 0\)(Steinwart and Christmann, 2008, Corollary 4.36). Due to its smoothness, the Gaussian kernel is potentially even more prone to harmful overfitting than Sobolev kernels (Mallinar et al., 2022). 
We make the following assumption on the kernel: 1. Let \(k\) be a positive definite kernel function whose RKHS \(\mathcal{H}_{k}\) is equivalent to the Sobolev space \(H^{s}\) for \(s\in(\frac{d}{2},\frac{3d}{4}]\). Now we are ready to state the main result of this section: **Theorem 1** (**Overfitting estimators with small norms are inconsistent**).: _Let assumptions (D1) and (K) hold. Let \(c_{\mathrm{fit}}\in(0,1]\) and \(C_{\mathrm{norm}}>0\). Then, there exist \(c>0\) and \(n_{0}\in\mathbb{N}\) such that the following holds for all \(n\geq n_{0}\) with probability \(1-O(1/n)\) over the draw of the data set \(D\) with \(n\) samples: Every function \(f\in\mathcal{H}_{k}\) that satisfies the following two conditions_ 1. \(\frac{1}{n}\sum_{i=1}^{n}(f(x_{i})-y_{i})^{2}\leq(1-c_{\mathrm{fit}})\cdot \sigma^{2}\) _(training error of_ \(f\) _is below Bayes risk)_ 2. \(\|f\|_{\mathcal{H}_{k}}\leq C_{\mathrm{norm}}\|f_{\infty,0}\|_{\mathcal{H}_{k}}\) _(norm comparable to minimum-norm interpolant (_1_)),_ _has an excess risk that satisfies_ \[R_{P}(f)-R_{P}(f^{*})\geq c>0. \tag{2}\] In words: In fixed dimension \(d\), every differentiable function \(f\) that overfits the training data and is not much "spikier" than the minimum RKHS-norm interpolant is inconsistent! **Proof idea.** Our proof follows a similar approach as Rakhlin and Zhai (2019); Buchholz (2022), and also holds for kernels with adaptive bandwidths. For small bandwidths, \(\|f_{\infty,0}\|_{L_{2}(P_{X})}\) is too small, because \(f_{\infty,0}\) decays to \(0\) between the training points, which shows that purely "spiky" estimators are inconsistent. For all other bandwidths, interpolating \(\Theta(n)\) many noisy labels \(y_{i}\) incurs \(\Theta(1)\) error in an area of volume \(\Omega(1/n)\) around \(\Theta(n)\) data points with high probability, which accumulates to a total error \(\Omega(1)\). Our observation is that the same logic holds when overfitting by a constant fraction. 
Formally, we show that \(f^{*}\) and \(f\) must then be separated by a constant on a constant fraction of training points, with high probability, by using the fact that a constant fraction of the total noise cannot concentrate on less than \(\Theta(n)\) noise variables, with high probability (Lemma D.6). The full proof can be found in Appendix D. Assumption (O) is necessary in Theorem 1, because optimally regularized kernel ridge regression fulfills all other assumptions of Theorem 1 while achieving consistency with minimax optimal convergence rates (see Section 3). The necessity of Assumption (N) is demonstrated by Section 5. The following proposition establishes that Theorem 1 covers the entire overfitting regime of the popular (regularized) gradient flow estimators \(f_{t,\rho}\) for all times \(t\in[0,\infty]\) and any regularization \(\rho\geq 0\). The proof in Appendix C.2 also covers gradient descent. **Proposition 2** (**Popular estimators fulfill the norm bound (N)**).: _Let \(t\in[0,\infty]\) and let \(\rho\geq 0\) arbitrary. Then \(f_{t,\rho}\) as defined in (1) fulfills Assumption (N) with \(C_{\mathrm{norm}}=1\)._ ### Inconsistency of overfitting with neural kernels We would now like to apply the above results to neural kernels, which would allow us to translate our inconsistency results from the kernel domain to neural networks. However, to achieve this, we need to take one more technical hurdle: the equivalence results for NTKs and NNGPs only hold for probability distributions on the sphere \(\mathbb{S}^{d}\) (detailed summary in Appendix B.4). Lemma E.1 provides the missing technical link: It establishes a smooth correspondence between the respective kernels, Sobolev spaces, and probability distributions. The inconsistency of overfitting with (deep) ReLU NTKs and NNGP kernels then immediately follows from adapting Theorem 1 via Lemma E.1. 
**Theorem 3** (**Overfitting with neural network kernels in fixed dimension is inconsistent**).: _Let \(c\in(0,1)\), and let \(P\) be a probability distribution with lower and upper bounded Lebesgue density on an arbitrary spherical cap \(T\coloneqq\{\mathbf{x}\in\mathbb{S}^{d}\mid x_{d+1}<v\}\subseteq\mathbb{S}^{d}\), \(v\in(-1,1)\). Let \(k\) either be_

* _(i) the fully-connected ReLU NTK with \(0\)-initialized biases of any fixed depth \(L\geq 2\), and \(d\geq 2\), or_
* _(ii) the fully-connected ReLU NNGP kernel without biases of any fixed depth \(L\geq 3\), and \(d\geq 6\)._

_Then, if \(f_{t,\rho}\) fulfills Assumption \((O)\) with probability at least \(c\) over the draw of the data set \(D\), \(f_{t,\rho}\) is inconsistent for \(P\)._

Theorem 3 also holds for more general estimators as in Theorem 1, cf. the proof in Appendix E. Mallinar et al. (2022) already observed empirically that overfitting common network architectures yields suboptimal generalization performance on large data sets in fixed dimension. Theorem 3 now provides a rigorous proof for this phenomenon, since sufficiently wide trained neural networks and the corresponding NTKs have a similar generalization behavior (e.g. (Arora et al., 2019, Theorem 3.2)).

### Relaxing smoothness and noise assumptions via spectral concentration bounds

In this section, we consider a different approach to derive lower bounds for the generalization error of overfitting kernel regression: through concentration results for the eigenvalues of kernel matrices. On a high level, we obtain similar results as in the last section. The novelty of this section is on the technical side, and we suggest that non-technical readers skip this section in their first reading.
We define the convolution kernel of a given kernel \(k\) as \(k_{*}(\mathbf{x},\mathbf{x}^{\prime})\coloneqq\int k(\mathbf{x},\mathbf{x}^{\prime\prime})k( \mathbf{x}^{\prime\prime},\mathbf{x}^{\prime})\,\mathrm{d}P_{X}(\mathbf{x}^{\prime\prime})\), which is possible whenever \(k(\mathbf{x},\cdot)\in L_{2}(P_{X})\) for all \(\mathbf{x}\). The latter condition is satisfied for bounded kernels. Our starting point is the following new lower bound: **Proposition 4** (**Spectral lower bound**).: _Assume that the kernel matrix \(k(\mathbf{X},\mathbf{X})\) is almost surely positive definite, and that \(\mathrm{Var}(y|\mathbf{x})\geq\sigma^{2}\) for \(P_{X}\)-almost all \(\mathbf{x}\). Then, the expected excess risk satisfies_ \[\mathbb{E}_{D}R_{P}(f_{t,\rho})-R_{P}^{*}\geq\frac{\sigma^{2}}{n}\sum_{i=1}^{ n}\mathbb{E}_{\mathbf{X}}\frac{\lambda_{i}(k_{*}(\mathbf{X},\mathbf{X})/n)\left(1-e^{-2t( \lambda_{i}(k(\mathbf{X},\mathbf{X})/n)+\rho)}\right)^{2}}{(\lambda_{i}(k(\mathbf{X},\mathbf{ X})/n)+\rho)^{2}}. \tag{3}\] Using concentration inequalities for kernel matrices and the relation between the integral operators of \(k\) and \(k_{*}\), it can be seen that for \(t=\infty\) and \(\rho=0\), every term in the sum in Eq. (3) should converge to \(1\) as \(n\to\infty\). However, since the number of terms in the sum increases with \(n\) and the convergence may not be uniform, this is not sufficient to show inconsistency in expectation. Instead, relative concentration bounds that are even stronger than the ones by Valdivia (2018) would be required to show inconsistency in expectation. 
However, by combining multiple weaker bounds and further arguments on kernel equivalences, we can still show inconsistency in expectation for a class of dot-product kernels on the sphere, including certain NTK and NNGP kernels (Appendix B.4): **Theorem 5** (**Inconsistency for Sobolev dot-product kernels on the sphere**).: _Let \(k\) be a dot-product kernel on \(\mathbb{S}^{d}\), i.e., a kernel of the form \(k(\mathbf{x},\mathbf{x}^{\prime})=\kappa(\langle\mathbf{x},\mathbf{x}^{\prime}\rangle)\), such that its RKHS \(\mathcal{H}_{k}\) is equivalent to a Sobolev space \(H^{s}(\mathbb{S}^{d})\), \(s>d/2\). Moreover, let \(P\) be a distribution on \(\mathbb{S}^{d}\times\mathbb{R}\) such that \(P_{X}\) has a lower and upper bounded density w.r.t. the uniform distribution \(\mathcal{U}(\mathbb{S}^{d})\), and such that \(\mathrm{Var}(y|\mathbf{x})\geq\sigma^{2}>0\) for \(P_{X}\)-almost all \(\mathbf{x}\in\mathbb{S}^{d}\). Then, for every \(C>0\), there exists \(c>0\) independent of \(\sigma^{2}\) such that for all \(n\geq 1\), \(t\in(C^{-1}n^{2s/d},\infty]\), and \(\rho\in[0,Cn^{-2s/d})\), the expected excess risk satisfies_ \[\mathbb{E}_{D}R_{P}(f_{t,\rho})-R_{P}^{*}\geq c\sigma^{2}>0\.\] The assumptions of Theorem 5 and Theorem 3 differ in several ways. Theorem 5 applies to arbitrarily high smoothness \(s\) and therefore to ReLU NTKs and NNGPs in arbitrary dimension \(d\). Moreover, it applies to distributions on the whole sphere and allows more general noise distributions. On the flip side, it only shows inconsistency in expectation, which we believe could be extended to inconsistency for Gaussian noise. Moreover, it only applies to functions of the form \(f_{t,\rho}\) but provides an explicit bound on \(t\) and \(\rho\) to get inconsistency. For \(t=\infty\), the bound \(\rho=O(n^{-2s/d})\) appears to be tight, as larger \(\rho\) yield consistency for comparable Sobolev kernels on \(\mathbb{R}^{d}\)(Steinwart et al., 2009, Corollary 3). 
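To see what the bound (3) yields in a concrete case, the sketch below (our own illustration, not code from the paper) uses the Laplace kernel \(k(x,x')=e^{-|x-x'|}\) with \(P_{X}=\mathcal{U}([0,1])\), for which the convolution kernel \(k_{*}\) has a closed form, and evaluates the ridgeless limit \(t=\infty\), \(\rho=0\), where the exponential factor in (3) equals \(1\). Since every summand in (3) is nonnegative, summing only the numerically well-resolved top half of the spectrum still gives a valid lower bound.

```python
import numpy as np

def k_laplace(x, z):
    return np.exp(-np.abs(x[:, None] - z[None, :]))

def k_conv(x, z):
    # closed form of k_*(x, z) = int_0^1 exp(-|x-u|) * exp(-|z-u|) du
    a, b = np.minimum(x[:, None], z[None, :]), np.maximum(x[:, None], z[None, :])
    left = np.exp(-(a + b)) * (np.exp(2 * a) - 1) / 2          # integral over [0, min]
    mid = np.exp(-(b - a)) * (b - a)                           # integral over [min, max]
    right = np.exp(a + b) * (np.exp(-2 * b) - np.exp(-2)) / 2  # integral over [max, 1]
    return left + mid + right

rng = np.random.default_rng(0)
n, sigma2 = 200, 1.0
x = rng.uniform(size=n)

lam = np.sort(np.linalg.eigvalsh(k_laplace(x, x) / n))[::-1]       # spectrum of k(X,X)/n
lam_star = np.sort(np.linalg.eigvalsh(k_conv(x, x) / n))[::-1]     # spectrum of k_*(X,X)/n
lam_star = np.clip(lam_star, 0.0, None)  # guard against tiny negative round-off

# ridgeless limit t = inf, rho = 0 of (3); partial sum over the top half of the
# spectrum, which is still a valid lower bound because all terms are nonnegative
bound = sigma2 / n * np.sum((lam_star / lam**2)[: n // 2])
print(bound)
```

The top terms \(\lambda_{i}(k_{*}(\mathbf{X},\mathbf{X})/n)/\lambda_{i}(k(\mathbf{X},\mathbf{X})/n)^{2}\) are close to \(1\), consistent with the convergence discussed after Proposition 4.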
The spectral lower bounds in Theorem F.2 show that our approach can directly benefit from developing better kernel matrix concentration inequalities. Conversely, the investigation of consistent kernel interpolation might provide information about where such concentration inequalities do not hold.

## 5 Consistency via spiky-smooth estimators - even in fixed dimension

In Section 4, we have seen that when common kernel estimators overfit, they are inconsistent for many kernels and a wide variety of distributions. We now design consistent interpolating kernel estimators. The key is to violate Assumption (N) and allow for quickly exploding derivatives.

### Almost universal consistency of spiky-smooth ridgeless kernel regression

In high-dimensional regimes (where the dimension \(d\) is supposed to grow with the number of data points), benign overfitting of linear and kernel regression has been understood by an additive decomposition of the minimum-norm interpolant into a smooth regularized component that is responsible for good generalization, and a spiky component that interpolates the noisy data points while not harming generalization (Bartlett et al., 2021). This inspires us to enforce such a decomposition in arbitrary fixed dimension by adding a sharp kernel spike \(\rho\tilde{k}_{\gamma_{n}}\) to a common kernel \(\tilde{k}\). In this way, we can still generate any Sobolev RKHS (see Appendix G.2).

**Definition 6** (Spiky-smooth kernel).: Let \(\tilde{k}\) denote any universal kernel function on \(\mathbb{R}^{d}\). We call it the smooth component. Consider a second, translation-invariant kernel \(\tilde{k}_{\gamma}\) of the form \(\tilde{k}_{\gamma}(\mathbf{x},\mathbf{y})=q(\frac{\mathbf{x}-\mathbf{y}}{\gamma})\), for some function \(q:\mathbb{R}^{d}\to\mathbb{R}\). We call it the spiky component.
Then we define the \(\rho\)_-regularized spiky-smooth kernel with spike bandwidth \(\gamma\)_ as

\[k_{\rho,\gamma}(\mathbf{x},\mathbf{y})=\tilde{k}(\mathbf{x},\mathbf{y})+\rho\cdot\tilde{k}_{\gamma}(\mathbf{x},\mathbf{y}),\qquad\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}. \tag{4}\]

We now show that the minimum-norm interpolant of the spiky-smooth kernel sequence with properly chosen \(\rho_{n},\gamma_{n}\to 0\) is consistent for a large class of distributions, on a space with fixed (possibly small) dimension \(d\). We establish our result under the following assumption (as in Mücke and Steinwart (2019)), which is weaker than our previous Assumption (D1).

* **(D2)** There exists a constant \(\beta_{X}>0\) and a continuous function \(\phi:[0,\infty)\to[0,1]\) with \(\phi(0)=0\) such that the data generating probability distribution satisfies \(P_{X}(B_{t}(\mathbf{x}))\leq\phi(t)=O(t^{\beta_{X}})\) for all \(\mathbf{x}\in\Omega\) and all \(t\geq 0\) (here \(B_{t}(\mathbf{x})\) denotes the Euclidean ball of radius \(t\) around \(\mathbf{x}\)).

**Theorem 7** (Consistency of spiky-smooth ridgeless kernel regression).: _Assume that the training set \(D\) consists of \(n\) i.i.d. pairs \((\mathbf{x},y)\sim P\) such that the marginal \(P_{X}\) fulfills (D2) and \(\mathbb{E}y^{2}<\infty\). Let the kernel components satisfy:_

* \(\tilde{k}\) _is a universal kernel, and_ \(\rho_{n}\to 0\) _and_ \(n\rho_{n}^{4}\to\infty\)_._
* \(\tilde{k}_{\gamma_{n}}\) _denotes the Laplace kernel with a sequence of positive bandwidths_ \((\gamma_{n})\) _fulfilling_ \(\gamma_{n}=O\left(n^{-(2+\alpha)/\beta_{X}}/\log(n)\right)\)_, where_ \(\alpha>0\) _is arbitrary._

_Then the minimum-norm interpolant of the \(\rho_{n}\)-regularized spiky-smooth kernel sequence \(k_{n}\coloneqq k_{\rho_{n},\gamma_{n}}\) is consistent for \(P\)._

**Proof idea.** With sharp spikes \(\gamma\to 0\), it holds that \(\tilde{k}_{\gamma}(\mathbf{X},\mathbf{X})\approx\mathbf{I}_{n}\), with high probability.
Hence, ridgeless kernel regression with the spiky-smooth kernel interpolates the training set while approximating kernel ridge regression with the smooth component \(\tilde{k}\) and regularization \(\rho\). The theorem even holds under much weaker assumptions on the decay behavior of the spike component \(\tilde{k}_{\gamma_{n}}\), including Gaussian and Matérn kernels. The full version of the theorem and its proof can be found in Appendix G. It also applies to kernels and distributions on the sphere \(\mathbb{S}^{d}\).

Figure 2: The spiky-smooth kernel with Laplace components (orange) consists of a Laplace kernel (blue) plus a Laplace kernel of height \(\rho\) and small bandwidth \(\gamma\).

### From spiky-smooth kernels to spiky-smooth activation functions

So far, our discussion revolved around the properties of kernels and whether they lead to estimators that are consistent. We now turn our attention to the neural network side. The big question is whether it is possible to specifically design activation functions that enable benign overfitting in fixed, possibly small dimension. We will see that the answer is yes: similarly to adding sharp spikes to a kernel, we add tiny fluctuations to the activation function. Concretely, we exploit (Simon et al., 2022, Theorem 3.1). It states that any dot-product kernel on the sphere that is a dot-product kernel in every dimension \(d\) can be written as an NNGP kernel or an NTK of two-layer fully-connected networks with a specifically chosen activation function. Further details can be found in Appendix H.

**Theorem 8** (**Connecting kernels and activation functions**(Simon et al., 2022)).: _Let \(\kappa:[-1,1]\to\mathbb{R}\) be a function such that \(k_{d}:\mathbb{S}^{d}\times\mathbb{S}^{d}\to\mathbb{R},k_{d}(\mathbf{x},\mathbf{x}^{\prime})=\kappa(\langle\mathbf{x},\mathbf{x}^{\prime}\rangle)\) is a kernel for every \(d\geq 1\).
Then, there exist \(b_{i}\geq 0\) with \(\sum_{i=0}^{\infty}b_{i}<\infty\) such that \(\kappa(t)=\sum_{i=0}^{\infty}b_{i}t^{i}\), and for any choice of signs \((s_{i})_{i\in\mathbb{N}}\subseteq\{-1,+1\}\), the kernel \(k_{d}\) can be realized as the NNGP or NTK of a two-layer fully-connected network with activation function_

\[\phi_{NNGP}^{k_{d}}(x)=\sum_{i=0}^{\infty}s_{i}(b_{i})^{1/2}h_{i}(x),\qquad\phi_{NTK}^{k_{d}}(x)=\sum_{i=0}^{\infty}s_{i}\left(\frac{b_{i}}{i+1}\right)^{1/2}h_{i}(x). \tag{5}\]

_Here, \(h_{i}\) denotes the \(i\)-th Probabilist's Hermite polynomial normalized such that \(\|h_{i}\|_{L_{2}(\mathcal{N}(0,1))}=1\)._

The following proposition justifies the approach of adding spikes \(\rho^{1/2}\phi^{\tilde{k}_{\gamma}}\) to an activation function to enable harmless interpolation with wide neural networks. Here we state the result for the case of the NTK; an analogous result holds for induced NNGP activation functions.

**Proposition 9** (**Additive decomposition of spiky-smooth activation functions**).: _Fix arbitrary \(\tilde{\gamma},\rho>0\). Let \(k=\tilde{k}+\rho\tilde{k}_{\gamma}\) denote the spiky-smooth kernel, where \(\tilde{k}\) and \(\tilde{k}_{\gamma}\) are Gaussian kernels of bandwidth \(\tilde{\gamma}\) and \(\gamma\), respectively. Assume that we choose signs \(\{s_{i}\}_{i\in\mathbb{N}}\) and then the activation functions \(\phi_{NTK}^{k}\), \(\phi_{NTK}^{\tilde{k}}\) and \(\phi_{NTK}^{\tilde{k}_{\gamma}}\) as in Theorem 8.
Then, for \(\gamma>0\) small enough, it holds that_

\[\|\phi_{NTK}^{k}-(\phi_{NTK}^{\tilde{k}}+\sqrt{\rho}\cdot\phi_{NTK}^{\tilde{k}_{\gamma}})\|_{L_{2}(\mathcal{N}(0,1))}^{2}\leq 2^{1/2}\rho\gamma^{3/2}\exp\left(-\frac{1}{\gamma}\right)+\frac{4\pi(1+\tilde{\gamma})\gamma}{\tilde{\gamma}}.\]

**Proof idea.** When the spikes are sharp enough (\(\gamma\) small enough), the smooth and the spiky component of the activation function are approximately orthogonal in \(L_{2}(\mathcal{N}(0,1))\) (Figure 3c), so that the spiky-smooth activation function can be approximately additively decomposed into the smooth activation component \(\phi^{\tilde{k}}\) and the spike component \(\phi^{\tilde{k}_{\gamma}}\) responsible for interpolation.

To motivate why the added spike functions \(\rho^{1/2}\phi^{\tilde{k}_{\gamma}}\) should have small amplitudes, observe that Gaussian activation components \(\phi^{\tilde{k}_{\gamma}}\) satisfy

\[\|\phi_{NNGP}^{\tilde{k}_{\gamma}}\|_{L_{2}(\mathcal{N}(0,1))}^{2}=1,\qquad\|\phi_{NTK}^{\tilde{k}_{\gamma}}\|_{L_{2}(\mathcal{N}(0,1))}^{2}=\frac{\gamma}{2}\left(1-\exp\left(-\frac{2}{\gamma}\right)\right). \tag{6}\]

Hence, the average amplitude of NNGP spike activation components \(\rho^{1/2}\phi^{\tilde{k}_{\gamma}}\) does not depend on \(\gamma\), while the average amplitude of NTK spike components decays to \(0\) with \(\gamma\to 0\). Since consistency requires the quasi-regularization \(\rho\to 0\), the spiky component of induced NTK as well as NNGP activation functions should vanish for large data sets \(n\to\infty\) to achieve consistency.

## 6 Experiments

Now we explore what appropriate spiky-smooth activation functions might look like and whether they indeed enable harmless interpolation for trained networks of finite width on finite data sets. Further experimental results are reported in Appendix I.1.

### What do common activation functions lack in order to achieve harmless interpolation?
To understand which properties we have to introduce into activation functions to enable harmless interpolation, we plot NTK spike components \(\phi^{k_{\gamma}}\) induced by the Gaussian kernel (Figure 3a,b) as well as their Hermite series coefficients (Figure 3c). Remarkably, the spike components \(\phi^{k_{\gamma}}\) approximately correspond to a shifted, high-frequency \(\sin\)-curve, when choosing the signs \(s_{i}\) in (5) to alternate every second \(i\), that is \(s_{i}=+1\) iff \(\lfloor i/2\rfloor\) even (Figure 3a). We empirically determine (Appendix I.6) that the NNGP activation functions are well approximated by the fluctuation function \[\omega_{\text{NNGP}}(x;\gamma)\coloneqq\sqrt{2}\cdot\sin\left(\sqrt{2/\gamma }\cdot x+\pi/4\right)=\sin\left(\sqrt{2/\gamma}\cdot x\right)+\cos\left(\sqrt{ 2/\gamma}\cdot x\right), \tag{7}\] where the last equation follows from the trigonometric addition theorem. For small bandwidths \(\gamma\), the NTK activation functions are increasingly well approximated by \[\omega_{\text{NTK}}(x;\gamma)\coloneqq\sqrt{\gamma}\cdot\sin\left(\sqrt{2/ \gamma}\cdot x+\pi/4\right)=\sqrt{\gamma/2}\left(\sin\left(\sqrt{2/\gamma} \cdot x\right)+\cos\left(\sqrt{2/\gamma}\cdot x\right)\right). \tag{8}\] With decreasing bandwidth \(\gamma\to 0\) the frequency increases, while the amplitude decreases for the NTK and remains constant for the NNGP (see Eq. (6)). Plotting equivalent spike components \(\phi^{k_{\gamma}}\) with different choices of the signs \(s_{i}\) (Figure 3b and Appendix I.5) suggests that harmless interpolation requires activation functions that contain **small high-frequency oscillations** or that **explode at large**\(|x|\), which only affects few neurons. The Hermite series expansion of suitable activation functions should contain **non-negligible weight spread across high-order coefficients** (Figure 3c). While Simon et al. 
(2022) already truncate the Hermite series of induced activation functions at order 5, Figure 3c shows that an accurate approximation of spiky-smooth activation functions requires the truncation index to be larger than \(2/\gamma\). Only a careful implementation allows us to capture the high-order fluctuations in the Hermite series of the spiky activation functions.

Figure 3: **a., b.** Gaussian NTK activation components \(\phi^{k_{\gamma}}_{NTK}\) defined via (5) induced by the Gaussian kernel with varying bandwidth \(\gamma\in\{0.2,0.1,0.05\}\) (the darker, the smaller \(\gamma\)) for **a.** bi-alternating signs \(s_{i}=+1\) iff \(\lfloor i/2\rfloor\) is even, and **b.** randomly i.i.d. chosen signs \(s_{i}\sim\mathcal{U}(\{-1,+1\})\). **c.** Coefficients of the Hermite series of a Gaussian NTK activation component with varying bandwidth \(\gamma\). Observe the peaks at \(2/\gamma\). For reliable approximations of activation functions, use a truncation index \(\geq 4/\gamma\). The sum of squares of the coefficients follows Eq. (6). Figure I.8 visualizes NNGP activation components.

### Training neural networks to achieve harmless interpolation in low dimension

In Figure 1, we plot the results of (a) ridgeless kernel regression and (b) trained 2-layer neural networks with standard choices of kernels and activation functions (blue) as well as our spiky-smooth alternatives (orange). We trained on 15 points sampled i.i.d. from \(x=(x_{1},x_{2})\sim\mathcal{U}(\mathbb{S}^{1})\) and \(y=x_{1}+\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,0.25)\). The figure shows that both the Laplace kernel and standard ReLU networks interpolate the training data too smoothly in low dimension, and do not generalize well. However, our spiky-smooth kernel and neural networks with spiky-smooth activation functions achieve close to optimal generalization while interpolating the training data with sharp spikes. We achieve this by using the adjusted activation function with high-frequency oscillations \(x\mapsto\text{ReLU}(x)+\omega_{\text{NTK}}(x;\frac{1}{5000})\) as defined in Eq. (8). With this choice, we avoid activation functions with exploding behavior, which would induce exploding gradients. Other choices of amplitude and frequency in Eq. (8) perform worse.

Over the course of training (Figure 1c), the standard ReLU network exhibits harmful overfitting, whereas the NN with a spiky-smooth activation function quickly interpolates the training set with nearly optimal generalization. Training details and hyperparameter choices can be found in Appendix I.1. Although the high-frequency oscillations perturb the gradients, the NN with spiky-smooth activation has a stable training trajectory using gradient descent with a large learning rate of 0.4 or stochastic gradient descent with a learning rate of 0.04. Since our activation function is the sum of two terms, we can additively decompose the network into its ReLU-component and its \(\omega_{\text{NTK}}\)-component. Figure 1b and Appendix I.2 demonstrate that our interpretation of the \(\omega_{\text{NTK}}\)-component as 'spiky' is accurate: the oscillations in the hidden neurons induced by \(\omega_{\text{NTK}}\) interfere constructively to interpolate the noise in the training points and regress to 0 between training points. This entails immediate access to the signal component of the trained neural network in the form of its ReLU-component.

## 7 Conclusion

Conceptually, our work shows that inconsistency of overfitting is quite a generic phenomenon for regression in fixed dimension. However, particular spiky-smooth estimators enable benign overfitting, even in fixed dimension. We translate the spikes that lead to benign overfitting in kernel regression into infinitesimal fluctuations that can be added to activation functions to consistently interpolate with wide neural networks.
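As a concrete illustration of this translation, the fluctuation functions of Eqs. (7) and (8) can be written down directly; the sketch below (our own, with the bandwidth \(\gamma=1/5000\) from the experiment in Section 6.2) builds the spiky-smooth ReLU activation, whose deviation from plain ReLU is bounded by the amplitude \(\sqrt{\gamma}\approx 0.014\).

```python
import numpy as np

def omega_nngp(x, gamma):
    # Eq. (7): approximation of the NNGP spike activation component
    return np.sqrt(2.0) * np.sin(np.sqrt(2.0 / gamma) * x + np.pi / 4)

def omega_ntk(x, gamma):
    # Eq. (8): approximation of the NTK spike activation component
    return np.sqrt(gamma) * np.sin(np.sqrt(2.0 / gamma) * x + np.pi / 4)

def spiky_smooth_relu(x, gamma=1.0 / 5000):
    # activation from Section 6.2: ReLU plus a tiny high-frequency oscillation
    return np.maximum(x, 0.0) + omega_ntk(x, gamma)

x = np.linspace(-2.0, 2.0, 10001)
deviation = np.max(np.abs(spiky_smooth_relu(x) - np.maximum(x, 0.0)))
print(deviation)  # bounded by the amplitude sqrt(gamma) ~ 0.0141
```

As \(\gamma\to 0\), the frequency \(\sqrt{2/\gamma}\) grows while the NTK amplitude \(\sqrt{\gamma}\) vanishes, mirroring Eq. (6).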
Our experiments verify that neural networks with spiky-smooth activation functions can exhibit benign overfitting even on small, low-dimensional data sets. Technically, our inconsistency results cover many distributions, Sobolev spaces of arbitrary order, and arbitrary RKHS-norm-bounded overfitting estimators. Lemma E.1 serves as a generic tool to extend generalization bounds to the sphere \(\mathbb{S}^{d}\), allowing us to cover (deep) ReLU NTKs and ReLU NNGPs.

**Future work.** While our experiments serve as a promising proof of concept, it remains unclear how to design activation functions that enable harmless interpolation of more complex neural network architectures and data sets. As another interesting insight, our consistent kernel sequence shows that although kernels may have equivalent RKHS (see Appendix G.2), their generalization error can differ arbitrarily much; the constants of the equivalence matter, and the narrative that depth does not matter in the NTK regime as in Bietti and Bach (2021) is too simplified. More promisingly, analyses that extend our analysis in the infinite-width limit to a joint scaling of width and depth could help us to understand the influence of depth (Fort et al., 2020; Li et al., 2021; Seleznova and Kutyniok, 2022).

## Acknowledgements

Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016 and EXC 2064/1 - Project 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Moritz Haas and David Holzmüller. We want to thank Tizian Wenzel for interesting discussions. We also thank Nadine Große, Jens Wirth, and Daniel Winkle for helpful comments on Sobolev spaces.
2305.13718
Exploring Self-supervised Logic-enhanced Training for Large Language Models
Existing efforts to improve logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks. The development of Large Language Models (LLMs) has demonstrated the capacity of compressing abundant knowledge into a single proxy, enabling them to tackle multiple tasks effectively. Our preliminary experiments, nevertheless, show that LLMs lack capability in logical reasoning. The performance of LLMs on logical reasoning benchmarks is far behind the existing state-of-the-art baselines. In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training, and activating it via in-context learning, which we term LogicLLM. Specifically, we devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion. The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM. Besides, we conduct extensive ablation studies to analyze the key factors in designing logic-oriented proxy tasks.
Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty
2023-05-23T06:13:10Z
http://arxiv.org/abs/2305.13718v7
# LogicLLM: Exploring Self-supervised Logic-enhanced Training for Large Language Models

###### Abstract

Existing efforts to improve the logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks. The development of Large Language Models (LLMs) has demonstrated the capacity of compressing abundant knowledge into a single proxy, enabling them to tackle multiple tasks effectively. Our preliminary experiments, nevertheless, show that LLMs lack capability in logical reasoning. The performance of LLMs on logical reasoning benchmarks is far behind the existing state-of-the-art baselines. In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training, and activating it via in-context learning, which we term LogicLLM. Specifically, we devise an auto-regressive objective variant of MERIt (Jiao et al., 2022) and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion. The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM. Besides, we conduct extensive ablation studies to analyze the key factors in designing logic-oriented proxy tasks.1

Footnote 1: The code, training data, and trained models will be released at github.com/SparkJiao/MERIt-v2.

## 1 Introduction

Logical reasoning has long been treated as a complex yet essential ability in cognition that goes beyond fundamental language understanding. It also serves as a necessary foundation in negotiation, debate and writing (Yu et al., 2020). Thus, extensive efforts have been made in recent years to enhance the logical reasoning ability of language models. Previous approaches have primarily focused on incorporating knowledge about logic.
For instance, some have utilized graph neural networks to acquire relational representations of semantic elements (Huang et al., 2021), while others have employed data augmentation techniques to introduce first-order logic (Wang et al., 2022). However, these approaches still require a substantial amount of annotated training data for fine-tuning, which restricts the model's ability to generalize to new tasks due to discrepancies in data distribution and optimization objectives. On the other hand, the recent development of Large Language Models (LLMs), e.g., PaLM (Chowdhery et al., 2022), LLaMA (Touvron et al., 2023), ChatGPT2, GPT-4 (OpenAI, 2023)3, and Bard4, suggests that LLMs can serve as a proxy to compress abundant knowledge, and solve various tasks following human instructions with less specialization. However, as shown in Table 1, our experiments on logical reasoning benchmarks reveal that LLMs also lack the ability to do logical reasoning. There are also results showing that the improvement brought by few-shot exemplars or Chain-of-Thought (CoT) prompting (Wei et al., 2022) is also limited (Zhong et al., 2023).

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**ReClor**} & \multicolumn{2}{c}{**LogiQA-v2**} \\ Model / Dataset & Dev & Test & Dev & Test \\ & Acc. & Acc. & Acc. & Acc. \\ \hline RoBERTa-L & 62.6 & 55.6 & 59.8 & 57.0 \\ MERIt (RoBERTa-L) & 69.4 & 61.6 & **62.6** & **59.3** \\ MERIt (DeBERTa-XXL) & **80.6** & **78.1** & — & — \\ \hline LLaMA-7B & 28.8 & 28.3 & 24.4 & 23.7 \\ LLaMA-13B & 31.6 & 34.4 & 31.6 & 31.1 \\ \hline GPT-3.5-turbo & 56.6 & 61.2 & 54.5 & 52.7 \\ w/ CoT & 58.8 & 57.7 & — & 53.1 \\ \hline \hline \end{tabular} \end{table} Table 1: The overall accuracy of LLMs, i.e., ChatGPT (GPT-3.5-turbo) and LLaMA, and existing state-of-the-art methods (Jiao et al., 2022) on logical reasoning benchmarks. The evaluation of LLMs follows the zero-shot in-context learning setting, where the models are expected to decode the answer based on the given instruction, context, and question.

As the example in Figure 1 shows, logical reasoning in natural language can be challenging due to the various relations and expressions involved, which are difficult to transcribe into symbolic or formal languages. Existing work that aims to enhance particular capabilities of LLMs can be categorized into two broad approaches. The first category relies on external tools or APIs, such as a search engine (information retriever) (Schick et al., 2023; Mialon et al., 2023), SQL queries (Cheng et al., 2022), or a calculator (Gao et al., 2022; Chen et al., 2022). These approaches allow LLMs to focus on argument parsing and semantic understanding, resulting in more accurate step-by-step results compared to those generated through auto-regressive decoding. However, this approach is of limited use for logical reasoning due to the absence of an explicit symbolic or formal language for problem description. Another approach is instruction tuning, involving augmenting data from existing datasets (Aribandi et al., 2022; Wei et al., 2022; Longpre et al., 2023; Xu et al., 2022; Sanh et al., 2022) or annotating human queries with high-quality responses and/or human feedback (Wang et al., 2022b; Xu et al., 2023), which is also non-trivial for logical reasoning given its sparsity and high annotation cost.

In this paper, we aim to explore logic-enhanced meta-training5 for LLMs. As exemplified above, processing the diverse relational expressions entailed in natural language is a key challenge to enhancing logical reasoning. To address this, we introduce MERIt+, a self-supervised pre-training method based on relation discrimination. The original MERIt framework includes data construction for contrastive learning and counterfactual data augmentation for eliminating information shortcuts.
The positive instance is a set of sentences describing the relation between an anchor entity pair from two different views, where one view is direct and the other is indirect. The negative instances are synthesized via relation editing. The counterfactual data augmentation further emphasizes the relation-centric reasoning process with entity replacement to remove the bias from world knowledge.

Footnote 5: [https://github.com/allenai/acl2022-zerofewshot-tutorial](https://github.com/allenai/acl2022-zerofewshot-tutorial).

However, the contrastive learning objective could be inconsistent with auto-regressive decoding, and the computation complexity grows exponentially with the number of negative candidates. In MERIt+, we propose an auto-regressive variant by directly optimizing on the positive instance. Specifically, given a direct/indirect description of the anchor entity pair, the model is asked to generate the indirect/direct one via auto-regressive decoding. The entity replacement-based counterfactual data augmentation is kept to make sure the LLM cannot copy the results from its memory. The experiments are conducted on two LLM series, FLAN-T5 (Longpre et al., 2023) and LLaMA (Touvron et al., 2023), with different parameter sizes. We design experiments to study two questions: (1) is it feasible to enhance logic ability via only self-supervised meta-training for LLMs, i.e., without task-specific supervised fine-tuning? and (2) does the logic-enhanced training improve the general language understanding of LLMs? For the first question, we find that the LLMs trained with the MERIt+ objective perform better on logical reasoning benchmarks without the need for further fine-tuning. As for the second question, we find that the learned logic ability also boosts the performance on general language understanding, i.e., on MMLU (Hendrycks et al., 2021) and BIG-Bench-Hard (Suzgun et al., 2022). The contributions of this paper can be summarized as follows: 1.
We evaluate the capability of LLMs on logical reasoning benchmarks, demonstrating the weakness of LLMs on logical reasoning.

2. We propose an auto-regressive and self-supervised framework for logic-enhanced meta-training, MERIt+, and our experiments have verified the feasibility of introducing complex abilities into LLMs without task-specific fine-tuning.

3. We conduct extensive experiments on both FLAN-T5 and LLaMA series, with model parameter sizes ranging from 3 to 13 billion. The results on logical reasoning and language understanding demonstrate the effectiveness of our approach. We have also explored the possible effects of different pre-training strategies to offer some insights for future research.

Figure 1: An example of logical reasoning from the LogiQA-v2 dataset (Liu et al., 2020). The relations between different constituents, e.g., _agriculture_ and _development of Andean society_, include various predicates, and the example can be hard to convert into logical form through either first-order logic or formal language.

## 2 Related Work

### Reasoning in Natural Language

As a hallmark of human intelligence, natural language reasoning has long been attracting the persistent attention of researchers. Since the release of SQuAD (Rajpurkar et al., 2018), the emphasis of this area has shifted from _explicit_, _superficial_ reading comprehension (Reddy et al., 2019) to _implicit_, _complex_, and _specific_ reasoning capabilities, e.g., multi-hop reasoning (Yang et al., 2018; Welbl et al., 2018), numerical reasoning (Dua et al., 2019; Zhu et al., 2021), and logical reasoning (Yu et al., 2020; Liu et al., 2020), etc. Prior work to facilitate the natural language reasoning task can be roughly divided into three categories. The first kind of reasoning methods is based on explicit prior knowledge, such as discourse structure or linguistic knowledge, to model the implicit reasoning process (Gao et al., 2020; Huang et al., 2021).
However, since the prior knowledge is often task-specific, these methods can be easily affected by dataset bias. The second category focuses on general-domain pre-training to enhance task-specific reasoning abilities. For example, Jiao et al. (2021) proposed a retrieval-based self-supervised objective to augment evidence extraction, where the training data is constructed from Wikipedia by masking the key information for sentence reordering. Deng et al. (2021) and Liu et al. (2022) selectively masked the entities and relations in a document based on co-occurrence or logical relations and employed masked language modeling (MLM; Devlin et al., 2019) to reconstruct them, enhancing the reasoning ability of pre-trained models. The third category is neural-symbolic reasoning, where the variables are first parsed and then some pre-defined programs are executed to obtain the final results (Wang et al., 2022; Zhong et al., 2021). An inevitable problem in these methods is that extracting the variables or arguments requires a strong semantic parser or available annotations, which is impractical for natural-language-based logical reasoning. Our approach falls within the second category, improving on the efforts of Jiao et al. (2022) by transforming the method into an auto-regressive framework and enhancing the logic of LLMs under the in-context learning setting.

### Pre-trained Language Model

Over the past few years, significant progress has been made in natural language processing (NLP) with the development of pre-trained language models. Starting with BERT (Devlin et al., 2019) and GPT (Radford and Narasimhan, 2018), pre-training followed by fine-tuning has been the predominant paradigm for several years and has also influenced other research fields, such as computer vision (He et al., 2022; Bao et al., 2022) and biology (Rives et al., 2021).
The large text corpora used to pre-train these models have enabled them to achieve strong generalization capabilities across different domains (Liu et al., 2021; He et al., 2021). Furthermore, more pre-training frameworks have been developed to improve learning efficiency (Liu et al., 2019; He et al., 2021; Clark et al., 2020) or enhance generative ability (Raffel et al., 2020; Lewis et al., 2020). On the other hand, zero-shot task-solving capability has also emerged (Brown et al., 2020), including some complex abilities such as commonsense and arithmetic reasoning (Wei et al., 2022). In-context learning has become a new paradigm for using large language models by providing an instruction or few-shot exemplars as prefixes (Dong et al., 2022; Ding et al., 2022). Instruction tuning (Ouyang et al., 2022; Wei et al., 2022; Longpre et al., 2023) also serves as a foundation for large language models to learn to follow human instructions and maintain fluent conversation. Nevertheless, the prompt-based approach does not alter model parameters and is therefore inherently limited in incorporating novel knowledge, posing considerable challenges for the introduction of new information (Zhao et al., 2023). Our experiments also demonstrate that, despite having strong in-context learning capabilities, it can still be challenging for LLMs to learn complex reasoning abilities merely from limited observed examples.

## 3 Method

### MERIt+

MERIt (Jiao et al., 2022) is a self-supervised pre-training method for logical reasoning, implemented by automatically constructing training samples for contrastive learning from Wikipedia documents. Given an anchor entity pair, a positive instance describes a pair of logically consistent relations between these two entities, namely direct and indirect relations. The negative instances are synthesized via relation replacement.
Besides, in order to eliminate the information shortcut caused by the factual knowledge from LM pre-training, auto-generated counterfactual data obtained via entity replacement is mixed in at a specific ratio. In this paper, we omit the details of data construction and refer readers to Jiao et al. (2022). We mainly focus on the integration of the original contrastive learning objective into the training process of LLMs. Given an anchor sample \(x\) composed of a series of sentences, the logically consistent sample is denoted as \(x^{+}\), and the negatively paired samples are represented as \(\mathcal{X}^{-}\). The training objective for contrastive learning is formulated as: \[\begin{split}\mathcal{L}_{\mathrm{CL}}&=L(x,x^{+},\mathcal{X}^{-})\\ &=-\log\frac{\exp f(x,x^{+})}{\sum_{x^{\prime}\in\mathcal{X}^{-} \cup\{x^{+}\}}\exp f(x,x^{\prime})},\end{split} \tag{1}\] where \(f(\cdot)\) is the LLM taking the concatenation of \([x,x^{+}]\) or \([x,x^{-}]\) as input to compute the score. Different from the conventional approach (Lewis et al., 2020), where the representation of the EOS token is employed for score computation, we employ a token-wise scoring implementation to provide fine-grained optimization. The process can be formulated as \[f(x,x^{\prime})=\frac{1}{|x^{\prime}|}\sum_{|x|\leq i\leq|x|+|x^{\prime}|}\left(\mathbf{W}\mathbf{h}_{i}+\mathbf{b}\right), \tag{2}\] where \(x^{\prime}\in\mathcal{X}^{-}\cup\{x^{+}\}\) is a candidate, \(\mathbf{W}\in\mathbb{R}^{d\times 1}\), \(\mathbf{b}\) is a bias term, and \(\mathbf{h}_{i}\) is the representation of the \(i\)-th token from the LLM. However, the contrastive learning based method could be unsuitable for LLM pre-training because it involves negative candidates and leads to exponential computation cost, which prolongs the training time.
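Concretely, Eqs. (1) and (2) form an InfoNCE-style loss over sequence-level scores, each obtained by averaging a linear head over a candidate's token representations. The following is a minimal numpy sketch, not the paper's implementation: the hidden-state arrays stand in for LLM outputs, and treating \(\mathbf{W}\) as a plain vector with a scalar bias is an illustrative simplification.

```python
import numpy as np

def f_score(cand_hidden, W, b):
    """Eq. (2): sequence score = mean over the candidate's tokens of W h_i + b."""
    return float(np.mean(cand_hidden @ W + b))

def contrastive_loss(pos_hidden, neg_hiddens, W, b):
    """Eq. (1): softmax cross-entropy of the positive candidate's score
    against the scores of all candidates (positive plus negatives)."""
    scores = np.array([f_score(pos_hidden, W, b)] +
                      [f_score(h, W, b) for h in neg_hiddens])
    scores -= scores.max()                          # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(-np.log(probs[0]))                 # positive sits at index 0
```

When the positive candidate scores well above the negatives, the loss approaches zero, mirroring how Eq. (1) pushes the positive's score up relative to the candidate pool.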
To this end, we propose an auto-regressive variant by directly optimizing the positive sequence, which is defined as: \[\begin{split}\mathcal{L}_{\mathrm{AR}}&=-\log P(x^ {+}|x)\\ &=-\sum_{i=1}^{|x^{+}|}\log P(x_{i}^{+}|x_{<i}^{+},x),\end{split} \tag{3}\] where \(x_{i}^{+}\) denotes the \(i\)-th token of \(x^{+}\). It should be noted that in this case the carefully constructed negative samples are ignored; only the counterfactual data augmentation strategy of the original MERIt remains in effect.

### LLM Prompting

In order to test the generalization across different tasks after post-training, we evaluate LLMs through prompting, where the input to LLMs can be formulated as Instruction [Exemplars] Task input. The instruction depends on the specific task, and exemplars are used only in the few-shot setting. Each exemplar is composed of a task input and output. As shown in Figure 2, for multiple-choice question answering, the task input is the concatenation of the context, question and all options, with the correct option index as the output. In the CoT setting, we also include the reasoning process in natural language between the task input and output. Following Hendrycks et al. (2021), during inference, for questions with candidate options, we enumerate each option to form different response templates, e.g., _The answer is D_. Then we can directly parse the prediction according to the last-token probability instead of using regular expressions, which greatly accelerates the evaluation process. For free-form answers and the evaluation of API-based models, e.g., ChatGPT, we still use regular expressions to extract the answer from the generated content.

## 4 Experiment

### Foundation Model and Evaluation Benchmark

We incorporate our pre-training approach into two representative LLMs, namely LLaMA (Touvron et al., 2023) and FLAN-T5 (Wei et al., 2022). These models have parameters ranging from 3 billion to 13 billion.
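The template-based option scoring used for inference above can be sketched as follows. The log-probability function here is a toy stand-in for an LLM's scoring of a templated answer (a real system reads the model's last-token probability); the template wording and option ids are illustrative assumptions.

```python
# Toy stand-in for an LLM's log-probability of a completion given a prompt.
# This fake scorer simply prefers whichever option id it is built with.
def make_toy_scorer(preferred):
    def logprob(prompt, completion):
        return 0.0 if preferred in completion else -5.0
    return logprob

def score_options(prompt, logprob, option_ids=("A", "B", "C", "D")):
    """Enumerate each option index in a fixed response template and return
    the option whose templated answer receives the highest score."""
    template = "The answer is {}."
    scores = {o: logprob(prompt, template.format(o)) for o in option_ids}
    return max(scores, key=scores.get)
```

Because every option is scored against the same fixed template, the prediction can be read off from a single probability comparison, which is the shortcut that avoids regular-expression parsing for models with accessible logits.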
To test logical reasoning capabilities from different perspectives, we have selected five datasets spanning three categories. ReClor (Yu et al., 2020) and LogiQA-v2 (Liu et al., 2020) are two logical reasoning benchmarks, sourced respectively from standardized graduate admission examinations and logical examination papers intended for reading comprehension. RACE (Lai et al., 2017) and MMLU (Hendrycks et al., 2021) are used for evaluating the general language understanding of our proposed approach. BIG-Bench-Hard is a series of challenging tasks for LLMs (Suzgun et al., 2022). We employ this dataset to test whether the model can apply logical reasoning to complex tasks, such as causal judgment and tracking shuffled objects, and to explore which tasks or abilities might be negatively impacted by logic-oriented pre-training.

### Implementation Details

**Data** We follow the original setting in MERIt to construct the logic-oriented pre-training corpus from Wikipedia. For FLAN-T5 series models, we use the FLAN collection together with the MERIt+ corpus for joint training. The FLAN collection is a large instruction-tuning collection comprising 1.8k tasks, and we include it to avoid catastrophic forgetting of instruction following and foundation capabilities. We randomly sample a subset of the FLAN collection following the pre-defined task data ratios in the original paper. For LLaMA series models, we only use the MERIt+ corpus.

**Settings** We set the batch size to 4096 in the pre-training process, implemented via gradient accumulation. The max sequence length is truncated at 1024 for the FLAN collection and at 512 for the MERIt+ corpus. The number of training steps is set to 200 for FLAN-T5 series models and 500 for LLaMA series models. The learning rate is set to 1e-4, 5e-5, 1e-5, and 5e-6 for FLAN-T5-3B, FLAN-T5-11B, LLaMA-7B, and LLaMA-13B, respectively. The training process is conducted on 8 NVIDIA A100 80G GPUs.

**Evaluation**
For each dataset, we use identical instructions and few-shot samples across different models to ensure a fair comparison. For the BIG-Bench-Hard test, we used the officially provided annotated exemplars for both direct few-shot and CoT evaluations. We use accuracy as the evaluation metric. For models with accessible options, such as FLAN-T5 and LLaMA, we directly compute the probability of each option index using a fixed template, and select the option with the highest probability as the answer. For API-based models like ChatGPT, we follow the method outlined by Zhang et al. (2023), using regular expressions to parse predictions from the generated responses. Some questions in the MMLU benchmark have free-form answers, for which we calculate accuracy based on exact matches.

## 5 Results and Analysis

### Overall Results

#### 5.1.1 Logical Reasoning

Table 2 shows the results on ReClor and LogiQA-v2 under both zero-shot and 3-shot settings.

**Zero-shot** For zero-shot experiments, we first directly tested ChatGPT using its API. This yielded accuracy scores of 56.6 and 61.2 on the dev and test sets of ReClor, respectively, and scores of 54.5 and 52.7 on the dev and test sets of LogiQA-v2, respectively. Although these results generally lag behind the current state-of-the-art models under supervised fine-tuning settings, they remain promising. As shown in Table 1, the zero-shot results of ChatGPT are comparable to the fully supervised results of MERIt-RoBERTa-L (61.2 vs. 61.6). Models based on LLaMA significantly underperform compared to ChatGPT.

Figure 2: Prompting format of different tasks. [Exemplars] refers to samples annotated with the correct answer from the training set, keeping exactly the same format as the following input.

The original LLaMA-7B model yields accuracy scores of 30.3 and 28.1 on the test sets of ReClor and LogiQA-v2, respectively.
The LLaMA-13B model improves upon LLaMA-7B by 3.2 and 4.0 points on the test sets of ReClor and LogiQA-v2, respectively. By training LLaMA models with MERIt+, we can significantly improve their zero-shot logical reasoning capabilities. For example, for the LLaMA-13B model, our logic-oriented meta-training boosts results by 7.0 and 3.8 points on the dev and test sets of ReClor, respectively, and by 1.1 and 2.1 points on the dev and test sets of LogiQA-v2, respectively. The improvements to LLaMA-13B are more significant than those to LLaMA-7B, which is consistent with the observations about emergent abilities (Wei et al., 2022). One possible reason is that larger models have stronger generalization and can better apply the learned abilities to different tasks.6

Footnote 6: We leave meta-training with larger LLaMA models, such as LLaMA-65B, for future work.

For models based on FLAN-T5, we can draw similar conclusions about logic-enhanced meta-training as we did with the LLaMA-based models. FLAN-T5-3B provides accuracy scores of 52.5 and 48.7 on the test sets of ReClor and LogiQA-v2, respectively, which are much better than those of LLaMA-13B. This demonstrates the utility of the FLAN collection data and instruction training for logical reasoning. FLAN-T5-11B outperforms ChatGPT on the dev set of ReClor, as well as on the dev and test sets of LogiQA-v2, demonstrating that it serves as a strong base model for logical reasoning. Our logic-oriented meta-training also enhances both FLAN-T5-3B and FLAN-T5-11B. For FLAN-T5-11B, our model achieves accuracies of 61.2 and 61.1 on the dev and test sets of ReClor, respectively, and 56.0 and 54.0 on the dev and test sets of LogiQA-v2, respectively. On the dev set of ReClor, our logic-oriented FLAN-T5-11B model **outperforms ChatGPT by 4.8 accuracy points**.
On the dev and test sets of LogiQA-v2, our logic-oriented FLAN-T5-11B model **surpasses ChatGPT by 1.5 and 1.3 accuracy points**, respectively, establishing a new state of the art for zero-shot logical reasoning. The overall results suggest that instruction tuning on multiple supervised datasets, such as the FLAN collection, still leaves room for improvement in learning logic. We hypothesize that this is due to the sparsity of reasoning-relevant data in the entire collection and the conflicts between different tasks, as FLAN includes multiple reasoning tasks.

**3-shot** Under the 3-shot setting, accuracy scores of 50.3 and 49.1 are obtained on the dev and test sets of ReClor, respectively, which are lower than the results from the zero-shot setting. The score on the test set of LogiQA-v2 is 49.1, which is only slightly better than the result from the zero-shot setting. Our logic-oriented meta-training shows similar improvements when compared to the zero-shot setting, demonstrating that our method can also be effective in few-shot settings. The overall results suggest that the selection of and generalization from exemplars remain open problems. We will discuss this phenomenon in Section 5.5.

### Logic-enhanced Meta-training for General Language Understanding

In addition to logical reasoning in text, we are also curious whether logic-enhanced training contributes to general language understanding. To investigate this, we evaluate the performance of the enhanced LLaMA models on RACE and MMLU. As shown in Table 3, the LLaMA-7B baseline scores 34.9 and 36.0 on the dev and test sets of RACE, respectively, and achieves an accuracy of 30.9 on the MMLU test set7. Training with MERIt+ significantly improves upon the baseline 7B model, enhancing performance by 4.6, 3.7, and 3.1 accuracy points on RACE-Dev, RACE-Test, and MMLU-Test, respectively.
Given that most tasks in MMLU require domain-specific knowledge, which can be challenging to compress into smaller models, the improvements on MMLU-Test demonstrate the robust generalization capabilities of our logic-enhanced meta-training for general language understanding. Similarly, LLaMA-13B outperforms LLaMA-7B on both RACE and MMLU. Our logic-oriented meta-training consistently enhances the performance of LLaMA-13B. These results collectively suggest that our proposed logic-oriented meta-training benefits general language understanding.

Footnote 7: The detailed results on each sub-task of MMLU are displayed in Table 7 in the appendix.

### Logic-enhanced Meta-training for Complex Task Understanding

We evaluate the performance of logic-enhanced pre-trained models on BIG-Bench-Hard, a benchmark composed of multiple challenging tasks on which humans still perform much better than LLMs. From Table 4 we can conclude that (1) MERIt+ enhances performance under both the zero-shot setting and the direct few-shot setting, suggesting that logic-enhanced training also improves performance on complex reasoning tasks; and (2) few-shot prompting, in both the _direct_ and _CoT_ settings, does not yield significant improvements over the zero-shot setting, except for LLaMA. This is because some questions have free-form answers, and LLaMA can hardly stop generating without exemplars.

### Ablation Studies

In this section, we focus on studying the effects of different training objectives. Table 5 demonstrates that, in most data splits, auto-regressive models outperform the contrastive learning-based variants. There are two primary reasons for this observation. First, while contrastive learning's negative candidates introduce additional supervision signals, the heuristic construction process cannot identify truly contradictory relations: most negative samples are randomly selected and lack logically opposite relationships with the positive instances.
Consequently, the model optimizes towards the target relation pair across all different examples, which shares a similar objective with language modeling but at the phrase level. Secondly, the mismatch between contrastive learning and auto-regressive generation can hamper the ability of in-context learning. This issue becomes more pronounced in larger models as they are prone to overfitting. Moreover, the auto-regressive objective significantly reduces computation costs during training by eliminating the need for negative candidates. In summary, considering the advantages in performance and training cost, the auto-regressive variant proves to be a superior choice.

Table 5: Experiments exploring the effect of different training objectives. _Ctr_ refers to the contrastive learning objective and _ar_ to the auto-regressive variant.

| Model / Dataset | ReClor Dev | ReClor Test | LogiQA-v2 Dev | LogiQA-v2 Test |
| --- | --- | --- | --- | --- |
| LLaMA-7B | 30.2 | 30.3 | 27.4 | 28.1 |
| w/ MERIt+ (_ar_) | 32.4 | **31.0** | 27.7 | 28.6 |
| w/ MERIt+ (_ctr_) | **33.2** | 29.6 | **28.2** | **29.4** |
| LLaMA-13B | 30.4 | 33.5 | 33.0 | 32.1 |
| w/ MERIt+ (_ar_) | **37.4** | **36.3** | **34.1** | **34.0** |
| w/ MERIt+ (_ctr_) | 33.4 | 33.3 | 33.1 | 32.7 |

### Weakness of LLMs on Logical Reasoning

Table 6 presents the evaluation of LLMs' performance in few-shot and CoT settings. The intermediate reasoning process is generated automatically by ChatGPT using the prompt _Let's think step by step_. In the case of zero-shot CoT, we append the suffix prompt _So the answer is_ to guide the models in summarizing and concluding the answer. For few-shot CoT, the reasoning process is initially generated for each sample in the training set. Subsequently, we retain the samples where the final prediction is correct, following the steps in zero-shot CoT.
During testing, we randomly select samples from the retained candidates, together with their automatically generated CoT, to serve as exemplars. However, we observe that neither few-shot learning nor CoT significantly benefits the models. One possible reason is that the selected samples differ greatly from the target example. To verify this, we include reasoning category information during exemplar selection. In LogiQA-v2, each question is annotated with a reasoning category, such as categorical reasoning, sufficient conditional reasoning, or necessary conditional reasoning. For few-shot CoT prompting, we only consider candidates that share at least two common reasoning categories. This method variant is denoted as _ChatGPT w/ CoT + Cate._ in the table. Nevertheless, we find that carefully selecting prompting exemplars also provides only limited improvement. All the results indicate that LLMs struggle to grasp the reasoning structure from a limited number of observed examples. Consequently, they are unable to effectively learn the mapping between input-label and input-rationale-label. Additionally, we can observe from Table 2 that MERIt+ also contributes little to addressing this problem. We leave further investigation of this matter to future work.

## 6 Conclusion

In this paper, we first analysed the performance of ChatGPT on logical reasoning benchmarks, as well as its performance with logically enhanced chain-of-thought prompting. From the results we conclude that although language modeling can compress data, both with and without supervision, into neural models and enable them to accept human instructions, it is still weak at performing logical reasoning. Specifically, LLMs often fail to reach correct deductions based on the given facts and rules, and cannot distil the logical reasoning structure from observed prompts to generalize to new problems.
We hope these observations bring some insights to future research on introducing a logic prior into large language models. Going one step further, we combine a previous data-driven approach, i.e., MERIt+, a self-supervised pre-training method for logical reasoning, with FLAN-T5-3B, and the results demonstrate its effectiveness.
2305.04264
ATLASGAL: 3-mm class I methanol masers in high-mass star formation regions
We analyzed the 3-mm wavelength spectral line survey of 408 ATLASGAL clumps observed with the IRAM 30m-telescope, focusing on the class I methanol masers with frequencies near 84, 95 and 104.3 GHz. We detect narrow, maser-like features towards 54, 100 and 4 sources in the maser lines near 84, 95 and 104.3 GHz, respectively. Among them, fifty 84 GHz masers, twenty nine 95 GHz masers and four rare 104.3 GHz masers are new discoveries. The new detections increase the number of known 104.3 GHz masers from 5 to 9. The 95 GHz class I methanol maser is generally stronger than the 84 GHz maser counterpart. We find 9 sources showing class I methanol masers but no SiO emission, indicating that class I methanol masers might be the only signpost of protostellar outflow activity in extremely embedded objects at the earliest evolutionary stage. Class I methanol masers that are associated with sources that show SiO line wings are more numerous and stronger than those without such wings. The total integrated intensity of class I methanol masers is well correlated with the integrated intensity and velocity coverage of the SiO (2--1) emission. The properties of class I methanol masers are positively correlated with the bolometric luminosity, clump mass, peak H$_2$ column density of their associated clumps but uncorrelated with the luminosity-to-mass ratio, dust temperature, and mean H$_2$ volume density. We suggest that the properties of class I masers are related to shocks traced by SiO. Based on our observations, we conclude that class I methanol masers at 84 and 95 GHz can trace a similar evolutionary stage as H$_2$O maser, and appear prior to 6.7 and 12.2 GHz methanol and OH masers. Despite their small number, the 104.3 GHz class I masers appear to trace a short and more evolved stage compared to the other class I masers. [abridged]
W. Yang, Y. Gong, K. M. Menten, J. S. Urquhart, C. Henkel, F. Wyrowski, T. Csengeri, S. P. Ellingsen, A. R. Bemis, J. Jang
2023-05-07T13:01:50Z
http://arxiv.org/abs/2305.04264v1
# ATLASGAL: 3-mm class I methanol masers in high-mass star formation regions

###### Abstract

Context: Class I methanol masers are known to be associated with shocked outflow regions around massive protostars, indicating a possible link between the maser properties and those of their host clumps.

Aims: The main goals of this study are (1) to search for new class I methanol masers, (2) to statistically study the relationship between class I masers and shock tracers, (3) to compare the properties of class I masers and their host clumps, also as a function of their evolutionary stage, and (4) to constrain the physical conditions that excite multiple class I masers simultaneously.

Methods: We analyzed the 3-mm wavelength spectral line survey of 408 ATLASGAL clumps observed with the IRAM 30m telescope, focusing on the class I methanol masers with frequencies near 84, 95 and 104.3 GHz.

Results: We detect narrow, maser-like features towards 54, 100 and 4 sources in the maser lines near 84, 95 and 104.3 GHz, respectively. Among them, fifty 84 GHz masers, twenty-nine 95 GHz masers and four rare 104.3 GHz masers are new discoveries. The new detections increase the number of known 104.3 GHz masers from 5 to 9. The 95 GHz class I methanol maser is generally stronger than its 84 GHz counterpart. We find 9 sources showing class I methanol masers but no SiO emission, indicating that class I methanol masers might be the only signpost of protostellar outflow activity in extremely embedded objects at the earliest evolutionary stage. Class I methanol masers associated with sources that show SiO line wings are more numerous and stronger than those without such wings. The total integrated intensity of class I methanol masers is well correlated with the integrated intensity and velocity coverage of the SiO (2-1) emission.
The properties of class I methanol masers are positively correlated with the bolometric luminosity, clump mass, and peak H\({}_{2}\) column density of their associated clumps, but uncorrelated with the luminosity-to-mass ratio, dust temperature, and mean H\({}_{2}\) volume density.

Conclusions: We suggest that the properties of class I masers are related to shocks traced by SiO. Based on our observations, we conclude that class I methanol masers at 84 and 95 GHz can trace a similar evolutionary stage as H\({}_{2}\)O masers, and appear prior to 6.7 and 12.2 GHz methanol and OH masers. Despite their small number, the 104.3 GHz class I masers appear to trace a short and more evolved stage compared to the other class I masers.

## 1 Introduction

Astronomical methanol (CH\({}_{3}\)OH) masers were first reported in the Orion Kleinmann-Low (KL) nebula (Barrett et al., 1971), shortly after CH\({}_{3}\)OH emission had been discovered in the Galactic center (Ball et al., 1970). Subsequent observations have resulted in the detection of numerous CH\({}_{3}\)OH maser transitions, which have proven to be important tracers of star formation regions (e.g., Menten, 1991). These CH\({}_{3}\)OH masers are divided into two categories, based on their different observational properties and pumping mechanisms (Batrla et al., 1987; Menten, 1991). Class I CH\({}_{3}\)OH masers are thought to be collisionally pumped (e.g., Leurini et al., 2016), and are often found to be offset from embedded infrared sources and ultracompact H ii (UC H ii) regions. In contrast, class II CH\({}_{3}\)OH masers are thought to be radiatively pumped (e.g., Cragg et al., 2005) by the infrared radiation emitted by the massive young stellar objects in whose environments they are found (Walsh et al., 1998). Only a small portion of these produce detectable radio continuum emission that can ionize a UC H ii region (Nguyen et al., 2022).
Based on previous studies (see Table 1 in Leurini et al., 2016, for instance), the three methanol transitions at 84 GHz, 95 GHz, and 104.3 GHz are the brightest class I methanol masers in the 3 mm band. The methanol emission in the \(J_{k}=5_{-1}-4_{0}\,E\) line near 84 GHz belongs to the same line series of strong class I masers as the 36 GHz \(4_{-1}-3_{0}\,E\) transition (i.e., they connect the same \(k\)-ladders). Maser emission at 84 GHz was discovered by Batrla & Menten (1988) and Menten (1991) towards DR21(OH), NGC2264 and OMC-2. Since 2001, several extensive searches for this transition have been conducted towards sources associated with other class I masers, and young bipolar outflows in low- to intermediate-mass star formation regions (Kalenskii et al., 2001, 2006; Rodriguez-Garza et al., 2018; Breen et al., 2019). The detection rate of the 84 GHz methanol line, which can show narrow maser-like features and broad quasi-thermal emission, in these targeted surveys is greater than 70%. Methanol emission in the 95 GHz \(J_{K}=8_{0}-7_{1}\,A^{+}\) transition belongs to the same line series (common \(K\) quantum numbers) as the strong and widespread \(7_{0}-6_{1}\,A^{+}\) maser at 44 GHz (Haschick et al., 1990). The number of known 95 GHz methanol masers has been significantly increased by systematic surveys, including towards (1) extended green objects (EGOs) identified in the 4.5 \(\mu\)m band of the Spitzer Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE), whose emission is dominated by rotationally excited shocked H\({}_{2}\) (Chen et al., 2011, 2013); (2) molecular outflow sources (Gan et al., 2013); (3) many sources associated with both GLIMPSE point sources and 1.1 mm dust continuum emission detected in the Bolocam Galactic Plane Survey (BGPS; Chen et al., 2012); (4) a large sample of BGPS sources (Yang et al., 2017, 2020); and (5) red Midcourse Space Experiment sources (Kim et al., 2018). Ladeyschikov et al.
(2019) compiled an online database of class I methanol masers and reported a total of 129 and 534 maser detections at 84 and 95 GHz, respectively. These facts suggest that both masers are widespread across the Milky Way. On the other hand, the \(11_{-1}-10_{-2}\,E\) methanol maser at 104.3 GHz is rarely seen. Maser emission in this line was first predicted by theoretical calculations (Voronkov, 1999), and was successfully detected towards W33-Met (Voronkov et al., 2005). A follow-up Mopra 104.3 GHz survey led to only two detections out of 69 targets, suggesting that this maser line is rare (Voronkov et al., 2007). Theoretical calculations suggest that more energetic conditions (i.e., higher temperatures and densities) are required to produce these rare class I methanol masers (Sobolev et al., 2005; Voronkov et al., 2012) than the widespread class I methanol masers at 36, 44, 84, and 95 GHz. Prior to this work, only five sources (G019.61\(-\)02, G305.21\(+\)0.21, G357.97\(-\)0.16, IRAS16547\(-\)4247 and W33-Met) were known to harbor 104.3 GHz methanol masers (e.g. Voronkov et al., 2006). However, the overall incidence of the 104.3 GHz methanol masers in the Milky Way is still poorly constrained. It has been established that class I methanol masers are closely related to shocked regions. Based on high-angular resolution observations towards DR21/DR21(OH), Plambeck & Menten (1990) suggested that class I methanol masers trace the interface between outflows and ambient dense clouds. Subsequent observations supported this scenario and found a coincidence between class I masers and other molecular shock tracers (e.g. SiO, Kurtz et al., 2004; HNCO, Gorski et al., 2018; shocked H\({}_{2}\) emission at 4.5 \(\mu\)m and 2.12 \(\mu\)m, Cyganowski et al., 2009; Voronkov et al., 2006). However, the relationship between class I masers and shock tracers has not been well studied in a detailed statistical manner. 
The APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) is an unbiased 870 \(\mu\)m sub-millimetre continuum survey of the inner Galaxy (300\({}^{\circ}<l<\) 60\({}^{\circ}\), \(|b|<\) 1.5\({}^{\circ}\)) (Schuller et al., 2009). This survey provides a large inventory of dense molecular clumps (over \(\sim\)10 000, Contreras et al., 2013; Csengeri et al., 2014; Urquhart et al., 2014) that comprise a variety of early evolutionary stages related to high-mass star formation. The evolutionary stages of a large number (\(\sim\) 5000) of ATLASGAL clumps have been determined and their physical properties (e.g. distance, dust temperature, bolometric luminosity, clump mass and H\({}_{2}\) column density) have been studied in detail (Konig et al., 2017; Giannetti et al., 2017; Urquhart et al., 2018, 2022). This well-characterized sample allows us to carry out a targeted survey searching for new class I methanol masers and perform a statistical analysis of the properties of detected masers and their associated star-forming clumps at different evolutionary stages. Silicon monoxide (SiO) emission is an excellent tracer of shock interactions in star formation regions. The abundance of SiO is enhanced by up to six orders of magnitude in shocked regions associated with molecular outflows with respect to that measured in quiescent gas (e.g. Jimenez-Serra et al., 2008) through grain-grain collisions and sputtering, after which the released atomic silicon forms SiO (e.g. Martin-Pintado et al., 1992; Caselli et al., 1997). The SiO emission towards our targeted ATLASGAL clumps has been well studied by Csengeri et al. (2016). This provides us with a unique opportunity to statistically study the relationship between the class I methanol masers and the shocked gas traced by SiO emission.
In this work, we focus on the three class I methanol maser transitions by analyzing 3 mm line survey data obtained with the Institut de Radioastronomie Millimetrique (IRAM) 30 m telescope towards 408 ATLASGAL clumps. The aims of this work are: (i) to search for new class I maser sources; (ii) to statistically study the relationships between class I masers and a shock tracer; (iii) to investigate the properties of class I masers and their host clumps, as well as their dependence on evolutionary stage; and (iv) to provide strong constraints on the environments in which multiple class I masers occur. Also covered by our \(\lambda\)\(\sim\)3 mm data are six additional methanol transitions in which maser features belonging to class II have been detected. These include the 85.5, 86.6, 86.9, 104.1, 107 and 108 GHz lines. We will focus on these class II methanol maser transitions, especially the maser emission and absorption features at 107 GHz, towards the same 408 ATLASGAL sources in a subsequent paper. The structure of this paper is as follows. Our observations and data reduction are described in Sect. 2. The results are given in Sect. 3. In Sect. 4, we discuss the properties of the associated ATLASGAL clumps and SiO emission for our detections, and perform theoretical calculations to constrain the physical conditions of maser environments. A summary of this work and highlighted conclusions are presented in Sect. 5.

## 2 Observations and data reduction

The methanol transition data were extracted from unbiased spectral surveys with a frequency coverage of 83.8\(-\)115.7 GHz, obtained with the Eight MIxer Receiver (EMIR, Carter et al., 2012) at 3 mm (E090) of the IRAM 30 m telescope (see Csengeri et al., 2016 for details; project ids: 181-10, 049-11 and 037-12), from 2010 May to 2012 October.
Flux-limited samples of ATLASGAL sources (Schuller et al., 2009; Csengeri et al., 2014) were selected, with additional infrared selection criteria that ensure coverage of a large range of evolutionary stages and luminosities. The sample includes (i) the \(\sim\)120 brightest ATLASGAL clumps that are infrared bright, (ii) \(\sim\)50 bona-fide massive young stellar objects that obey the Lumsden et al. (2002) infrared color criteria for embedded, massive (proto)stars and are associated with ATLASGAL clumps, (iii) the \(\sim\)120 brightest ATLASGAL clumps that are GLIMPSE 8 \(\mu\)m dark but contain 24 \(\mu\)m sources, and (iv) the \(\sim\)120 brightest ATLASGAL clumps that are Spitzer 8 and 24 \(\mu\)m dark. The properties and observational parameters of these transitions are summarized in Table 1. The FFTS backend with a frequency resolution of \(\sim\)200 kHz was used, resulting in a channel spacing of \(\sim\)0.7 km s\({}^{-1}\) (multiply by 1.16 to convert to velocity resolution, see Klein et al. 2012). Table 2 provides information on the observed sources. The half-power beam width (HPBW) is \(\sim\)30'' at 84 GHz. The observations were performed in position switching mode with an offset of 10'' in right ascension and declination for the reference position. The data were processed using the GILDAS/CLASS package (Pety 2005). To characterize the spectra, we employed a Gaussian fit for each transition, source by source, and a multi-Gaussian fit in the case of multiple velocity components.

## 3 Results

We consider emission above the 3\(\sigma\) level as a detection, which corresponds to a typical detection threshold of \(\gtrsim\)0.6 Jy for our three methanol transitions. Table 3 gives an overview of detections for individual sources in the three class I maser lines. Overall, we detect methanol emission towards 282, 224 and 29 sources at 84 GHz (282/404, 70%), 95 GHz (224/408, 55%) and 104.3 GHz (29/404, 7%)\({}^{1}\), respectively.
Footnote 1: Four sources (G12.91\(-\)0.26B, G30.72\(-\)0.08, G30.73\(-\)0.06, G30.75\(-\)0.05) were only observed in the 95 GHz line, so the total number (408) of sources observed at 95 GHz is higher than for the other two transitions.

Figure 1 shows the number of sources detected in emission in the three class I methanol maser lines. A total of 212 sources (over 50% of the observing targets) are detected with methanol emission at both 84 and 95 GHz (see Fig. 1). Seventy sources show methanol detections only at 84 GHz and nine sources have detections only at 95 GHz. All 29 emission lines detected at 104.3 GHz in our sample have associated methanol emission at both 84 and 95 GHz, corresponding to 14% of the 212 sources (see Fig. 1). The decrease in the detection rate from the 84 GHz transition to the 104.3 GHz transition is readily explained by their relative energy levels and critical densities (see Table 1). Table 4 provides Gaussian fitting results of the detected lines for each source. Figure 1 shows example spectra of the three class I (and the other six class II methanol maser transitions at 3 mm) towards the first six ATLASGAL sources in our sample. The following subsections present the details of the detections of the widespread class I masers (at 84 and 95 GHz) and the rare class I masers (at 104.3 GHz), respectively.

### 84 and 95 GHz methanol emission

Figure 2 shows diverse 84 and 95 GHz spectral profiles, from narrow maser-like features (e.g. G06.22\(-\)0.61, see Fig. 2a) to broad (quasi-)thermal-like components (e.g. G10.47\(+\)0.03, see Fig. 2b), and a combination of the two (e.g. G32.02\(+\)0.06, see Fig. 2c). The line profiles of the two transitions can be quite similar (Fig. 2a-c), or can be very different (e.g. see Fig. 2d-f). G24.79\(+\)0.08 (see Fig. 2d) is an example showing different numbers of narrow features in the two lines. Emission from both transitions is not always detected towards a given source (e.g. G36.90\(-\)0.41 and G37.48\(-\)0.10 in Fig.
2e-f). Typical spectral maser profiles show strong and narrow emission features. However, with single-dish observations, we cannot definitively decide whether the observed sources show only maser emission or a combination of maser and thermal emission. In this work, an object showing narrow (\(\lesssim\)2 km s\({}^{-1}\), i.e. 3 channels) and relatively strong (\(\gtrsim\)1 Jy, i.e. 5\(\sigma\)) features (e.g. Fig. 2c for the 84 GHz line toward G32.02\(+\)0.06) is likely to be a maser source (labelled as "Y" in Table 3). Objects showing other features are considered maser candidates (labelled as "y"). A lack of emission above the 3\(\sigma\) level is rated as a non-detection and labelled as "N". For example, G06.22\(-\)0.61 shows narrow and strong maser features in both transitions (Fig. 2a). On the other hand, the 84 GHz emission in G36.90\(-\)0.41 (Fig. 2e) is broader than 2 km s\({}^{-1}\), so this source is considered a maser candidate. We note that narrower emission (\(<\)1 km s\({}^{-1}\)) could be better detected with higher spectral resolution. With our velocity resolution of \(\sim\)0.7 km s\({}^{-1}\), such narrow emission can be smoothed out over channels, appearing broader and weaker. For example, the narrow and strong 95 GHz emission component in G06.22\(-\)0.61 shows a line width of 0.8 km s\({}^{-1}\) and a peak flux density of 59 Jy at a velocity resolution of 0.11 km s\({}^{-1}\), significantly better than ours (Chen et al. 2013). Assuming that this maser component is not variable, our observations nevertheless yield a line width of 1.2 km s\({}^{-1}\) and a peak flux density of 38 Jy, indicating a constant integrated flux density, but blurred peak and line width parameters. Thus, using a line width of 2 km s\({}^{-1}\) as the criterion for identifying narrow maser features in this work is not arbitrary but, if anything, stringent.
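The effect of coarse channel spacing on a narrow maser line can be illustrated numerically: averaging a Gaussian line into \(\sim\)0.7 km s\({}^{-1}\) channels conserves the integrated flux density while lowering the peak and broadening the width. A minimal sketch in Python (the line parameters follow the G06.22\(-\)0.61 example above; the idealized boxcar channel response is an assumption):

```python
import numpy as np

# Narrow 95 GHz maser component (G06.22-0.61 values from the text):
# FWHM = 0.8 km/s, peak = 59 Jy, observed with ~0.7 km/s channels.
fwhm, peak, ch = 0.8, 59.0, 0.7
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

v_fine = np.arange(-10.0, 10.0, 0.01)            # finely sampled velocity axis (km/s)
line = peak * np.exp(-0.5 * (v_fine / sigma) ** 2)
area_true = line.sum() * 0.01                    # integrated flux density (Jy km/s)

# Average onto 0.7 km/s channels (idealized boxcar channel response).
edges = np.arange(-10.0, 10.0 + ch, ch)
idx = np.digitize(v_fine, edges) - 1
binned = np.array([line[idx == i].mean() for i in range(len(edges) - 1)])

area_binned = binned.sum() * ch                  # the area is conserved ...
peak_binned = binned.max()                       # ... but the peak is reduced
```

The binned peak comes out well below 59 Jy while the integrated flux density is essentially unchanged, mirroring the blurring described above (the real spectrometer response will differ in detail from this boxcar idealization).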
Based on this criterion, a total of 54 and 100 sources show maser features at 84 and 95 GHz, respectively. After cross-matching with the maser database\({}^{2}\) (Ladeyschikov et al. 2019), we find that 260 and 101 sources with methanol emission at 84 and 95 GHz are detected for the first time, respectively. Among them, fifty 84 GHz masers and twenty-nine 95 GHz masers are new discoveries. Of the 84 GHz masers with known distances, 78% are located within 5 kpc, 16% between 5 and 10 kpc, and 6% farther than 10 kpc. The situation is similar for the 95 GHz masers. Since an angular resolution of 26'' corresponds to a linear scale of 0.6 pc at a distance of 5 kpc, our following analysis primarily focuses on scales of \(\sim\)0.6 pc, which we refer to as the clump scale.

Footnote 2: [https://maserdb.net](https://maserdb.net)

Figure 1: Venn diagram showing the number of sources detected in the three class I maser lines. All detected 104.3 GHz emission lines in our sample have associated methanol emission at both 84 and 95 GHz.

Due to the close proximity of some sources to each other (within 30'', nearly the beam size at 3 mm), the observing beams may overlap and the detected emission could be from the same region. Thirteen pairs of sources are in close proximity, marked by a "\(\dagger\)" after their source names in Table 3. Among them, four pairs (G14.19\(-\)0.19 and G14.20\(-\)0.19; G14.63\(-\)0.57 and G14.63\(-\)0.58; G34.40\(+\)0.23 and G34.40\(+\)0.23A; G59.50\(-\)0.23 and G59.50\(-\)0.24) show narrow maser features at 95 GHz. However, their small number means that they make negligible contributions to the overall statistics. Interferometric observations reveal that the 95 GHz class I maser emission is stronger than the 84 GHz maser emission in the six maser spots detected towards IRAS16547\(-\)4247 (see Figs. 2 and 3 in Voronkov et al. 2006).
Our observations allow us to determine in a statistical manner, using single-dish data, whether the 84 or 95 GHz masers are brighter. Figure 3 shows the histogram of the peak flux density ratios (\(S_{\rm pk,95}/S_{\rm pk,84}\)) of the 95 and 84 GHz methanol emission. The median and mean peak flux density ratios for the whole sample of 212 sources showing both emission lines are 0.65 and 0.83, respectively. Half (107/212) of the whole sample host maser candidates in both transitions, and 92% of these sources have stronger 84 GHz emission than their 95 GHz counterparts. The \(S_{\rm pk,95}/S_{\rm pk,84}\) ratios for these 107 sources range from 0.2 to 4.0 with a median value of 0.5 and a mean value of 0.6. If thermal emission dominates the methanol detection in these sources, it is reasonable that the 84 GHz emission is stronger than the 95 GHz emission, since the upper level energy and critical density of the former are much lower than those of the latter. On the other hand, forty sources host masers in both transitions, and show a different distribution of peak flux density ratios compared with the sources hosting maser candidates in both transitions in Fig. 3. Among them, we find that 32 (80%) sources show stronger 95 GHz maser emission. Their \(S_{\rm pk,95}/S_{\rm pk,84}\) ratios range from 0.3 to 3.1 with a median value of 1.2 and a mean value of 1.3. This suggests that the 95 GHz methanol masers are usually stronger than their 84 GHz maser counterparts. This trend differs from what we find for the maser candidates, probably because the 84 GHz emission of the candidates in Fig. 3 contains a larger contribution of thermal origin.

### 104.3 GHz methanol emission

Among the 29 methanol sources detected at 104.3 GHz, four possess narrow and relatively strong maser characteristics (see Fig. 4), and the other 25 sources are regarded as maser candidates. The four 104.3 GHz masers are detected for the first time. Our work increases the number of known 104.3 GHz masers from five to nine.
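The isotropic methanol line luminosities listed in Table 4 (Column 8) follow from the total integrated flux density via the standard relation \(L = 4\pi d^{2}\,(\nu_{0}/c)\int S\,{\rm d}V\). A minimal sketch of this conversion in Python (the 3 kpc distance below is purely illustrative, not a measured source distance):

```python
import math

JY = 1e-26          # W m^-2 Hz^-1
PC = 3.0857e16      # m
C = 2.9979e8        # m/s
L_SUN = 3.828e26    # W

def iso_line_luminosity(int_flux_jy_kms, freq_ghz, dist_kpc):
    """L = 4*pi*d^2 * (nu0/c) * int S dV, returned in solar luminosities."""
    d = dist_kpc * 1e3 * PC
    # Jy km/s -> W/m^2: convert km/s to Hz via the Doppler relation dnu = nu0*dv/c
    flux = int_flux_jy_kms * 1e3 * JY * (freq_ghz * 1e9 / C)
    return 4.0 * math.pi * d**2 * flux / L_SUN

# 84 GHz component of G06.22-0.61 (25.47 Jy km/s) at an assumed 3 kpc:
L = iso_line_luminosity(25.47, 84.521, 3.0)      # of order 1e-5 L_sun
```

The quadratic dependence on distance is why luminosities in Table 4 span several orders of magnitude even for similar integrated flux densities.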
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Source & Line & \(V_{\rm pk}\) & \(\Delta V\) & \(S_{\rm pk}\) & \(S\) & \(\int S\,{\rm d}V\) & \(L_{\rm CH,OH}\) \\ & (GHz) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (Jy) & (Jy km s\({}^{-1}\)) & (Jy km s\({}^{-1}\)) & (L\({}_{\odot}\)) \\ \hline G06.22\(-\)0.61 & 84 & 19.61(0.02) & 1.57(0.08) & 15.20(0.42) & 25.47(0.82) & 25.47 & 2.0\(\times 10^{-5}\) \\ G06.22\(-\)0.61 & 95 & 19.65(0.00) & 1.21(0.01) & 38.00(0.25) & 49.01(0.33) & 49.01 & 4.4\(\times 10^{-5}\) \\ G08.05\(-\)0.24 & 84 & 38.89(0.12) & 2.32(0.35) & 2.09 (0.26) & 5.16 (0.56) & 5.16 & 1.1\(\times 10^{-5}\) \\ G08.05\(-\)0.24 & 95 & 38.48(0.07) & 1.34(0.17) & 1.59 (0.19) & 2.27 (0.26) & 2.27 & 5.6\(\times 10^{-6}\) \\ G08.68\(-\)0.37 & 84 & 37.98(0.03) & 4.54(0.09) & 9.06 (0.23) & 43.84(0.66) & 43.84 & 7.5\(\times 10^{-5}\) \\ G08.68\(-\)0.37 & 95 & 38.37(0.10) & 3.91(0.25) & 2.02 (0.18) & 8.41 (0.44) & 8.41 & 1.6\(\times 10^{-5}\) \\ G08.71\(-\)0.41 & 84 & 39.08(0.08) & 3.29(0.21) & 2.30 (0.18) & 8.05 (0.43) & 8.05 & 1.4\(\times 10^{-5}\) \\ G08.71\(-\)0.41 & 95 & 39.64(0.26) & 2.76(0.63) & 0.58 (0.16) & 1.69 (0.32) & 1.69 & 3.2\(\times 10^{-6}\) \\ G10.15\(-\)0.34 & 84 & 8.97 (0.14) & 4.66(0.34) & 1.32 (0.15) & 6.56 (0.42) & 6.56 & 4.9\(\times 10^{-6}\) \\ G10.17\(-\)0.36A & 84 & 14.35(0.55) & 5.64(1.87) & 0.54 (0.18) & 3.75 (0.72) & 3.75 & 2.8\(\times 10^{-6}\) \\ G10.17\(-\)0.36B & 84 & 15.63(0.48) & 5.85(0.94) & 0.55 (0.19) & 3.43 (0.54) & 3.43 & 2.5\(\times 10^{-6}\) \\ G10.21\(-\)0.30 & 84 & 11.45(0.21) & 6.79(0.79) & 1.71 (0.18) & 12.34(1.00) & 17.52 & 1.3\(\times 10^{-5}\) \\ G10.21\(-\)0.30 & 84 & 12.30(0.06) & 1.70(0.24) & 2.86 (0.18) & 5.18 (0.99) & \\ G10.21\(-\)0.30 & 95 & 11.62(0.13) & 4.27(0.36) & 1.39 (0.11) & 6.35 (0.45) & 11.12 & 9.3\(\times 10^{-6}\) \\ G10.21\(-\)0.30 & 95 & 12.53(0.03) & 1.34(0.07) & 3.35 (0.11) & 4.78 (0.40) & & \\ \hline \hline \end{tabular} _Note 4._ Column 1 gives the source name. 
Column 2 gives the frequency of each methanol transition. Columns 3–6 give the peak velocity \(V_{\rm pk}\), the FWHM line width \(\Delta V\), the peak flux density \(S_{\rm pk}\), and the integrated flux density of each component estimated from Gaussian fits. Formal errors from the Gaussian fits are given in parentheses. Column 7 gives the total integrated flux density for each source and each methanol line, which is the sum of the integrated flux densities over all velocity components for a given transition. Column 8 gives the isotropic luminosity of methanol emission estimated from the total integrated flux density. Only a portion of the table is shown here for guidance; the full table is available at the CDS. \end{table} Table 4: Observational results of class I CH\({}_{3}\)OH transitions.

Figure 2: Examples of diverse line profiles of observed class I methanol emission at 84 (upper panel) and 95 GHz (lower panel), depicting narrow maser-like features (in panel a), broad thermal-like features (in panel b) and a combination of the two (in panel c). Line profiles of the two transitions are sometimes quite similar as shown in panels (a) to (c), and are sometimes very different as shown in panels (d) to (f). The frequency and the corresponding quantum numbers of each transition are shown in the upper right corner of each panel for the six sources. The horizontal magenta dashed lines in G36.90\(-\)0.41 and G37.48\(-\)0.10 represent the 3\(\sigma\) noise level for the undetected transitions.

As can be seen from Fig. 4, the 104.3 GHz maser emission generally aligns in velocity with the 84 and/or 95 GHz maser emission, indicating that the emission in all three lines arises from the same source. However, compared to the 84 and 95 GHz spectra, the 104.3 GHz maser spectra appear to show fewer components and narrower line widths, which is consistent with a previous study (Voronkov et al., 2007).
An explanation may be that the spatial distribution of the 104.3 GHz masers is not as extensive as that of the 84 and 95 GHz masers. This scenario is supported by interferometric observations towards IRAS16547\(-\)4247, where the rarer class I methanol maser emission in the 9.9, 25 and 104.3 GHz lines is confined to only one of the six 84 and 95 GHz maser spots, which is associated with the brightest shocked H\({}_{2}\) 2.12 \(\mu\)m emission (see Fig. 1 in Voronkov et al., 2006). Both G10.34\(-\)0.14 and G16.58\(-\)0.05, which show 104.3 GHz maser emission, have associated EGOs (Cyganowski et al., 2008), which are believed to be caused by shocked molecular gas in active outflows (see above). Towner et al. (2017) detected class I methanol maser emission in the \(5_{2}-5_{1}\,E\) line at 25 GHz towards these two sources. We cross-match our sample with the EGO sample associated with 25 GHz methanol maser observations (Towner et al., 2017). We find three sources (G12.68\(-\)0.18, G14.33\(-\)0.64 and G22.04\(+\)0.22) showing 25 GHz masers and 104.3 GHz maser candidates, and five sources (G10.29\(-\)0.12, G11.92\(-\)0.61, G12.90\(-\)0.03, G35.03\(+\)0.35 and G45.47\(+\)0.05) showing 25 GHz masers but no 104.3 GHz detection at the current sensitivity. This indicates that the 104.3 GHz masers require more energetic conditions than the 25 GHz masers. The four detected 104.3 GHz maser sources are associated with nearby 6.7 GHz class II masers (e.g. Green et al., 2010; Breen et al., 2015; Hu et al., 2016), and three of them, with the exception of G28.20\(-\)0.05, also have a 12.2 GHz class II maser association (e.g. Breen et al., 2014, 2016; Song et al., 2022). Although G28.20\(-\)0.05 does not have an associated 12.2 GHz maser (Breen et al., 2010), it has been classified as a source in a later evolutionary stage due to the presence of an Hii region (Urquhart et al., 2022).
G16.58\(-\)0.05 and G27.37\(-\)0.17 are also classified as being in the Hii region stage, while G10.34\(-\)0.14 is classified as a PDR+Embedded source because of the complex infrared background (Urquhart et al., 2022). Voronkov et al. (2006) detected a 104.3 GHz maser toward IRAS16547\(-\)4247 but no 6.7 GHz class II maser, suggesting that this source is either too young or too old to have a 6.7 GHz maser. Recently, Urquhart et al. (2022) classified its host dust continuum source (AGAL343.128\(-\)0.062) as being in the Hii region stage. Overall, these facts support the view that the 104.3 GHz masers are likely associated with dust clumps that may already host Hii regions, following the classification of Urquhart et al. (2022).

### Systemic velocity

Several studies have revealed that class I methanol masers are a good tracer of systemic velocities (e.g. Jordan et al., 2017). In this work, we adopt \(V_{\rm LSR}\) from Urquhart et al. (2022) as the ATLASGAL clump systemic velocity to study the relationship between the velocity of class I methanol masers and the systemic velocity. Table 5 gives the numbers of maser sources and of all methanol detections showing single and multiple components in our three methanol transitions, as well as the fractions of masers (i.e. the number of masers divided by the number of all detections). It is clear that the majority of the methanol sources detected at 84, 95 and 104.3 GHz show a single component, while the detection rate of masers in multi-component sources is higher than that in single-component sources. This is expected, because multiple components can be present, distributed in different spots within a telescope beam. Figure 5 shows the histograms of the peak velocity of methanol emission with respect to the systemic velocity (\(V_{\rm CH_{3}OH}-V_{\rm sys}\)) at 84 GHz, 95 GHz and 104.3 GHz. Apart from some outliers (discussed later in this section), the distributions of the velocity difference show Gaussian profiles for both the 84 and the 95 GHz transitions.
The \(V_{\rm CH_{3}OH}-V_{\rm sys}\) values range from \(-\)3.83 km s\({}^{-1}\) to 3.31 km s\({}^{-1}\) for the 84 GHz masers, and from \(-\)5.09 km s\({}^{-1}\) to 4.23 km s\({}^{-1}\) for the 95 GHz masers. The mean and median velocity difference of the 84 GHz (95 GHz) maser sample is 0.17 km s\({}^{-1}\) (0.12 km s\({}^{-1}\)), and 0.03 km s\({}^{-1}\) (0.01 km s\({}^{-1}\)), with a standard error of 0.19 km s\({}^{-1}\) (0.16 km s\({}^{-1}\)). For all methanol detections (i.e. including masers and maser candidates) at 84 and 95 GHz, the mean and median values are similar to those of our maser samples. The standard deviations (1\(\sigma\)) of the velocity difference are 1.33 and 0.92 km s\({}^{-1}\) for the methanol masers and all detections at 84 GHz, and 1.54 and 1.40 km s\({}^{-1}\) for the masers and all detections at 95 GHz. All of the 84 GHz masers and 99% of the 95 GHz masers (with the only exception of G23.44\(-\)0.18, which shows the greatest velocity difference) are consistent with the systemic velocity within 3\(\sigma\).

Figure 3: Histogram of the peak flux density ratio of 95 and 84 GHz emission. The grey filling represents the whole sample with both 95 and 84 GHz detections, the pink line shows the sources hosting both 84 and 95 GHz masers, and the black line shows the sources hosting maser candidates in both transitions. The red and blue dashed lines depict the median and mean values of the whole sample.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{84 GHz} & \multicolumn{2}{c}{95 GHz} & \multicolumn{2}{c}{104.3 GHz} \\ \cline{2-7} & single & multiple & single & multiple & single & multiple \\ \hline Masers & 16 & 38 & 35 & 65 & 2 & 2 \\ All detections & 204 & 78 & 135 & 89 & 22 & 7 \\ Fraction of masers & 8\% & 49\% & 26\% & 73\% & 9\% & 29\% \\ \hline \hline \end{tabular} \end{table} Table 5: Single or multiple components of the three methanol transitions.
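The 3\(\sigma\) consistency check used above amounts to flagging sources whose velocity offset deviates from the sample mean by more than three sample standard deviations. A minimal sketch with synthetic offsets (the numbers below are illustrative, not the measured sample):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic V_CH3OH - V_sys offsets (km/s): typical ~1.5 km/s scatter plus one outlier
offsets = np.append(rng.normal(0.1, 1.5, 99), 12.0)

sigma = offsets.std(ddof=1)
consistent = np.abs(offsets - offsets.mean()) <= 3.0 * sigma
n_outliers = int((~consistent).sum())            # the 12 km/s source is flagged
```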
The fact that, overall, the velocities of the 84 and 95 GHz methanol masers agree with the systemic velocity confirms that the 84 and 95 GHz methanol maser emission traces the systemic velocity well (e.g. Jordan et al., 2017). There are some outliers (the 84 and 95 GHz methanol emission in G43.17+0.01, and the 95 GHz maser emission in G31.02+0.26) showing large (\(>10\) km s\({}^{-1}\)) velocity differences relative to the systemic velocity listed by Urquhart et al. (2022). However, after examining our own 3-mm spectral line survey data, we found that the velocity of the methanol emission is consistent with that of the C\({}^{18}\)O (1-0) emission. G43.17+0.01 is located in the massive and luminous W49 region, so complex velocity profiles are not surprising. Both the 84 and 95 GHz lines in this source show two blended broad (\(\Delta V>4\) km s\({}^{-1}\)) components. In the case of the 84 GHz line, the stronger one peaks at 13 km s\({}^{-1}\), and the weaker and broader feature peaks at 7.3 km s\({}^{-1}\) (4.6 km s\({}^{-1}\) in the case of the 95 GHz transition), which is similar to the double-peaked profile of the C\({}^{18}\)O (1-0) emission, whose two components peak at 4.4 and 11.9 km s\({}^{-1}\). In G31.02+0.26, only a single 95 GHz maser component is detected. Its velocity coincides with that of the second strongest C\({}^{18}\)O (1-0) emission component (at \(\sim\)78 km s\({}^{-1}\)) but not with that of the strongest one (at \(\sim\)96 km s\({}^{-1}\)). Therefore, the consistency between the maser velocities and systemic velocities still holds. Previous observations of 84 and 95 GHz masers towards DR21(OH) suggest that they form in the interface between outflows and ambient dense gas (Batrla & Menten, 1988; Plambeck & Menten, 1990). The masers are thus expected at the cloud systemic velocities. Our statistical results indicate that this scenario should apply to most class I methanol masers in the Milky Way.
Similar to class I methanol masers, water masers are also believed to be caused by collisional pumping in star formation regions (e.g. Elitzur, 1992). However, water masers show a different behavior in that they can exhibit multiple maser features within a wide velocity range; in extreme cases such as W49N, the velocity of water masers deviates from the systemic velocity by more than 250 km s\({}^{-1}\) (e.g. Morris, 1976; Kramer et al., 2018), and they are thought to trace the motions of gas bullets powered by stellar winds or jets emitted from massive YSOs (e.g. Sanna et al., 2010). Due to the limited number of methanol masers detected at 104.3 GHz, the velocity differences shown in Fig. 5 do not follow a Gaussian profile. It is worth noting that the four detected 104.3 GHz methanol masers are all clearly redshifted with respect to the systemic velocity (see also Fig. 4). G27.37\(-\)0.17 shows the maximum offset from the systemic velocity, of 3.3 km s\({}^{-1}\), and G28.20\(-\)0.05 shows the minimum offset, of 1.7 km s\({}^{-1}\). A faint and broad feature at 104.3 GHz, which could be of thermal origin and is aligned with the systemic velocity, is also detected in G10.34\(-\)0.14 and G28.20\(-\)0.05 (see Figs. 4a and 4d). All four 104.3 GHz maser sources have nearby 22.2 GHz H\({}_{2}\)O maser associations (e.g. Breen & Ellingsen, 2011; Walsh et al., 2014).

## 4 Discussion

### The relationship between class I methanol masers and SiO emission

SiO is one of the best tracers of shocks in star formation regions. The abundance of SiO is enhanced in shocked regions (e.g. Jimenez-Serra et al., 2008) and the emission profile varies with shock velocity. In low-velocity shocks, the SiO emission shows a narrow profile (with a FWHM line width of \(\leq\) 1-2 km s\({}^{-1}\), Jimenez-Serra et al., 2009), while the SiO profiles are broader in high-velocity shocks. Cross-matching with the SiO data of Csengeri et al.
(2016), we find that 9 sources showing narrow and strong methanol maser characteristics are not associated with SiO emission. Among these SiO non-detections, four sources (G10.75+0.02, G13.87+0.28, G14.18\(-\)0.53 and G17.64+0.15) harbor 84 GHz masers and seven sources (G08.05\(-\)0.24, G10.75+0.02, G13.87+0.28, G27.55\(-\)0.94, G31.02+0.26, G31.10+0.26 and G37.48\(-\)0.10) harbor 95 GHz masers. The abundance of gas phase methanol may be enhanced by low-velocity shocks in which the high-velocity shock tracer SiO is harder to detect. G10.75+0.02, G14.18\(-\)0.53 and G31.02+0.26 are classified as sources in a quiescent stage of evolution (Urquhart et al., 2022), suggesting that class I methanol masers could be a unique signpost of protostellar activity in extremely embedded objects at the earliest evolutionary stage.

Figure 4: Spectra of the 84, 95, 104.3 GHz lines (from top to bottom) for 4 sources showing 104.3 GHz methanol maser features. The vertical magenta dashed lines represent the systemic velocity adopted from Urquhart et al. (2022). The frequency and the corresponding quantum numbers of each transition are shown in each panel.

Figure 6 shows the 84, 95 and 104.3 GHz integrated intensities of methanol masers and maser candidates plotted against the integrated SiO (2-1) and SiO (5-4) intensities. Their linear fitting results in log-log form for the three methanol transitions are summarised in Table 6. Given that the correlation coefficients and slopes of the fitting lines are almost identical within the errors, we infer that there is no significant difference between masers and maser candidates in the 84 and 95 GHz lines. On the other hand, the positive slopes and correlation coefficients for the two lines indicate that stronger class I methanol maser emission is typically associated with higher SiO integrated intensity.
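The fits summarised in Table 6 are ordinary least-squares fits of the logarithm of the methanol integrated intensity against the logarithm of the SiO integrated intensity, together with the Pearson correlation coefficient \(r\) and its \(p\)-value. A sketch on synthetic data (the slope, scatter, and sample size below are illustrative, not the survey values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic integrated intensities in log10(Jy km/s) following a linear relation
log_sio = rng.uniform(0.0, 2.0, 60)
log_ch3oh = 1.1 * log_sio - 0.3 + rng.normal(0.0, 0.2, 60)

# least-squares fit in log-log space, with Pearson r and p-value
res = stats.linregress(log_sio, log_ch3oh)
slope, r, p = res.slope, res.rvalue, res.pvalue
```

A slope near unity in log-log space means the methanol intensity scales roughly linearly with the SiO intensity, as the 84 and 95 GHz rows of Table 6 suggest.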
Despite the small number of sources showing 104.3 GHz emission, the integrated intensities of the 104.3 GHz emission also appear to be linearly correlated with the integrated SiO (2-1) and SiO (5-4) intensities. This is consistent with previous studies reporting similarly positive correlations between class I methanol masers at 36, 44 and 84 GHz and SiO emission (Jordan et al., 2017; Breen et al., 2019). In order to minimise the contribution of thermal emission to a maser source, we pick out the 95 GHz maser sources and maser candidates with a single component, and perform least-squares fits separately. Figure 7 shows no significant difference between the least-squares fitting results for all masers and those for masers with a single component, suggesting that the potential contribution of thermal emission is negligible. Hence, the scaling relationships support the close link between the three class I methanol masers and SiO emission. SiO emission with high-velocity wings indicates shocked gas due to fast material ejection. Csengeri et al. (2016) detected high-velocity SiO (2-1) wings towards 167 sources in our sample, while 120 sources do not show such wings. Figure 8 reveals that there are more detections and higher detection rates of the three class I methanol transitions (as well as of methanol maser emission) towards sources showing wings than towards sources without wings. The uncertainties of the detection rates in this work are calculated under the assumption that the detection rates follow a binomial distribution, that is \(\sqrt{p\times(1-p)/n}\), where \(p\) is the detection rate and \(n\) is the sample size. Figure 9 shows that methanol masers have higher integrated intensities in the sources showing SiO (2-1) wings than in the sources without SiO (2-1) wings. This indicates that brighter class I methanol masers are produced by higher shock velocities.
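The detection-rate uncertainty quoted above, \(\sqrt{p(1-p)/n}\), is straightforward to evaluate; for example, for the 84 GHz detections (282 out of 404 sources):

```python
import math

def detection_rate(n_det, n_tot):
    """Detection rate and its binomial uncertainty sqrt(p*(1-p)/n)."""
    p = n_det / n_tot
    return p, math.sqrt(p * (1.0 - p) / n_tot)

p84, e84 = detection_rate(282, 404)   # ~0.70 +/- 0.02
```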
In order to further investigate this relationship, we adopt the full width at zero power (FWZP) for the following analysis. Beuther and Sridharan (2007) and Li et al. (2019) divided their observed SiO spectra by FWZP into several regimes, and considered that sources with FWZP \(>\)20 km s\({}^{-1}\) indicate gas at relatively high velocities with respect to the ambient gas. We use the FWZP of SiO (2-1), which is taken from Csengeri et al. (2016), to roughly characterise the shock speed, and consider that a broader FWZP traces a faster shock speed. We perform a Kolmogorov-Smirnov (K-S) test to study the FWZP distributions for the sources with and without methanol detection. Figure 10 shows that the FWZP distributions of SiO emission for sources with and without methanol detection are significantly different, indicated by a very small \(p\)-value (\(\ll\) 0.0013) from the K-S test. This difference is seen in all three of our methanol transitions. Moreover, the sources with methanol detection show broader FWZPs of SiO emission than sources without methanol detection. The median value of the FWZP of SiO emission for sources showing 84 or 95 GHz emission is about 30 km s\({}^{-1}\), while the SiO emission generally exhibits a larger FWZP, with a median of about 40 km s\({}^{-1}\), in sources detected in the 104.3 GHz transition. This suggests that the excitation of 104.3 GHz methanol masers requires faster shock velocities than that of the 84 and 95 GHz masers. Figure 11 shows that the differences in SiO (2-1) FWZP between masers and maser candidates are not statistically significant at either 84 or 95 GHz. This insignificant difference indicates that the shock speeds at which maser and maser candidate emission are created are similar on clump scales. It may also reflect our conservative classification, under which a significant fraction of the maser candidates may in fact host masers.
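The two-sample K-S test used here compares the empirical FWZP distributions of clumps with and without methanol detection. A sketch with synthetic FWZP samples (the means, widths, and sample sizes are illustrative, not the survey values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# synthetic SiO (2-1) FWZP samples (km/s): detections skew broader
fwzp_det = rng.normal(32.0, 10.0, 200).clip(min=2.0)
fwzp_non = rng.normal(18.0, 8.0, 120).clip(min=2.0)

# two-sample K-S test: a tiny p-value means the distributions differ
stat, pval = stats.ks_2samp(fwzp_det, fwzp_non)
```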
Figure 5: Histograms of the peak velocity of methanol emission with respect to the systemic velocity (taken from Urquhart et al., 2022) for the 84 GHz (top), 95 GHz (middle) and 104.3 GHz (bottom) lines. The orange filled and purple hollow histograms represent methanol masers and all methanol detections (including masers and maser candidates), respectively. The black solid and dashed lines depict the Gaussian fitting results for the masers and all methanol detections, respectively.

Figure 12 shows the detection numbers and detection rates of the three methanol transitions in different FWZP ranges of the SiO (2-1) emission. For the 84 and 95 GHz transitions, the detection numbers first increase with increasing FWZP of SiO emission and peak in the FWZP range of 20-30 km s\({}^{-1}\), then decrease as the FWZP increases further. For FWZP \(<\)10 km s\({}^{-1}\), few sources show methanol emission, and only two out of four 95 GHz detections show maser features. For FWZP \(>\)30 km s\({}^{-1}\), the numbers of detected 84 and 95 GHz lines gradually decrease, while the detection rates for both lines keep rising (owing to the smaller number of sources), reaching 100% for FWZP \(>\)50 km s\({}^{-1}\). For the 104.3 GHz transition, the detection rate also shows a tendency to increase with increasing FWZP for FWZP \(<\) 50 km s\({}^{-1}\). For FWZP \(>\) 50 km s\({}^{-1}\), the detection rate drops slightly. This indicates that faster shock speeds lead to higher detection rates of methanol emission, as long as the shocks have not yet completely destroyed the SiO and methanol molecules. This is consistent with astrochemical modeling results in which the methanol abundance is correlated with shock velocity (e.g., see Fig. 3 in Holdship et al. 2017). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Parameters} & \multicolumn{3}{c}{Maser} & \multicolumn{3}{c}{Detection (i.e.
maser+candidate)} \\ \cline{2-7} & slope & \(r\) & \(p\)-value & slope & \(r\) & \(p\)-value \\ \hline \multicolumn{7}{c}{84 GHz} \\ \hline \(\int S_{\rm sd}dV\) vs. \(\int S_{\rm SO2-1}dV\) & 1.11\(\pm\)0.09 & 0.88 & 7.7\(\times\)10\({}^{-17}\) & 1.07\(\pm\)0.04 & 0.87 & 3.6\(\times\)10\({}^{-77}\) \\ \(\int S_{\rm sd}dV\) vs. \(\int S_{\rm SO5-sd}dV\) & 0.66\(\pm\)0.17 & 0.68 & 9.4\(\times\)10\({}^{-4}\) & 0.61\(\pm\)0.06 & 0.70 & 5.9\(\times\)10\({}^{-17}\) \\ \(\int S_{\rm sd}dV\) vs. FWZP of SiO (2–1) & 1.72\(\pm\)0.33 & 0.61 & 3.2\(\times\)10\({}^{-6}\) & 1.41\(\pm\)0.13 & 0.56 & 3.2\(\times\)10\({}^{-22}\) \\ \hline \(\int S_{\rm sgd}V\) vs. \(\int S_{\rm SO2-1}dV\) & 0.97\(\pm\)0.09 & 0.75 & 1.2\(\times\)10\({}^{-17}\) & 0.99\(\pm\)0.07 & 0.72 & 3.8\(\times\)10\({}^{-34}\) \\ \(\int S_{\rm sgd}V\) vs. \(\int S_{\rm SO5-sd}dV\) & 0.85\(\pm\)0.14 & 0.65 & 2.6\(\times\)10\({}^{-7}\) & 0.85\(\pm\)0.08 & 0.74 & 4.8\(\times\)10\({}^{-19}\) \\ \(\int S_{\rm sgd}V\) vs. FWZP of SiO (2–1) & 1.14\(\pm\)0.23 & 0.46 & 4.2\(\times\)10\({}^{-6}\) & 1.13\(\pm\)0.18 & 0.41 & 8.7\(\times\)10\({}^{-10}\) \\ \hline \multicolumn{7}{c}{104.3 GHz} \\ \hline \(\int S_{\rm 104.3}dV\) vs. \(\int S_{\rm SO2-1}dV\) & 0.70\(\pm\)0.02 & 0.999 & 9.4\(\times\)10\({}^{-4}\) & 0.73\(\pm\)0.32 & 0.53 & 2.8\(\times\)10\({}^{-3}\) \\ \(\int S_{\rm 104.3}dV\) vs. \(\int S_{\rm SO5-sd}dV\) & 0.81\(\pm\)0.25 & 0.92 & 8.3\(\times\)10\({}^{-2}\) & 0.65\(\pm\)0.17 & 0.63 & 8.3\(\times\)10\({}^{-4}\) \\ \(\int S_{\rm 104.3}dV\) vs. FWZP of SiO (2–1) & 1.21\(\pm\)0.52 & 0.85 & 0.15 & 0.36\(\pm\)0.51 & 0.14 & 0.49 \\ \hline \end{tabular} _Note 5._ Columns 2–7 give the slopes of the linear fitting results, Pearson correlation coefficients \(r\), and \(p\)-value for masers and all methanol detections. \end{table} Table 6: Summary of linear fitting results in log-log form of methanol integrated intensity versus the SiO intensity and the FWZP of SiO (2–1) emission for the 84, 95 and 104.3 GHz transitions. 
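The log-log fits reported in Table 6 can be reproduced along the following lines (a minimal sketch; the array contents are illustrative):

```python
import numpy as np
from scipy import stats

def loglog_fit(x, y):
    """Least-squares fit of log10(y) against log10(x); returns the slope,
    the Pearson correlation coefficient r, and its p-value."""
    res = stats.linregress(np.log10(x), np.log10(y))
    return res.slope, res.rvalue, res.pvalue
```

Applied to methanol integrated intensities versus SiO integrated intensities (or SiO FWZP), this yields the slope, \(r\), and \(p\)-value columns of the table.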
Figure 6: Methanol integrated intensity as a function of SiO integrated intensity. Top panels (a, b, c) show the integrated 84, 95 and 104.3 GHz methanol emission versus SiO (2-1) integrated intensity, respectively. Bottom panels (d, e, f) show the integrated 84, 95 and 104.3 GHz methanol emission versus SiO (5-4) integrated intensity, respectively. The orange and purple triangles represent the clumps hosting masers and maser candidates, respectively. The orange solid lines and black dashed lines depict the least-squares fits for methanol masers and for all methanol detections, respectively.

We further quantify the relationship between methanol emission and the SiO (2-1) FWZP in Figure 13, which shows upward trends of the 84, 95 and 104.3 GHz methanol integrated intensities with increasing FWZP of the SiO (2-1) emission. This further supports that the integrated intensities of class I methanol emission increase with increasing shock speed. The linear fitting results in log-log form are summarised in Table 6.

### Physical properties on a clump scale

We make use of the latest catalog of ATLASGAL physical clump properties (Urquhart et al., 2022) to study the relationship between the physical parameters of clumps and associated class I methanol (maser) emission detections. The clump parameters used for the analysis include the bolometric luminosity, \(L_{\rm bol}\), the FWHM clump mass, \(M_{\rm fwhm}\), the luminosity-to-mass ratio, \(L_{\rm bol}/M_{\rm fwhm}\), the dust temperature, \(T_{\rm dust}\), the mean H\({}_{2}\) FWHM volume density, \(n_{\rm fwhm}\)(H\({}_{2}\)), as well as the peak H\({}_{2}\) column density, \(N\)(H\({}_{2}\)) (Urquhart et al., 2018).
The FWHM clump mass is calculated using the integrated 870-\(\mu\)m flux density within the 50 per cent contour of the peak of the ATLASGAL continuum emission, and the mean H\({}_{2}\) FWHM volume density is calculated using \(M_{\rm fwhm}\) and the FWHM clump size, which is determined by the number of pixels within the 50 per cent contour (see Equations 3 and 5 in Urquhart et al., 2022). These FWHM parameters eliminate observational bias due to source evolution, as demonstrated by Billington et al. (2019). Figure 14 presents the cumulative distributions of the clump physical properties for sources with and without 84 GHz methanol detection. The K-S tests reveal that, except for the FWHM clump mass, the properties of clumps with and without 84 GHz methanol detection are significantly different. The situation is similar for our sample of 95 and 104.3 GHz sources; their cumulative distributions are shown in Figs. 10 and 11. Previous research towards 6.7 GHz class II methanol masers (Billington et al., 2019) and 95 GHz class I methanol masers (Ladeyschikov et al., 2020) shows a very similar trend, in that only the FWHM clump mass shows no difference between maser detections and non-detections, or for the entire sample. Table 7 provides a statistical summary of the physical properties for the maser subsample, the detection subsample (i.e. maser and maser candidate) and the non-detections for the 84, 95 and 104.3 GHz transitions. It is clear that the \(L_{\rm bol}\), \(L_{\rm bol}/M_{\rm fwhm}\), \(T_{\rm dust}\), \(n_{\rm fwhm}\)(H\({}_{2}\)), and \(N\)(H\({}_{2}\)) of clumps with methanol detection typically show higher values (i.e. minimum, maximum, median and mean) than those without a detection for all three methanol transitions, as also suggested by Figs. 14, 10 and 11. For the clumps with methanol masers and those with any methanol detection, meanwhile, the values of these properties are comparable.
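The per-subsample statistics compiled in Table 7 (minimum, maximum, median, mean and \(\sigma\) of each clump property) amount to the following sketch; the function name is illustrative:

```python
import numpy as np

def property_stats(values):
    """Min, max, median, mean and sample standard deviation of one clump
    property (e.g. log10 L_bol) for one subsample (maser / detection /
    non-detection)."""
    v = np.asarray(values, dtype=float)
    return {"min": float(v.min()), "max": float(v.max()),
            "median": float(np.median(v)),
            "mean": float(v.mean()), "sigma": float(v.std(ddof=1))}
```

Running this over each property and each subsample reproduces the columns of the table.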
Figures 12 and 13 show the results of K-S tests on the cumulative distributions of the clump physical properties for sources with masers and with maser candidates for the 84 and 95 GHz transitions. The \(p\)-values of the K-S tests for each property are not smaller than 0.0013, indicating that the two subsamples are drawn from the same distribution.

Figure 7: 95 GHz methanol integrated intensity versus SiO (2-1) integrated intensity. The orange and purple dots represent the clumps hosting masers and maser candidates, respectively. The orange and purple circles mark the sources with a single component. The orange and purple dashed lines depict the least-squares fits for methanol masers and maser candidates with a single component, respectively. The orange solid line represents the least-squares fit for methanol masers, as in Fig. 6b.

Figure 8: Distribution of detection number and detection rate of the three methanol transitions for SiO (2-1) emission with or without wings. The labels for the detection numbers are on the left side, and those for the detection rates are on the right side. The red, blue and green bins give the detection numbers of methanol emission at 84, 95 and 104.3 GHz, respectively, with the filled and hatched parts representing masers and maser candidates. The red, blue and green triangles indicate the detection rates of methanol emission at 84, 95 and 104.3 GHz.

Figure 9: Box plots of the integrated intensity for the 84 (left) and 95 (right) GHz masers, divided into two groups with and without SiO (2-1) wings. In each box plot, the horizontal colored line represents the median value, the box spans the interquartile range between the 25th and 75th percentiles, and the vertical lines (the 'whiskers') show the ranges from the minimum value to the 25th percentile and from the 75th percentile to the maximum value.
We conclude that warmer, brighter and denser clumps are more easily detected in methanol and more likely to host class I methanol masers. Figure 15 shows the cumulative distributions of the clump physical properties for sources with 84, 95 and 104.3 GHz methanol detection, in the same form as Fig. 14. The K-S tests reveal that, except for the FWHM clump mass, the properties of clumps with 104.3 GHz methanol detection differ significantly from those of clumps with 84 or 95 GHz detection (\(p\)-values \(<\) 0.0013). In contrast, the properties of clumps with 84 and 95 GHz methanol detection are similar (\(p\)-values \(>\) 0.0013), reflecting the fact that a large portion of sources host both maser species. This supports a significant overlap in the physical conditions that excite the 84 and 95 GHz masers. As shown in Fig. 15, the clumps with 104.3 GHz masers generally show higher luminosities, warmer dust temperatures, larger luminosity-to-mass ratios, and denser environments than the clumps with 84 and 95 GHz masers (see also Table 7). The narrower coverage (i.e. maximum minus minimum in Table 7) of these parameters for the clumps hosting 104.3 GHz masers also indicates that this maser species occurs only under more demanding physical conditions. The minimum values of these parameters suggest that this maser species does not arise in environments (on clump scales) where \(L_{\rm bol}\lesssim 10^{4}\) L\({}_{\odot}\), \(M_{\rm fwhm}\lesssim 20\) M\({}_{\odot}\), \(T_{\rm dust}\lesssim 22\) K, \(N({\rm H}_{2})\lesssim 10^{23}\) cm\({}^{-2}\) and \(n_{\rm fwhm}({\rm H}_{2})\lesssim 10^{5}\) cm\({}^{-3}\).
The isotropic luminosity of CH\({}_{3}\)OH masers can be calculated by the following equations: \[L_{84}=8.81\times 10^{-8}L_{\odot}\left(\frac{d}{1\ {\rm kpc}}\right)^{2}\left( \frac{\int S_{84}{\rm d}V}{1\ {\rm Jy\ km\ s^{-1}}}\right), \tag{1}\] \[L_{95}=9.92\times 10^{-8}L_{\odot}\left(\frac{d}{1\ {\rm kpc}}\right)^{2} \left(\frac{\int S_{95}{\rm d}V}{1\ {\rm Jy\ km\ s^{-1}}}\right), \tag{2}\] \[L_{104.3}=1.09\times 10^{-7}L_{\odot}\left(\frac{d}{1\ {\rm kpc}}\right)^{2} \left(\frac{\int S_{104.3}{\rm d}V}{1\ {\rm Jy\ km\ s^{-1}}}\right). \tag{3}\] where \(d\) is the distance in units of kpc, and \(\int S_{\nu}{\rm d}V\) is the total integrated flux density in units of Jy km s\({}^{-1}\) at the corresponding frequency \(\nu\).

Figure 10: Cumulative distribution functions of the FWZP of SiO (2-1) emission for sources with and without methanol detections. From panels (a) to (c), the distributions are shown for 84, 95 and 104.3 GHz methanol detections (red lines) and non-detections (blue lines), respectively. The vertical dashed lines in the corresponding colors depict the median values of the two samples. The \(p\)-values from the K-S tests are presented in each panel.

Figure 11: Cumulative distribution functions of the FWZP of SiO (2-1) emission for masers and maser candidates. Panels (a) and (b) show the distributions for the 84 and 95 GHz methanol masers (orange lines) and maser candidates (purple lines), respectively. The vertical dashed lines in the corresponding colors depict the median values of the two samples. The \(p\)-values from the K-S test are presented in each panel.

Figure 16 shows the correlations between the 84 GHz methanol (maser) luminosity or integrated intensity and the ATLASGAL clump properties. Similarly, the correlations for the 95 and 104.3 GHz lines are shown in Figs. 15 and 16. Their linear
From these figures and Table 8, it can be seen that (1) for all three methanol transitions, the class I maser luminosity shows significant positive correlations with the bolometric luminosity of the embedded protostellar objects and the FWHM clump mass, which is consistent with previous studies of the 95 GHz masers towards BGPS sources (Chen et al., 2011, 2012; Yang et al., 2020); (2) for 84 and 95 GHz methanol transitions, the total integrated intensity of class I methanol masers has a significant positive correlation with peak H\({}_{2}\) column density; (3) for all three methanol transitions, the methanol luminosity of class I methanol masers has a very weak, or no statistically significant correlation with luminosity-to-mass ratio, dust temperature or the mean H\({}_{2}\) FWHM volume density. The methanol luminosity independence on the mean H\({}_{2}\) FWHM volume density agrees with the results in Chen et al. (2012). ### Class I methanol masers in different evolutionary stages As massive proto-stars evolve, their surrounding material can be dramatically changed due to powerful stellar radiation and jet/outflow activities, and the changing physical and chemical properties may successively meet the excitation conditions for different maser species. Ellingsen (2007) suggested to use the presence and absence of interstellar masers to infer an evolutionary timeline for high-mass star formation regions, and proposed a possible evolutionary sequence for common maser species, including class I and II methanol, water, and OH masers. Breen et al. (2010) then refined and quantified the maser-based evolutionary timeline (see their Fig. 6). Due to the fact that infrared colors are redder in sources solely associated with class I methanol masers than in sources associated with both class I and II masers, class I methanol masers are generally believed to trace an earlier stage than class II masers (Ellingsen, 2006). 
However, some studies have revealed that class I methanol masers can in some cases also be associated with sources at more evolved stages (e.g. Voronkov et al., 2010, 2014; Gomez-Ruiz et al., 2016), for example with expanding Hii regions (Voronkov et al., 2010, 2014). Here, we use the properties of the ATLASGAL clumps to study the evolutionary stages in which class I methanol masers may arise. ATLASGAL clumps have been classified into four evolutionary stages, namely quiescent, protostellar, young stellar object (YSO), and Hii region (Konig et al., 2017; Giannetti et al., 2017; Urquhart et al., 2018, 2022). In this work, we adopt the latest evolutionary-stage classification, given by Urquhart et al. (2022), for each source. Table 2 lists the evolutionary stage of each target. Out of the 408 ATLASGAL sources, a total of 309 (\(\sim\)76%) are assigned to one of the four evolutionary stages: fifty-eight sources are starless (the quiescent stage), 69 belong to the protostellar stage, 90 are classified as YSOs, and 92 host Hii regions. The other 99 sources are classified as "ambiguous", "complicated" or "PDR+embedded source", or remain undefined in Urquhart et al. (2022), because the complexity of the submillimeter/infrared emission from their environments precludes a reliable determination of their current level of star formation. Consequently, these 99 sources are excluded from the following statistical analysis. Figure 17 shows the proportions of the clumps in the four evolutionary stages. Figure 18 shows that the detection numbers and detection rates increase with evolution for all three of our studied class I methanol transitions (including maser sources and candidates). In particular, we find that class I methanol masers are preferentially detected towards clumps at the Hii region stage.
Previous VLA observations yielded a detection rate of 54% towards UCHii regions (Gomez-Ruiz et al., 2016), supporting our findings on an even smaller scale. Unlike the 84 and 95 GHz masers, which are seen in all four evolutionary stages, methanol emission at 104.3 GHz never occurs before the YSO stage, and the rare 104.3 GHz maser emission arises only at the Hii region stage. This is expected because the 104.3 GHz line has an upper energy level of 158.6 K and is difficult to excite in cold environments. In addition, the bolometric luminosity-to-mass ratio (\(L_{\rm bol}\)/\(M_{\rm fwhm}\)) can be used as an evolutionary indicator of the star formation process (e.g. Molinari et al., 2016; Elia et al., 2021; Urquhart et al., 2022). High \(L_{\rm bol}\)/\(M_{\rm fwhm}\) ratios correspond to evolved sources, in which the luminosity increases as the envelope mass is consumed, while low ratios correspond to young sources. Our observing targets cover a wide range of \(L_{\rm bol}\)/\(M_{\rm fwhm}\) values, from 0.07 to 1037 L\({}_{\odot}\)/M\({}_{\odot}\), spanning more than four orders of magnitude. Billington et al. (2020) presented a box plot of the central 95 per cent of \(L_{\rm bol}\)/\(M_{\rm fwhm}\) ratios for 6.7 and 12.2 GHz class II methanol masers, OH masers and H\({}_{2}\)O masers (see their Fig. 14). Ladeyschikov et al. (2022) updated this analysis with larger samples of these masers. Figure 19 shows a similar box plot of the \(L_{\rm bol}\)/\(M_{\rm fwhm}\) ratio distributions for ATLASGAL clumps associated with class I masers, compared with the four maser species studied by Ladeyschikov et al. (2022). From the figure, we can see that the 84 and 95 GHz class I masers overlap to a large degree in \(L_{\rm bol}\)/\(M_{\rm fwhm}\) with the H\({}_{2}\)O, OH, and class II methanol masers. These two class I maser species can trace a similar evolutionary stage to H\({}_{2}\)O masers, and appear prior to the 6.7 and 12.2 GHz methanol and OH masers.
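The \(L_{\rm bol}\)/\(M_{\rm fwhm}\) comparison behind the box plots can be sketched as follows (an illustrative sketch; the quartile convention is an assumption):

```python
import numpy as np

def lm_box_stats(lbol, mfwhm):
    """Median and interquartile range of log10(L_bol/M_fwhm) for one
    maser sample, as displayed in a box plot."""
    logr = np.log10(np.asarray(lbol, dtype=float) /
                    np.asarray(mfwhm, dtype=float))
    q1, med, q3 = np.percentile(logr, [25, 50, 75])
    return med, q1, q3
```

Computing these statistics per maser species makes the overlap (or offset) of their \(L_{\rm bol}\)/\(M_{\rm fwhm}\) distributions directly comparable.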
Despite the small number, the 104.3 GHz class I masers appear to trace a short and more evolved stage compared to the other class I maser species.

Figure 12: Distribution of detection number and detection rate of the three methanol transitions in different FWZP ranges of the SiO (2-1) emission. The labels for the detection numbers are on the left side, and those for the detection rates are on the right side. The red, blue and green bins give the detection numbers of methanol emission at 84, 95 and 104.3 GHz, respectively, with the solid and hatched parts representing masers and maser candidates. The red, blue and green triangles indicate the detection rates of methanol emission at 84, 95 and 104.3 GHz.

### Physical conditions of class I masers

As indicated in Leurini et al. (2016), the intensity ratio of two maser transitions usually places much stronger constraints on the physical conditions than a single transition. If three coincident methanol maser transitions are detected in one source, theoretical calculations of the maser intensity ratios put even stronger constraints on the physical conditions (such as kinetic temperature and H\({}_{2}\) volume density) where all of the masers are excited.

Figure 14: Cumulative distribution functions of the physical properties of the clumps with and without 84 GHz methanol detections. From (a) to (f): cumulative distributions of the bolometric luminosity, the FWHM clump mass, the luminosity-to-mass ratio, the dust temperature, the peak H\({}_{2}\) column density, and the mean H\({}_{2}\) FWHM volume density for 84 GHz methanol detections (red lines) and non-detections (blue lines), respectively. The vertical dashed lines in the corresponding colors depict the median values of the two samples. The \(p\)-values from the K-S tests are presented in each panel.

Figure 13: Methanol integrated intensity as a function of the FWZP of SiO (2-1) for the 84, 95 and 104.3 GHz detections, respectively.
The orange and purple triangles represent the clumps hosting masers and maser candidates, respectively. The orange solid lines and black dashed lines depict the least-squares fits for methanol masers and for all methanol detections, respectively.

We use RADEX (van der Tak et al., 2007) and myRadex3 (Du, 2022) to create model grids for the methanol lines. MyRadex solves the same problem as RADEX, except that a different approach is used to solve the statistical equilibrium problem (i.e. an ordinary differential equation solver evolves the system towards equilibrium from an initial distribution). In this work, we show the modeling results from myRadex rather than RADEX, because the RADEX calculations do not reach convergence when the specific methanol column density (\(N/\Delta V\)) is higher than 10\({}^{14}\) cm\({}^{-2}\) km\({}^{-1}\) s, and such high specific methanol column densities are required to generate bright class I methanol masers (Leurini et al., 2016).

Footnote 3: [https://github.com/fjdu/myRadex](https://github.com/fjdu/myRadex)

We make use of the molecular data for CH\({}_{3}\)OH from the Leiden Atomic and Molecular Database (LAMDA4; Schoier et al., 2005), in which the energy levels, transition frequencies and Einstein A coefficients are adopted from the CDMS (Muller et al., 2005; Endres et al., 2016) and the latest collisional rates are taken from Rabli & Flower (2010). The collisional rates for the torsional ground states of \(A\)- and \(E\)-type methanol are only known for collisions with para-H\({}_{2}\), in a temperature range from 10 to 200 K and including rotational states up to \(J\)=15 (Rabli & Flower, 2010). We note that myRadex is able to account for a variable ortho-to-para ratio, which can be calculated under a local thermodynamic equilibrium assumption and depends only on the local temperature (e.g. see Eq. 1 in Takahashi, 2001).
However, significant conversion from para- to ortho-H\({}_{2}\) only starts at 700 K in C-shocks, and at even higher temperatures in J-shocks (Wilgenbus et al., 2000). This means that the abundance of ortho-H\({}_{2}\) is very low at kinetic temperatures below 200 K, so it is reasonable to consider collisions with ortho-H\({}_{2}\) negligible in this regime.

Footnote 4: [https://home.strw.leidenuniv.nl/~moldata/](https://home.strw.leidenuniv.nl/~moldata/)

Methanol molecules exist in one of two symmetry species, \(A\)- and \(E\)-type CH\({}_{3}\)OH. Because the proton-exchange reactions that interconvert the two types have very long timescales, \(A\)- and \(E\)-type methanol can be considered as two different molecules. The \(A/E\) methanol abundance ratio is related to the temperature at which the methanol was formed: the ratio is about unity at a spin temperature (characterizing the relative populations of the non-interacting spin types of a molecule) of 30-40 K, and increases at lower temperatures (see Wirstrom et al., 2011). Based on previous studies (Sobolev et al., 2005; Voronkov et al., 2006; Leurini & Menten, 2018), class I methanol masers are usually excited at kinetic temperatures above 40 K. Therefore, it is reasonable to assume a methanol \(A/E\) type ratio of unity in our calculations. Figure 20 shows myRadex calculations for the 84, 95 and 104.3 GHz methanol lines over an equally spaced grid of 39 temperatures from 10 to 200 K and 41 para-H\({}_{2}\) densities from 10\({}^{4}\) to 10\({}^{8}\) cm\({}^{-3}\). The specific column densities of methanol are fixed at 10\({}^{15}\) cm\({}^{-2}\) km\({}^{-1}\) s (Fig. 20a) and 10\({}^{16}\) cm\({}^{-2}\) km\({}^{-1}\) s (Fig. 20b).
Figure 15: Cumulative distribution functions of the physical properties of the clumps with 84, 95 and 104.3 GHz methanol detections. Panels (a) to (f) are analogous to the corresponding panels of Fig. 14. The vertical dashed lines in the corresponding colors depict the median values of each sample. The \(p\)-values from the K-S tests between each pair of methanol transitions are presented in each panel.

A plane-parallel slab geometry is adopted, which is appropriate for shocks; the corresponding escape probability is \(\beta=(1-e^{-3\tau})/3\tau\), where \(\tau\) is the optical depth. No external radiation field is taken into account except for the cosmic microwave background, because class I methanol masers are usually offset from infrared sources and UCHii regions, and interferometric observations reveal that they reside in the interface regions between outflows and ambient dense material (Plambeck & Menten, 1990; Voronkov et al., 2006). The black and white contours represent the peak intensity ratios \(T_{\rm r,104.3}/T_{\rm r,84}\) and \(T_{\rm r,95}/T_{\rm r,84}\), respectively. We assume that the maser components with the same velocity in these three transitions are co-spatial and share the same small masing area (i.e. emission size \(\ll\) beam size). In the calculation of the peak intensity ratios, the different beam dilution factors of each transition have been taken into account. For example, \(T_{\rm r,104.3}/T_{\rm r,84}\) equals \((T_{\rm mb,104.3}/T_{\rm mb,84})\cdot(\theta_{\rm beam,104.3}^{2}/\theta_{\rm beam,84}^{2})\), where \(T_{\rm mb}\) and \(\theta_{\rm beam}\) are the observed brightness temperature and the beam size at the corresponding frequency. Of our four detected 104.3 GHz masers, only G10.34\(-\)0.14 shows distinguishable and velocity-aligned (at \(\sim\)14.5 km s\({}^{-1}\)) maser features in all three transitions. We adopt this feature as an example to find out under what physical conditions the three masers can be excited simultaneously.
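The beam-dilution correction and the plane-parallel escape probability used above can be written out explicitly (a sketch; the numerical values in the usage are illustrative):

```python
import math

def corrected_ratio(tmb_num, tmb_den, beam_num, beam_den):
    """Peak intensity ratio corrected for beam dilution, assuming the
    same small masing spot in both transitions:
    T_r,num / T_r,den = (T_mb,num / T_mb,den) * (theta_num / theta_den)**2."""
    return (tmb_num / tmb_den) * (beam_num / beam_den) ** 2

def escape_probability(tau):
    """Plane-parallel slab escape probability beta = (1 - exp(-3*tau)) / (3*tau),
    with the tau -> 0 limit handled explicitly."""
    if abs(tau) < 1e-10:
        return 1.0
    return (1.0 - math.exp(-3.0 * tau)) / (3.0 * tau)
```

For negative (maser) optical depths the same expression gives \(\beta>1\), i.e. amplified emission.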
The red star in Fig. 20 marks the intersection of the two observed pairs of ratios, and the red dashed lines represent the uncertainties of the observed ratios. Under an assumed specific methanol column density of 10\({}^{15}\) cm\({}^{-2}\) km\({}^{-1}\) s, a kinetic temperature of 62\(\pm\)3 K and a para-H\({}_{2}\) number density of 4.4(\(\pm\)1.5)\(\times\)10\({}^{6}\) cm\({}^{-3}\) are needed to excite the three class I methanol masers. When the assumed specific column density of methanol increases to 10\({}^{16}\) cm\({}^{-2}\) km\({}^{-1}\) s, the masing region in G10.34\(-\)0.14 should have a kinetic temperature of 57\(\pm\)3 K and a para-H\({}_{2}\) number density of 7.9(\(\pm\)2.5)\(\times\)10\({}^{5}\) cm\({}^{-3}\). Comparing the two sub-figures of Fig. 20, it can be seen that (1) the 104.3 GHz methanol masers are usually produced in a dense (\(\gtrsim\)10\({}^{5}\) cm\({}^{-3}\)) environment, and (2) with increasing specific methanol column density, the absolute values of the optical depth become higher, which means stronger maser emission, and the strongest masers occur in a less dense environment.

## 5 Summary

We performed a survey of three class I methanol masers in the 84, 95 and 104.3 GHz transitions using the IRAM 30m telescope towards 408 ATLASGAL clumps. The main results are summarized as follows.

1. We detect methanol emission at 84, 95 and 104.3 GHz in 282 (70%), 224 (55%) and 29 (7%) sources, respectively. The 104.3 GHz emission is found only in sources with both 84 and 95 GHz detections. A total of 54, 100 and 4 sources show maser-like features at 84, 95 and 104.3 GHz, respectively. Among them, fifty 84 GHz masers, twenty-nine 95 GHz masers, and four 104.3 GHz masers are new discoveries. Our work increases the number of known 104.3 GHz masers from five to nine.
2. In our sample, the 95 GHz class I methanol maser is generally stronger than its 84 GHz maser counterpart.
For both the 84 and 95 GHz class I masers, the relative velocities between maser velocity and systemic velocity are within \(\sim\)5 km s\({}^{-1}\), confirming that the class I maser velocity traces the systemic velocity well.
3. All detected 104.3 GHz methanol masers are redshifted with respect to the systemic velocity. Based on the properties of their associated ATLASGAL clumps, this maser species would not arise in environments where \(L_{\rm bol}\lesssim 10^{4}\) L\({}_{\odot}\), \(M_{\rm fwhm}\lesssim 200\) M\({}_{\odot}\), \(T_{\rm dust}\lesssim 22\) K, \(N({\rm H}_{2})\lesssim 10^{23}\) cm\({}^{-2}\) and \(n_{\rm fwhm}({\rm H}_{2})\lesssim 10^{5}\) cm\({}^{-3}\).
4. We find 9 sources that show class I methanol masers but no SiO emission, indicating that class I methanol masers might be the only signpost of protostellar activity in extremely embedded objects at the earliest evolutionary stage. More, and stronger, class I masers were detected towards sources showing SiO line wings than towards sources without SiO wings. The total integrated intensity of class I masers is positively correlated with the SiO integrated intensity and with the FWZP of the SiO (2-1) emission. These facts strongly suggest that the properties of class I masers are regulated by the shock properties also traced by SiO.

Table 7: Statistical summary of the physical properties of the ATLASGAL clumps for the maser, detection (i.e. maser + candidate) and non-detection subsamples of the 84, 95 and 104.3 GHz transitions.

| Parameter | Sample | min | max | median | mean | \(\sigma\) |
|---|---|---|---|---|---|---|
| **84 GHz** | | | | | | |
| \(\log_{10}(L_{\rm bol})\) [L\({}_{\odot}\)] | Maser | 1.93 | 5.51 | 3.66 | 3.69 | 0.92 |
| | Detection | 1.50 | 6.91 | 3.79 | 3.78 | 1.06 |
| | Non-detection | \(-\)0.12 | 6.60 | 2.71 | 2.84 | 1.06 |
| \(\log_{10}(M_{\rm fwhm})\) [M\({}_{\odot}\)] | Maser | 1.19 | 3.33 | 2.55 | 2.55 | 0.46 |
| | Detection | 0.87 | 4.51 | 2.69 | 2.67 | 0.53 |
| | Non-detection | \(-\)0.47 | 4.14 | 2.68 | 2.60 | 0.63 |
| \(L_{\rm bol}/M_{\rm fwhm}\) [L\({}_{\odot}\)/M\({}_{\odot}\)] | Maser | 0.13 | 518.68 | 19.01 | 57.86 | 106.16 |
| | Detection | 0.08 | 1037.23 | 16.53 | 63.96 | 127.43 |
| | Non-detection | 0.07 | 291.03 | 1.40 | 11.97 | 34.47 |
| \(T_{\rm dust}\) [K] | Maser | 11.8 | 34.6 | 21.1 | 21.2 | 6.0 |
| | Detection | 8.9 | 40.5 | 20.7 | 21.4 | 6.6 |
| | Non-detection | 8.7 | 32.2 | 16.5 | 17.2 | 5.5 |
| \(\log_{10}(N({\rm H}_{2}))\) [cm\({}^{-2}\)] | Maser | 22.10 | 23.66 | 22.81 | 22.78 | 0.34 |
| | Detection | 22.10 | 24.03 | 22.78 | 22.79 | 0.33 |
| | Non-detection | 22.06 | 23.04 | 22.48 | 22.47 | 0.29 |
| \(\log_{10}(n_{\rm fwhm}({\rm H}_{2}))\) [cm\({}^{-3}\)] | Maser | 4.02 | 6.37 | 4.92 | 4.89 | 0.51 |
| | Detection | 3.71 | 6.47 | 4.80 | 4.83 | 0.49 |
| | Non-detection | 3.74 | 5.85 | 4.47 | 4.53 | 0.43 |
| **95 GHz** | | | | | | |
| \(\log_{10}(L_{\rm bol})\) [L\({}_{\odot}\)] | Maser | 1.63 | 5.65 | 3.77 | 3.76 | 0.91 |
| | Detection | 1.63 | 6.91 | 3.96 | 3.94 | 0.99 |
| | Non-detection | \(-\)0.12 | 6.00 | 2.79 | 3.00 | 1.11 |
| \(\log_{10}(M_{\rm fwhm})\) [M\({}_{\odot}\)] | Maser | 0.87 | 3.97 | 2.61 | 2.62 | 0.52 |
| | Detection | 0.87 | 4.51 | 2.73 | 2.72 | 0.54 |
| | Non-detection | \(-\)0.47 | 4.12 | 2.60 | 2.56 | 0.57 |
| \(L_{\rm bol}/M_{\rm fwhm}\) [L\({}_{\odot}\)/M\({}_{\odot}\)] | Maser | 0.08 | 647.86 | 17.73 | 47.54 | 95.62 |
| | Detection | 0.08 | 647.86 | 20.66 | 57.78 | 94.09 |
| | Non-detection | 0.07 | 1037.23 | 2.10 | 40.40 | 132.47 |
| \(T_{\rm dust}\) [K] | Maser | 8.9 | 35.5 | 20.9 | 21.3 | 5.6 |
| | Detection | 8.9 | 37.0 | 21.6 | 21.8 | 6.1 |
| | Non-detection | 8.7 | 40.5 | 17.3 | 18.3 | 6.8 |
| \(\log_{10}(N({\rm H}_{2}))\) [cm\({}^{-2}\)] | Maser | 22.10 | 23.92 | 22.89 | 22.87 | 0.36 |
| | Detection | 22.10 | 24.03 | 22.85 | 22.85 | 0.34 |
| | Non-detection | 22.06 | 23.04 | 22.52 | 22.52 | 0.21 |
| \(\log_{10}(n_{\rm fwhm}({\rm H}_{2}))\) [cm\({}^{-3}\)] | Maser | 3.93 | 6.47 | 5.02 | 5.01 | 0.49 |
| | Detection | 3.79 | 6.47 | 4.86 | 4.89 | 0.50 |
| | Non-detection | 3.71 | 5.85 | 4.52 | 4.55 | 0.40 |
| **104.3 GHz** | | | | | | |
| \(\log_{10}(L_{\rm bol})\) [L\({}_{\odot}\)] | Maser | 4.03 | 5.15 | 4.46 | 4.52 | 0.56 |
| | Detection | 3.66 | 6.28 | 4.60 | 4.64 | 0.69 |
| | Non-detection | \(-\)0.12 | 6.91 | 3.36 | 3.42 | 1.12 |
5. The properties of class I methanol masers show positive correlations with the following properties of the associated ATLASGAL clumps: bolometric luminosity, clump mass and peak H\({}_{2}\) column density. There is no statistically significant correlation between the luminosity of class I methanol masers and the luminosity-to-mass ratio, dust temperature, or mean H\({}_{2}\) volume density.
6. The 84 and 95 GHz methanol masers exist in the quiescent, protostellar, YSO, and Hii region stages. In contrast, the 104.3 GHz methanol masers are detected only in the Hii region stage. Based on the distribution of bolometric luminosity-to-mass ratios associated with different masers, we suggest that class I methanol masers at 84 and 95 GHz can trace a similar evolutionary stage to H\({}_{2}\)O masers, and appear prior to the 6.7 and 12.2 GHz methanol and OH masers.

Figure 16: Distribution of 84 GHz methanol luminosity or integrated intensity against properties of ATLASGAL clumps. From panels (a) to (e), methanol isotropic luminosities are plotted against the bolometric luminosity, the FWHM clump mass, the luminosity-to-mass ratio, the dust temperature and the mean H\({}_{2}\) FWHM volume density. In panel (f), total integrated intensities of the 84 GHz methanol transition are plotted against the peak H\({}_{2}\) column density. The orange and purple triangles represent the ATLASGAL clumps hosting masers and maser candidates, respectively. The orange solid lines and black dashed lines depict the least-squares fits for methanol masers and for all methanol detections, respectively.

Figure 17: Proportions of the 309 ATLASGAL clumps classified into the four evolutionary stages.

_Note (Table 8)._ Columns 2-7 give the slopes of the linear fits, Pearson correlation coefficients \(r\), and \(p\)-values for the parameters given in Column 1, for masers and for all methanol detections.
We present a systematic study of class I methanol masers in the inner Galaxy and their relation with SiO emission on clump scales. In future work, more in-depth statistical analyses, such as mathematical classification models, can be performed to understand which properties of the ATLASGAL clumps are the best predictors of an associated class I methanol maser. Such analysis can then be used to predict which of the as-yet unobserved ATLASGAL clumps should be targeted for class I maser searches. In addition, higher angular resolution follow-up observations will be able to pinpoint the location of class I methanol masers on smaller scales, which will provide more constraints on the relation between the masers and their driving stars.

## Acknowledgments

T. Cs. has received financial support from the French State in the framework of the IdEx Universite de Bordeaux Investments for the future Program. This work made use of Python libraries including Astropy (Astropy Collaboration et al., 2013), NumPy (van der Walt et al., 2011), SciPy (Jones et al., 2001) and Matplotlib (Hunter, 2007). This research has made use of the VizieR catalogue, operated at CDS, Strasbourg, France.

Figure 18: Distribution of methanol detection numbers and detection rates of the three methanol transitions across the four evolutionary stages. The labels for the detection numbers are on the left side, while those for the detection rates are on the right side. The red, blue and green bins present the detection numbers of methanol emission at 84, 95 and 104.3 GHz, respectively, where the solid and hatched parts represent masers and maser candidates. The red, blue and green triangles indicate the detection rates of methanol emission at 84, 95 and 104.3 GHz.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Parameters} & \multicolumn{3}{c}{Maser} & \multicolumn{3}{c}{Detection} \\ \cline{2-7} & slope & \(r\) & \(p\)-value & slope & \(r\) & \(p\)-value \\ \hline \multicolumn{7}{c}{84 GHz} \\ \hline \(L_{84}\) vs. \(L_{\rm bol}\) & 0.32\(\pm\)0.09 & 0.47 & 5.3\(\times\)10\({}^{-4}\) & 0.36\(\pm\)0.03 & 0.55 & 4.2\(\times\)10\({}^{-22}\) \\ \(L_{84}\) vs. \(M_{\rm fwhm}\) & 0.62\(\pm\)0.17 & 0.45 & 8.4\(\times\)10\({}^{-4}\) & 0.91\(\pm\)0.16 & 0.68 & 1.1\(\times\)10\({}^{-37}\) \\ \(L_{84}\) vs. \(L_{\rm bol}/M_{\rm fwhm}\) & 0.20\(\pm\)0.10 & 0.26 & 0.06 & 0.12\(\pm\)0.03 & 0.24 & 6.2\(\times\)10\({}^{-5}\) \\ \(L_{84}\) vs. \(T_{\rm dust}\) & 0.54\(\pm\)0.74 & 0.10 & 0.47 & 0.69\(\pm\)0.32 & 0.13 & 0.03 \\ \(L_{84}\) vs. \(n_{\rm fwhm}({\rm H_{2}})\) & 0.11\(\pm\)0.18 & 0.09 & 0.54 & 0.15\(\pm\)0.09 & 0.10 & 0.11 \\ \(\int S_{\rm bol}{\rm d}V\) vs. \(N({\rm H_{2}})\) & 1.17\(\pm\)0.15 & 0.74 & 4.1\(\times\)10\({}^{-10}\) & 1.17\(\pm\)0.07 & 0.74 & 1.1\(\times\)10\({}^{-47}\) \\ \hline \multicolumn{7}{c}{95 GHz} \\ \hline \(L_{95}\) vs. \(L_{\rm bol}\) & 0.40\(\pm\)0.06 & 0.57 & 8.4\(\times\)10\({}^{-10}\) & 0.36\(\pm\)0.04 & 0.57 & 1.5\(\times\)10\({}^{-19}\) \\ \(L_{95}\) vs. \(M_{\rm fwhm}\) & 0.63\(\pm\)0.11 & 0.52 & 5.6\(\times\)10\({}^{-8}\) & 0.68\(\pm\)0.07 & 0.57 & 1.3\(\times\)10\({}^{-19}\) \\ \(L_{95}\) vs. \(L_{\rm bol}/M_{\rm fwhm}\) & 0.25\(\pm\)0.08 & 0.32 & 1.3\(\times\)10\({}^{-3}\) & 0.19\(\pm\)0.04 & 0.32 & 2.3\(\times\)10\({}^{-6}\) \\ \(L_{95}\) vs. \(T_{\rm dust}\) & 1.25\(\pm\)0.52 & 0.24 & 0.02 & 1.23\(\pm\)0.34 & 0.24 & 3.7\(\times\)10\({}^{-4}\) \\ \(L_{95}\) vs. \(n_{\rm fwhm}({\rm H_{2}})\) & 0.11\(\pm\)0.13 & 0.09 & 0.39 & 0.20\(\pm\)0.09 & 0.16 & 0.02 \\ \(\int S_{\rm bol}{\rm d}V\) vs. \(N({\rm H_{2}})\) & 1.04\(\pm\)0.11 & 0.69 & 3.4\(\times\)10\({}^{-15}\) & 1.06\(\pm\)0.08 & 0.66 & 4.1\(\times\)10\({}^{-28}\) \\ \hline \multicolumn{7}{c}{104.3 GHz} \\ \hline \(L_{104.3}\) vs. \(L_{\rm bol}\) & 0.76\(\pm\)0.32 & 0.86 & 0.14 & 0.66\(\pm\)0.11 & 0.78 & 1.6\(\times\)10\({}^{-6}\) \\ \(L_{104.3}\) vs. \(M_{\rm fwhm}\) & 0.96\(\pm\)0.11 & 0.99 & 0.01 & 1.19\(\pm\)0.13 & 0.88 & 2.1\(\times\)10\({}^{-9}\) \\ \(L_{104.3}\) vs. \(L_{\rm bol}/M_{\rm fwhm}\) & \(-\)0.13\(\pm\)1.26 & \(-\)0.07 & 0.93 & 0.64\(\pm\)0.28 & 0.42 & 0.03 \\ \(L_{104.3}\) vs. \(T_{\rm dust}\) & \(-\)4.48\(\pm\)6.71 & \(-\)0.43 & 0.57 & 2.51\(\pm\)1.65 & 0.29 & 0.14 \\ \(L_{104.3}\) vs. \(n_{\rm fwhm}({\rm H_{2}})\) & \(-\)2.18\(\pm\)2.20 & \(-\)0.57 & 0.43 & 0.41\(\pm\)0.21 & 0.36 & 0.07 \\ \(\int S_{\rm bol}{\rm d}V\) vs. \(N({\rm H_{2}})\) & 0.45\(\pm\)0.43 & 0.60 & 0.40 & 1.01\(\pm\)0.11 & 0.87 & 3.1\(\times\)10\({}^{-9}\) \\ \hline \end{tabular} \end{table} Table 8: Summary of linear fitting results in log-log form of methanol luminosity or integrated intensity versus the properties of ATLASGAL clumps for the 84, 95 and 104.3 GHz transitions.

Figure 19: Box plot showing the distributions of bolometric luminosity-to-mass ratios for clumps associated with different masers. Except for the class I maser data, the plotting data for H\({}_{2}\)O, OH, and class II methanol masers at 6.7 and 12.2 GHz are taken from Fig. 7(A) in Ladeyschikov et al. (2022). The boxes filled in red, blue, light yellow and green indicate the class I and II methanol masers, H\({}_{2}\)O, and OH masers, respectively. Each box represents the interquartile range between the 25th and 75th percentiles, with a thick black line denoting the median value. The whiskers show the full range of the data.

Figure 20: Statistical equilibrium calculations of optical depths and line ratios. The left and right panels show the calculations performed under specific column densities of \(10^{15}\) and \(10^{16}\) cm\({}^{-2}\) km\({}^{-1}\) s, respectively. The colored background shows the regions with negative optical depths of the 104.3 GHz methanol transition where masers can be generated.
The black and white contours represent the peak intensity ratios of \(T_{\rm{r},104.3}/T_{\rm{r},84}\) and \(T_{\rm{r},95}/T_{\rm{r},84}\), respectively. The red star marks the intersection of the two observed pairs of ratios in G10.34\(-\)0.14, and the red dashed lines represent the 1\(\sigma\) uncertainties of the observed ratios.
2304.13431
Implicit Counterfactual Data Augmentation for Deep Neural Networks
Machine-learning models are prone to capturing the spurious correlations between non-causal attributes and classes, with counterfactual data augmentation being a promising direction for breaking these spurious associations. However, explicitly generating counterfactual data is challenging, with the training efficiency declining. Therefore, this study proposes an implicit counterfactual data augmentation (ICDA) method to remove spurious correlations and make stable predictions. Specifically, first, a novel sample-wise augmentation strategy is developed that generates semantically and counterfactually meaningful deep features with distinct augmentation strength for each sample. Second, we derive an easy-to-compute surrogate loss on the augmented feature set when the number of augmented samples becomes infinite. Third, two concrete schemes are proposed, including direct quantification and meta-learning, to derive the key parameters for the robust loss. In addition, ICDA is explained from a regularization aspect, with extensive experiments indicating that our method consistently improves the generalization performance of popular depth networks on multiple typical learning scenarios that require out-of-distribution generalization.
Xiaoling Zhou, Ou Wu
2023-04-26T10:36:40Z
http://arxiv.org/abs/2304.13431v1
# Implicit Counterfactual Data Augmentation

###### Abstract

Machine-learning models are prone to capturing the spurious correlations between non-causal attributes and classes, with counterfactual data augmentation being a promising direction for breaking these spurious associations. However, explicitly generating counterfactual data is challenging, with the training efficiency declining. Therefore, this study proposes an implicit counterfactual data augmentation (ICDA) method to remove spurious correlations and make stable predictions. Specifically, first, a novel sample-wise augmentation strategy is developed that generates semantically and counterfactually meaningful deep features with distinct augmentation strength for each sample. Second, we derive an easy-to-compute surrogate loss on the augmented feature set when the number of augmented samples becomes infinite. Third, two concrete schemes are proposed, including direct quantification and meta-learning, to derive the key parameters for the robust loss. In addition, ICDA is explained from a regularization aspect, with extensive experiments indicating that our method consistently improves the generalization performance of popular deep networks on multiple typical learning scenarios that require out-of-distribution generalization.

_Index Terms_: Counterfactual, implicit augmentation, spurious correlation, meta-learning, regularization, generalization.

## I Introduction

Deep learning models are supposed to learn invariances and make stable predictions based on the right causes. However, models trained with empirical risk minimization are prone to learning spurious correlations and suffer from high generalization errors when the training and test distributions do not match [1, 2]. For example, dogs are mostly on the grass in the training set. Thus, a dog in the water can easily be misclassified as a "drake" due to its rare scene context ("water") in the "dog" class, as illustrated in Fig. 1.
A promising solution for improving models' generalization and robustness is to learn causal models [3]: if a model concentrates on the causal correlations rather than on the spurious associations between non-causal attributes and classes, stable and accurate predictions are more likely. Counterfactual augmentation has become popular for building causal models because of its clear interpretation and model-agnostic nature. For instance, Lu et al. [4] and He et al. [5] augmented the data effectively by swapping identity pronouns in texts. Moreover, Chang et al. [6] introduced two new image generation procedures that included counterfactual and factual data augmentations to reduce spuriousness between the backgrounds of images and the labels, achieving higher accuracy on several challenging datasets. Mao et al. [2] utilized a novel strategy to learn robust representations that steered generative models to manufacture interventions on features caused by confounding factors. Nevertheless, the methods presented above suffer from several shortcomings. Specifically, it is not trivial to explicitly find all confounding factors, and the training efficiency declines as excess augmented images are involved in training. It should be mentioned that implicit data augmentation addresses the inefficiency of explicit augmentation by avoiding the generation of excess samples. ISDA [7] conducted a pioneering study on implicit data augmentation, inspired by the observation that the deep features in a network are usually linearized. It translates samples along semantic directions in the feature space based on an assumed class-wise augmentation distribution. By deriving an upper bound on the expected cross-entropy (CE) loss, ISDA enables optimization of only the upper bound to achieve data augmentation in an efficient way.
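To make the mechanism concrete, here is a minimal NumPy sketch of such a surrogate: under Gaussian feature augmentation \(\tilde{a}\sim\mathcal{N}(a,\lambda\Sigma_{y})\), the expected CE loss admits a closed-form upper bound whose logits receive only a quadratic correction. This reconstruction and its variable names are ours, not the authors' code:

```python
import numpy as np

def isda_surrogate_loss(a, y, W, b, Sigma, lam):
    """Closed-form surrogate for the expected cross-entropy loss when the
    deep feature `a` is implicitly augmented as a_tilde ~ N(a, lam*Sigma[y]).

    a:     (d,) deep feature of one sample
    y:     int, ground-truth class index
    W:     (C, d) classifier weights; b: (C,) biases
    Sigma: (C, d, d) class-conditional covariance matrices
    lam:   augmentation strength (lam = 0 recovers the standard CE loss)
    """
    logits = W @ a + b                       # (C,) plain logits
    diff = W - W[y]                          # (C, d): w_j - w_y
    # Quadratic correction from the Gaussian moment-generating function;
    # it vanishes for j = y, so the true-class logit is untouched.
    quad = 0.5 * lam * np.einsum('cd,de,ce->c', diff, Sigma[y], diff)
    z = logits + quad
    m = z.max()                              # stable log-sum-exp
    return m + np.log(np.exp(z - m).sum()) - z[y]
```

With \(\lambda=0\) the correction vanishes and the standard CE loss is recovered, which gives a quick sanity check for any implementation of this bound.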
Moreover, MetaSAug [8] innovatively applies the idea of ISDA to long-tailed recognition; it does not modify the augmentation distribution of ISDA but optimizes the covariance matrices using metadata, yielding good performance on imbalanced data. Besides, RISDA [9] constructs novel augmentation distributions for tail classes by mixing in information from relevant categories, thus more effectively enriching samples in tail categories. However, these methods adopt purely class-wise semantic augmentation strategies, and thus samples in the same class have identical augmentation distributions, which is inaccurate. Fig. 1(a) illustrates samples in the same class that may be negatively influenced by different attributes (or classes), where an ideal augmentation strategy should consider these sample-wise non-causal attributes. Additionally, these methods adopt the same augmentation strength for each instance, ignoring that inappropriate distributions (e.g., imbalanced label and attribute distributions) also lead

Fig. 1: (a): Illustration of images affected by spurious correlations due to rare attributes (e.g., posture, color, and scene context). \(C_{1}\), \(C_{2}\), and \(C_{3}\) are the dog, polar bear, and drake classes, respectively. The solid line connects the sample's ground-truth class, and the dotted line connects the class with a spurious correlation with the sample. (b): Illustration of attribute imbalance.
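For intuition about what these implicit schemes approximate, the same class-wise augmentation can be emulated explicitly by sampling augmented feature copies and averaging their CE losses; the cost then grows with the number of copies, which is exactly the inefficiency the implicit formulations avoid. A minimal Monte-Carlo sketch (all names are ours, not any of the papers' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_augmented_ce(a, y, W, b, Sigma, lam, n=10000):
    """Monte-Carlo estimate of the expected CE loss when a deep feature is
    explicitly augmented as a_tilde ~ N(a, lam * Sigma[y])."""
    cov = lam * Sigma[y]
    feats = rng.multivariate_normal(a, cov, size=n)   # (n, d) augmented copies
    logits = feats @ W.T + b                          # (n, C)
    m = logits.max(axis=1, keepdims=True)             # stable log-softmax
    log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -log_probs[:, y].mean()                    # average CE over copies
```

As the number of copies grows, this estimate converges to the expected loss that the closed-form surrogates bound, but at a per-sample cost the implicit approach does not pay.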
2304.05253
Approximating Online Human Evaluation of Social Chatbots with Prompting
As conversational models become increasingly available to the general public, users are engaging with this technology in social interactions. Such unprecedented interaction experiences may pose considerable social and psychological risks to the users unless the technology is properly controlled. This highlights the need for scalable and robust evaluation metrics for conversational chatbots. Existing evaluation metrics aim to automate offline user evaluation and approximate human judgment of pre-curated dialogs. However, they are limited in their ability to capture subjective perceptions of users who actually interact with the bots and might not generalize to real-world settings. To address this limitation, we propose an approach to approximate online human evaluation leveraging large language models (LLMs) from the GPT family. We introduce a new Dialog system Evaluation framework based on Prompting (DEP), which enables a fully automatic evaluation pipeline that replicates live user studies and achieves an impressive correlation with human judgment (up to Pearson r=0.95 on a system level). The DEP approach involves collecting synthetic chat logs of evaluated bots with an LLM in the other-play setting, where the LLM is carefully conditioned to follow a specific scenario. We further explore different prompting approaches to produce evaluation scores with the same LLM. The best performing prompts, which contain few-shot demonstrations and instructions, show outstanding performance on the tested dataset and demonstrate the ability to generalize to other dialog corpora.
Ekaterina Svikhnushina, Pearl Pu
2023-04-11T14:45:01Z
http://arxiv.org/abs/2304.05253v2
# Approximating Human Evaluation of Social Chatbots with Prompting

###### Abstract

Once powerful conversational models became available to a wide audience, users started actively engaging in social interactions with this technology. Such unprecedented interaction experiences may pose considerable social and psychological risks to the users unless the technology is properly controlled. This creates an urgent need for scalable and robust evaluation metrics for conversational chatbots. Existing automatic evaluation metrics usually focus on objective quality measures and disregard subjective perceptions of social dimensions. Moreover, most of these approaches operate on pre-produced dialogs from available benchmark corpora, which implies human involvement for preparing the material for evaluation and, thus, impedes the scalability of the metrics. To address this limitation, we propose to make use of the emerging large language models (LLMs) from the GPT family and describe a new framework that allows conducting dialog system evaluation with prompting. With this framework, we are able to achieve full automation of the evaluation pipeline and reach impressive correlation with the human judgement (up to Pearson \(r=0.95\) on the system level). The underlying concept is to collect synthetic chat logs of evaluated bots with an LLM in the other-play setting, where the LLM is carefully conditioned to follow a specific scenario. We further explore different prompting approaches to produce evaluation scores with the same LLM. The best-performing prompts, containing few-shot demonstrations and instructions, show outstanding performance on the tested dataset and demonstrate the ability to generalize to other dialog corpora.

## 1 Introduction

The recent arrival of conversational AI, marked by the public release of ChatGPT from OpenAI,1 initiated unprecedented user engagement with conversational chatbots in a real-world setting.
With the impressive naturalness of machines' responses, users are going beyond traditional transactional exchanges and starting to explore more social interaction scenarios with increasing curiosity (Thormundsson, 2023). In such situations, users might be subject to social and psychological harms if dialog systems fail to follow commonsense social rules (Svikhnushina and Pu, 2022; Kim et al., 2022). Several instances of alarming social behavior of the technology have already been discussed in the media (Roose, 2023; De Cosmo, 2023; Life, 2023). In this context, developing meaningful and robust evaluation metrics for these systems has become particularly urgent to ensure that the models are safe and acting in the best interest of the users before their release. Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) Initially, human evaluation was considered the de facto standard for evaluating dialog systems (Li et al., 2019) (Figure 1a). As running human evaluation is time- and resource-consuming, a number of automatic evaluation metrics for dialog systems have been proposed (Mehri et al., 2022; Yeh et al., 2021). Although dialog evaluation is handled automatically, the majority of these approaches still require human involvement at some point in the process, for example for making conversations with the bots that will be subject to further evaluation (Mehri and Eskenazi, 2020; Ghazarian et al., 2022) or for providing intermediate annotations (Deriu et al., 2020) (Figure 1b). These limitations impede the scalability and broad applicability of the existing metrics for evaluating new dialog systems from scratch. More importantly, existing metrics mainly focus on objective quality measures of the chatbots' responses in a fixed-context setting, such as evaluating accuracy, fluency, or specificity of separate model outputs conditioned on provided input (Lee et al., 2022).
Meanwhile, users typically engage in multi-turn exchanges with their chatbots and form their appraisals of the systems based on the holistic interactive experience, which is influenced by various subjective factors. Previous research has already emphasized that taking interactivity into account is essential for improving the evaluation of chatbots (Ghandeharioun et al., 2019; Lee et al., 2022); however, efforts taken in this direction have been limited. To address this gap, we propose a novel automatic **D**ialog system **E**valuation framework based on **P**rompting, DEP. Our framework automates the whole pipeline of dialog system evaluation in an interactive setting. As the first step towards this goal, we leverage a large language model (LLM) from the GPT family to collect synthetic chat logs of evaluated bots with the LLM. Second, we prompt the same LLM to produce the resulting evaluation scores for the generated chat logs and, finally, rank the chatbots based on their overall performance (Figure 1c). While using bot-play is not a new idea per se, we emphasize the importance of carefully choosing a dialog partner for the evaluation targets, specifically for social conversational contexts where the roles of the two interlocutors can differ significantly. For example, it was shown that the emotion/intent distributions in conversations between an emotional speaker and an empathetic listener are very different for the two dialog partners (Welivita and Pu, 2020). To account for this, in the first step of our framework, we propose prompting LLMs to play a particular social role over the course of the interaction with the chatbots to be evaluated. For the second step, we draw inspiration from the fact that LLMs demonstrate solid performance improvement when their generation process is augmented with instructions (Kim et al., 2022).
We demonstrate that prompting the model with appropriate instructions that explain how fine-grained evaluation dimensions relate to the overall dialog score leads to drastic performance improvement, reaching up to \(r=0.95\) Pearson correlation with the human judgement on a system level. Overall, our contributions include the following. 1) We describe an end-to-end prompting-based evaluation framework for dialog systems, specifically targeting social interaction scenarios (Section 3). 2) Our experiments showcase the effectiveness of prompting for assigning a desired social role to LLMs and, thus, collecting machine-generated dialogs that better approximate real interpersonal communication (Section 4.1.2). 3) We consider different prompt designs and conclude that including demonstrations together with instructions results in the best performance (Sections 4.1.3, 4.2.2).

## 2 Related Work

### Automatic Evaluation of Chatbots

Automatic dialog evaluation has been a long-standing goal for practitioners. Initial works focused on evaluating chatbots' responses against a ground-truth reference (Papineni et al., 2002; Tao et al., 2018). Later works moved on to exploring reference-free evaluation metrics, as referenced evaluation was shown to be ineffective due to the wide range of acceptable responses for a single context (Liu et al., 2016), implying that comparing with a single reference is limited. Reference-free metrics usually operate either on the utterance or the dialog level. For the utterance level, practitioners have explored ways to evaluate response appropriateness for the preceding context (Lan et al., 2020; Pang et al., 2020) or predict the qualities of the follow-up response as a proxy for the quality of the preceding dialog (Ghazarian et al., 2022, 2020; Mehri and Eskenazi, 2020).
For the dialog level, a number of diverse approaches have been proposed, ranging from aggregating several fine-grained utterance-level evaluations (Zhang et al., 2021), to employing graph representations to model dialog dynamics (Huang et al., 2020; Zhang et al., 2021), to using semantic-level manipulations to teach the evaluation model to distinguish coherent and incoherent dialogs (Ghazarian et al., 2022). The works above require dialog data to produce scores and usually operate on available dialog datasets that were created with human involvement. Since one of the major goals of dialog evaluation is measuring the performance of one dialog system compared to others, recent works started paying more attention to system-level evaluation. Deriu et al. (2020) proposed a partially automated framework, relying on survival analysis, where human judges rank chatbots regarding their ability to mimic conversational behavior using bot-to-bot conversations. Sato et al. (2022) proposed a particular bipartite-play approach for collecting bot-to-bot conversations to provide a fairer comparison setting for evaluation targets. These papers consider methodologies for organizing bot-to-bot conversation sessions, but they are not concerned with how these bot-to-bot conversations unfold. In our work, we explore the use of bot-to-bot conversations to model a desired social behavior.

### Prompting

The prompt-based learning paradigm (Liu et al., 2023) received significant attention after Brown et al. (2020) demonstrated how GPT-3, a large foundation model, can handle a wide range of tasks well without the need for fine-tuning, relying only on natural-language prompts and task demonstrations as context. Prompt-based model performance depends on the design of the provided prompt.
Prompt engineering efforts explore approaches for designing prompts, which vary in the shape of prompts (cloze or prefix), the human effort required for writing prompts (manual or automatic), and the number of demonstrations provided to the model in the prompt (zero-shot or few-shot) (Liu et al., 2023). Prompt-based learning applied to recently created LLMs has been reported to achieve outstanding results on a variety of tasks and benchmarks, including classification, reasoning, coding, translation, and many others (_e.g._ Wei et al., 2022; Chowdhery et al., 2022; Chung et al., 2022). However, prompting for the evaluation of dialog systems has not been widely investigated. We are only aware of one other simultaneous and independent effort in this direction. Huynh et al. (2023) studied how different LLM parameters (type, size, training data) may influence dialog evaluation, focusing on utterance- and dialog-level evaluation using existing datasets. Our work focuses on how prompting can be used to capture a holistic evaluation of dialog systems in social settings, relying on freshly generated dialogs.

## 3 Proposed Method: DEP

We introduce our DEP framework, which consists of two consecutive steps. First, it requires collecting interactive chat logs between the LLM and the evaluated chatbots, which we denote as LLM-to-bot play. Second, the LLM is prompted to generate scores for these chat logs. The generated scores are further aggregated to produce a final ranking of the systems. We describe each of the steps below.

Figure 1: Three types of dialog evaluation with the corresponding processes. In the first step, dialog logs are curated. In the second step, each dialog log is assigned a dialog-level score. In the third step, the system ranking is obtained by aggregating the dialog scores of each chatbot. Greyed-out elements indicate steps that are intended to be automated by means of automatic evaluation once it gets used in the wild.
Pink boxes mark the steps in the process where the correlation (\(r\).) with the ground truth human judgment is computed to validate the automatic evaluation metric during its development process. a) Subplot demonstrates the human evaluation process, where human effort is required both for the first and the second steps. b) Subplot demonstrates a widely represented approach of dialog evaluation in the literature. These works usually focus exclusively on the second step, automating human effort for assigning scores to dialogs. They operate on dialog datasets that are created with human involvement, resulting in only partial automation. c) Subplot demonstrates the approach followed in our paper, where human effort is eliminated at both steps and human judgment is approximated on the system level. ### Prompted LLM-to-Bot Play In social settings, two partners may play considerably different roles in a dialog, thus establishing very distinct conversational behaviors. Examples include conversations between a student and a teacher, an emotional speaker and an empathetic listener, or even between two interlocutors with different personas. Chatbots are usually built to perform well in one of these roles (e.g., empathetic listener), but not necessarily the other. Therefore, collecting synthesized dialogs via self-play of the chatbot with itself (or a similar competing model) might fail to represent a realistic discourse flow due to the differences in the intents produced by speakers and listeners in dialogs. To address this consideration and render the synthesized dialogs that better approximate real social interactions, we propose leveraging LLMs' ability to produce responses on behalf of an assigned character (Thoppilan et al., 2022). Specifically, we suggest letting the evaluated chatbots converse with a LLM prompted to play a particular social role. Figure 2 demonstrates how to structure the prompt to produce each next output of the LLM in an interactive manner. 
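For instance, such a role-conditioning prompt can be filled in programmatically. The wording below quotes the Speaker template used later in the paper's iEval experiments; the wrapper function itself is our illustrative sketch:

```python
def speaker_prompt(emotion: str, situation: str) -> str:
    """Instantiate the Speaker role-play prompt for a given scenario.
    Template wording is quoted from the paper; the function is ours."""
    return (
        f"I am a Speaker, feeling {emotion} because {situation}. "
        "I am sharing these emotions with a Listener, expecting empathy "
        "and understanding from them. I respond as a Speaker in a dialog."
    )
```

At each turn, this role description is prepended to the accumulated dialog history so the LLM keeps producing Speaker-side utterances.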
Meanwhile, responses from the evaluated chatbots are computed by passing the accumulated dialog history to these chatbots as input context. The process can be repeated for multiple dialog turns. The length of the exchange may depend on the amount of detail provided to prompt the LLM. The more specific the prompt is, the faster the evaluated chatbot can demonstrate its performance in the social situation of interest. On the contrary, more generic conversation starters require more dialog turns to reveal the targeted social behavior.

### Prompted Evaluation

Once dialog logs are synthesized, we propose using prompting to produce evaluation scores for each dialog. Prompts can be constructed in several ways. We investigate zero-shot and few-shot settings, either with or without instructions, in our experiments (Section 4). Many available foundation LLMs are accessible through APIs and only output text completions without corresponding log probabilities. Therefore, regardless of the type of prompt that we use, to generate a score for each dialog, we obtain a textual form of the score from the LLM completion and then use a verbalizer function to map it to a numerical value, drawing inspiration from Schick and Schütze (2021). Formally, given a dialog log \(d\), we construct a prompt \(P(d)\) that takes \(d\) as input and outputs a prompt that contains exactly one mask token as a placeholder for the dialog score. Let \(y\) be a predicted token for \(P(d)\). We then define a verbalizer as an injective function \(v\) that maps each score in textual form to a numerical value. Thus, \(v(y)\) produces a numerical score for a single dialog. The final rating of a dialog system \(j\) is obtained by averaging the corresponding dialog scores: \[s_{j}=\frac{\sum_{i=1}^{N_{j}}v(y_{ji})}{N_{j}},\] where \(N_{j}\) denotes the number of dialogs collected for the system \(j\). For fair evaluation, the number of dialogs collected for each evaluated chatbot should be identical.
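The components just defined (the prompt constructor \(P(d)\), the verbalizer \(v\), and the system rating \(s_j\)) can be sketched as follows. The mask token, the Bad/Okay/Good label set (mirroring the 3-point overall scale of the iEval data used in the experiments), and all function names are illustrative assumptions of ours:

```python
def build_eval_prompt(dialog: str, instructions: str = "", demos: str = "") -> str:
    """P(d): wrap a dialog log into an evaluation prompt containing exactly
    one mask token as a placeholder for the dialog score."""
    parts = [p for p in (instructions, demos, f"Dialog:\n{dialog}\nScore: [MASK]") if p]
    return "\n\n".join(parts)

# Injective mapping from textual scores to numbers (assumed 3-point scale)
SCORE_MAP = {"Bad": 1, "Okay": 2, "Good": 3}

def verbalizer(token: str) -> int:
    """v: map the LLM's textual completion y to a numerical score v(y)."""
    return SCORE_MAP[token.strip()]

def system_rating(completions) -> float:
    """s_j = (1/N_j) * sum_i v(y_ji): average the verbalized dialog scores."""
    return sum(verbalizer(y) for y in completions) / len(completions)
```

In practice the `[MASK]` slot is what the LLM is asked to complete, and `system_rating` is applied once per system over an identical number of dialogs.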
## 4 Results

For all reported experiments, we used the most capable version of the InstructGPT model (text-davinci-003) available at the time we initiated our experiments in early Q1 2023. We used this model as it was easily accessible through the OpenAI API2 and was expected to have superior performance for social scenarios, as it was trained based on human feedback, which captures subjective human judgement of interactive outputs (Ouyang et al., 2022). Footnote 2: [https://openai.com/blog/openai-api](https://openai.com/blog/openai-api) Following previous works that considered system-level evaluation (Lowe et al., 2017; Ghandeharioun et al., 2019), we report Pearson correlation for our experiments, unless specified otherwise. We also opted for this type of correlation coefficient as it performed better at capturing whether the automated metric succeeds in preserving the gap in scores between the best- and least-performing chatbots, information that gets lost with rank correlation.

Figure 2: Prompt template to condition a LLM to play an assigned social role while interacting with an evaluated chatbot.

We start by demonstrating the application of our evaluation framework to empathetic dialog systems, as in these interactive scenarios the two conversational partners have clearly distinct social roles: an emotional speaker and an empathetic listener. Further, we consider the generalizing ability of the framework to other social domains.
#### 4.1.1 Dataset and Evaluated Chatbots

We used the iEval dataset for this experiment (Svikhnushina et al., 2022). The dataset features human conversations with four empathetic chatbots collected in an interactive manner. During the dataset curation process, each human was assigned an emotion label with a situation description taken from the EmpatheticDialogues dataset (Rashkin et al., 2019) and asked to have a 6-turn conversation with each chatbot while playing a character in the assigned scenario. Overall, there are 480 situation descriptions in the dataset, which evenly cover two emotional polarities: positive and negative. As each chatbot participated in each scenario, there are in total 1920 dialogs in the dataset. Human interlocutors provided their appraisals of the chatbot listeners in each dialog, including five fine-grained listener qualities on a 5-point Likert scale: politeness, empathy, likability, repetitiveness, and making sense, and an overall dialog rating on a 3-point scale. All scores are provided on a dialog level. The four chatbot models used to curate the dataset were Blender (Roller et al., 2021), MIME (Majumder et al., 2020), MEED and Plain (Xie and Pu, 2021). All of them are publicly available. We use these models in the same configurations for our experiment.

#### 4.1.2 LLM-to-Bot Play Results

As the first step to validate our evaluation framework, we analyzed whether the LLM succeeds in mimicking human discourse following an assigned social role and whether replacing human speakers with the LLM causes any considerable changes in the chatbots' response patterns. To generate LLM-to-bots conversations, we closely followed the procedure of the iEval dataset curation. Specifically, we used emotion labels and situation descriptions from the dataset to create prompts for the LLM: _I am a Speaker, feeling <emotion> because <situation>. I am sharing these emotions with a Listener, expecting empathy and understanding from them.
I respond as a Speaker in a dialog._

The first LLM input was also taken from the iEval dataset. For each scenario, we collected LLM conversations with each of the four bots, letting them converse for 3 turns each, i.e., 3 inputs from the LLM and 3 responses from the chatbot. To examine the similarity of discourse patterns between human-to-bot and LLM-to-bot conversations, we started by annotating each dialog turn in the two datasets with emotion and empathetic intent labels, using the emotion/intent classifier developed by Welivita and Pu (2020) for the EmpatheticDialogues dataset. As the datasets in our experiment were grounded in situation descriptions taken from EmpatheticDialogues, the classifier was expected to generalize well to our data. We then visualized the most prominent discourse patterns for the two corpora in the form of Sankey diagrams, shown in Figures 3 and 4. From visual inspection, it can be seen that the LLM's emotion distribution over the course of the dialog (Figure 4) largely resembles that of the human interlocutors (Figure 3). More importantly, the sets of intents produced by the empathetic chatbots are also very similar between the two figures, with _Questioning_, _Sympathizing_, and _Acknowledging_ being the most prominent ones. Thus, our freshly generated interactive dataset of LLM-to-bot play was deemed a reasonable approximation of human-to-bot conversations.

#### 4.1.3 Prompted Evaluation Results

Turning to the second step of our evaluation framework, we examined different types of prompting to produce scores for the generated LLM-to-bot dialogs. Specifically, two variables in the prompt design were considered. First, we tried score generation in zero-shot and few-shot settings. For the few-shot setting, the number of demonstrations was fixed to the number of points on the ground-truth human evaluation scale, with one representative example supplied for each score.
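The curation loop just described — a prompted LLM speaker alternating with a chatbot listener for 3 turns each — can be sketched as follows. `llm_turn` and `bot_turn` are hypothetical stand-ins for the real LLM API call and an evaluated chatbot; the scenario and replies are invented.

```python
def build_speaker_prompt(emotion, situation):
    # Prompt wording following Section 4.1.2
    return (f"I am a Speaker, feeling {emotion} because {situation}. "
            "I am sharing these emotions with a Listener, expecting empathy "
            "and understanding from them. I respond as a Speaker in a dialog.")

def synthesize_dialog(emotion, situation, opener, llm_turn, bot_turn, n_turns=3):
    """Alternate LLM speaker and chatbot listener turns, as in the iEval curation."""
    prompt = build_speaker_prompt(emotion, situation)
    history = [("Speaker", opener)]            # first LLM input taken from iEval
    history.append(("Listener", bot_turn(history)))
    for _ in range(n_turns - 1):
        history.append(("Speaker", llm_turn(prompt, history)))
        history.append(("Listener", bot_turn(history)))
    return history

# Toy stand-ins for the LLM and an evaluated chatbot
dialog = synthesize_dialog(
    "proud", "I finished my first marathon",
    opener="I finally ran a whole marathon yesterday!",
    llm_turn=lambda prompt, history: "It took months of training but I did it.",
    bot_turn=lambda history: "That is wonderful, tell me more!",
)
print(len(dialog))  # 6 turns: 3 speaker inputs and 3 chatbot responses
```

In the real pipeline, `llm_turn` would send the role-play prompt plus the dialog history to the LLM, and `bot_turn` would query one of the four evaluated chatbots.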
Thus, for the iEval dataset we used three demonstration dialogs corresponding to the three possible evaluation scores: _Bad_, _Okay_, and _Good_. The examples were selected manually and are provided in Table 3 in Appendix A. Second, we analyzed whether providing additional instructions helped the LLM evaluation performance. To write the instructions, we relied on the findings of Svikhnushina et al. (2022), which explained how chatbots' performance on various fine-grained dimensions translates into the overall score. As the authors emphasized the difference in humans' expectations of an empathetic listener in positive and negative conversational scenarios, we devised slightly different instructions to prompt evaluation for these two emotional polarities. Specific formulations of the instructions are also provided in Table 3 in Appendix A. To generate a score for each dialog, we prompted the LLM to complete the masked score, given the log of the evaluated dialog. Depending on the configuration, few-shot demonstrations and/or instructions were prepended to the prompt. A template of the used prompt can be found in Figure 6 in Appendix A.

Figure 3: Sankey diagram showing discourse patterns in human-to-bot conversations originating from the iEval dataset.

Figure 4: Sankey diagram showing discourse patterns in freshly generated LLM-to-bot conversations.

After obtaining dialog-level scores, we aggregated them to produce system-level ratings. One system was defined as a chatbot operating in one of the two emotional polarities. This decision is driven by the fact that, based on the human evaluation results in (Svikhnushina et al., 2022), the chatbots demonstrated statistically significant differences in performance depending on the emotional polarity. Thus, we considered eight systems for computing system-level correlations. System-level correlations between human and LLM judgements for each of the four possible prompt design manipulations are reported in Table 1.
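The aggregation from dialog-level scores to the eight system-level ratings (chatbot × emotional polarity) can be sketched as below; the records and the numeric mapping of the _Bad_/_Okay_/_Good_ labels are illustrative assumptions.

```python
from collections import defaultdict

SCORE_MAP = {"Bad": 1, "Okay": 2, "Good": 3}   # iEval's 3-point overall scale

def system_level_ratings(dialog_records):
    """Average dialog-level scores per system, a system being a
    (chatbot, emotional polarity) pair -- eight systems in total here."""
    buckets = defaultdict(list)
    for rec in dialog_records:
        buckets[(rec["bot"], rec["polarity"])].append(SCORE_MAP[rec["score"]])
    return {system: sum(v) / len(v) for system, v in buckets.items()}

# Invented toy records standing in for LLM-scored dialogs
records = [
    {"bot": "Blender", "polarity": "positive", "score": "Good"},
    {"bot": "Blender", "polarity": "positive", "score": "Okay"},
    {"bot": "MIME", "polarity": "negative", "score": "Bad"},
]
print(system_level_ratings(records))
# {('Blender', 'positive'): 2.5, ('MIME', 'negative'): 1.0}
```

The resulting per-system means are what enter the system-level Pearson correlation against the human ratings.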
Few-shot prompting with instructions results in the highest correlation of 0.954, which is further illustrated by the scatter plots in Figure 5. According to the plots, providing examples helps the LLM to calibrate the produced scores, eliminating the positivity bias, whereas instructions result in reduced variance.

### 4.2 Generalizability to a Different Domain

In this section, we consider how prompted evaluation generalizes to a different corpus and conversational setting. As the results above suggested that prompts combining instructions with examples perform best for evaluation, for the following experiment we searched for a dataset that allowed formulating instructions defining which properties correspond to good or bad overall appraisal ratings of the dialogs. We therefore selected a dataset that contains both fine-grained and overall ratings of the dialogs and used the information about the most relevant fine-grained dimensions to formulate the instructions. We also considered only those datasets that contain multi-turn dialogs collected in an interactive manner. The selected dataset features human-to-bot dialogs, with some dialog systems that are not publicly available. Moreover, these dialogs were collected in a generic manner, without the purpose of modeling any specific social behavior (e.g., empathy as in iEval). Due to these considerations, in the following experiment we only studied the performance of the second step of our DEP framework, skipping the synthesis of new LLM-to-bot conversations. In the general case, when researchers have access to their evaluation targets, prompting LLMs to engage in a generic social interaction with the evaluated bots should be straightforward, as we demonstrated in Section 4.1.2.

#### 4.2.1 Dataset

We used the FED dataset to study the generalizability of prompted evaluation (Mehri and Eskenazi, 2020).
It contains 124 open-domain dialogs of humans with humans or with machines (Meena and Mitsuku) that were originally released by Adiwardana et al. (2020). Each dialog in FED is annotated with human appraisals of ten fine-grained dialog qualities and an overall impression rating.

#### 4.2.2 Prompted Evaluation Results

To construct a prompt for evaluating the FED dataset, we selected five dialog examples covering the five possible overall dialog ratings, ranging from _Very bad_ to _Very good_; they are provided in Table 2 in Appendix B. To formulate the instructions, we used information from the original paper describing the relative importance of each fine-grained dialog quality for the overall impression. The specific formulation of the instruction is provided in Appendix B. Given the small number of participating models in the FED dataset, we report dialog-level correlation results instead of system-level ones. We also compute Spearman correlation for this experiment for the purpose of comparison with the results produced by the FED metric proposed together with the dataset. Our prompted evaluation achieved a correlation of \(0.655\) (\(p<0.001\)) with the ground-truth human scores, which exceeds the FED metric's correlation (\(0.443\), \(p<0.05\)) by a considerable margin. Thus, the prompted evaluation demonstrated the ability to generalize to new open-domain conversational settings.

\begin{table} \begin{tabular}{c|c c} \hline \hline & No instructions & Instructions \\ \hline \multirow{2}{*}{Zero-shot} & 0.748 & 0.651 \\ & (p=0.033) & (p=0.080) \\ \multirow{2}{*}{Few-shot} & 0.892 & **0.954** \\ & (p=0.003) & (p<0.001) \\ \hline \hline \end{tabular} \end{table}
Table 1: System-level correlation for the four possible prompt design manipulations, with p-values in brackets.

Figure 5: Scatter plots depicting the system-level correlation results. Human scores are based on the iEval dialog annotations, while prompted LLM scores are computed based on the generated dialogs.
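For completeness, the rank-based comparison above can be reproduced with a minimal self-contained Spearman implementation (rank transform with tie averaging, then Pearson on the ranks); the example scores below are invented.

```python
def ranks(xs):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for kk in range(i, j + 1):
            r[order[kk]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented dialog-level overall ratings (human vs. prompted LLM)
human = [5, 4, 4, 2, 1]
llm = [4, 5, 3, 2, 2]
print(round(spearman_rho(human, llm), 3))
```

In practice `scipy.stats.spearmanr` does the same job; the point here is only to make the rank-correlation computation explicit.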
## 5 Discussion

Dialog system evaluation with prompting showed its usefulness both for generating new interactive exchanges with the evaluated systems and for judging their performance, thereby allowing for end-to-end automation of the evaluation pipeline. We deem this approach particularly promising for the evaluation of social aspects of conversations. LLMs used for prompting suffer from occasional hallucinations, i.e., a tendency to make up factual information (Ouyang et al., 2022). It might be difficult to keep track of all the specific factual items that come up in an interactively created dialog between two conversational models and to search for ground-truth references for each of them in order to construct objective metrics such as a model's accuracy or truthfulness (Lin et al., 2022). In contrast, prompting the LLM to establish a specific behavior and providing instructions about commonsense social norms appears more feasible once these instructions are established.

Drawing from the visualization of discourse patterns in our newly collected dataset of dialogs between the LLM and empathetic chatbots, we observed that the prompted LLM largely mirrors the conversational patterns of humans. However, there are also some differences. For example, in Figure 4 there is an apparent sub-flow with a _Grateful_ emotion, increasingly displayed by the LLM. We believe the LLM might have developed an agreeable "personality" due to its training procedure based on Reinforcement Learning from Human Feedback, which optimized the LLM's responses to satisfy human labelers. Future research can consider alternative prompting techniques to make the emotion/intent distribution of LLMs' responses even more balanced and representative.

We conducted our experiments with only one LLM and explored few-shot prompting scenarios with a fixed number of demonstrations.
Future studies could explore the applicability of other LLMs to the DEP framework, as already initiated by Huynh et al. (2023). An area of particular interest would be to study the efficacy of the framework with open-source LLMs, such as LLaMa (Touvron et al., 2023). We would also like to explore how DEP generalizes to evaluating other phenomena in social conversations, apart from generic open-domain interactions and empathetic dialogs. For example, further studies might focus on applying the framework to evaluate toxicity or humor in dialogs. However, this research direction requires the curation of appropriate calibration datasets.

## 6 Conclusion

In this paper, we proposed DEP, a framework for evaluating social chatbots using prompting. Our framework addresses the limitations of evaluation approaches that use benchmark datasets in an offline setting. We describe how LLMs can be leveraged to synthesize realistic conversational logs with the evaluated chatbots in an interactive manner. We further outline how knowledge about the desired fine-grained qualities of a conversational partner can be translated into prompting instructions to generate reliable overall scores for the collected dialogs. The proposed framework eliminates human effort at all steps of the evaluation process, leading to a highly time- and cost-efficient procedure. Our experiments demonstrated that the prompting-based evaluation results achieve high correlation with human judgement, reaching an impressive Pearson \(r=0.95\) system-level correlation for the iEval dataset, which features dialogs with empathetic chatbots. We explain why we consider this framework well-suited for the evaluation of social phenomena in conversations and lay out future research directions.
We also publicly release all freshly curated chat logs between the LLM and evaluated chatbots, as well as all additional annotations for the iEval and FED datasets created for this study.3

## Acknowledgements

This project has received funding from the Swiss National Science Foundation (Grant No. 200021_184602). The authors also express gratitude to Mohamed Elasfoury for the invaluable help with conducting the generalizability experiments.
2305.12406
Intensity statistics inside an open wave-chaotic cavity with broken time-reversal invariance
Using the supersymmetric method of random matrix theory within the Heidelberg approach framework we provide statistical description of stationary intensity sampled in locations inside an open wave-chaotic cavity, assuming that the time-reversal invariance inside the cavity is fully broken. In particular, we show that when incoming waves are fed via a finite number $M$ of open channels the probability density ${\cal P}(I)$ for the single-point intensity $I$ decays as a power law for large intensities: ${\cal P}(I)\sim I^{-(M+2)}$, provided there is no internal losses. This behaviour is in marked difference with the Rayleigh law ${\cal P}(I)\sim \exp(-I/\overline{I})$ which turns out to be valid only in the limit $M\to \infty$. We also find the joint probability density of intensities $I_1, \ldots, I_L$ in $L>1$ observation points, and then extract the corresponding statistics for the maximal intensity in the observation pattern. For $L\to \infty$ the resulting limiting extreme value statistics (EVS) turns out to be different from the classical EVS distributions.
Yan V. Fyodorov, Elizaveta Safonova
2023-05-21T09:07:02Z
http://arxiv.org/abs/2305.12406v2
# Intensity statistics inside an open wave-chaotic cavity with broken time-reversal invariance

###### Abstract

Using the supersymmetric method of random matrix theory within the Heidelberg approach framework we provide a statistical description of the stationary intensity sampled at locations inside an open wave-chaotic cavity, assuming that the time-reversal invariance inside the cavity is fully broken. In particular, we show that when incoming waves are fed via a finite number \(M\) of open channels the probability density \(\mathcal{P}(I)\) for the single-point intensity \(I\) decays as a power law for large intensities: \(\mathcal{P}(I)\sim I^{-(M+2)}\), provided there are no internal losses. This behaviour is in marked difference with the Rayleigh law \(\mathcal{P}(I)\sim\exp\bigl{(}-I/\bar{I}\bigr{)}\), which turns out to be valid only in the limit \(M\to\infty\). We also find the joint probability density of intensities \(I_{1},\ldots,I_{L}\) in \(L>1\) observation points, and then extract the corresponding statistics for the maximal intensity in the observation pattern. For \(L\to\infty\) the resulting limiting extreme value statistics (EVS) turns out to be very different from the classical EVS distributions.

## I Introduction

This work aims to contribute towards understanding the statistics of the intensity of a monochromatic wave field inside an irregularly shaped enclosure (cavity) which can be fed with incoming waves through \(M\) open channels (antennae); see the sketch in Fig. 1. According to the standard paradigm of Quantum Chaos, we assume that the shape of the enclosure ensures chaotic ergodization of the motion of a single classical particle in the same scattering domain.
In this ergodic situation universal properties of closed wave-chaotic systems can, following the famous Bohigas-Giannoni-Schmit (BGS) conjecture [1], be effectively modelled by replacing the microscopic system's Hamiltonian (or wave operator) with random matrices \(\hat{H}\) of big dimension \(N\gg 1\). The standard choice is to use three ensembles with Gaussian-distributed entries, GOE, GUE and GSE, composed of real symmetric, complex Hermitian and real quaternionic matrices, respectively, and labelled by the Dyson parameter \(\beta=1,2,4\). The choice \(\beta=1\) is used to describe time-reversal invariant systems, \(\beta=2\) corresponds to broken time-reversal symmetry, and \(\beta=4\) to systems with Kramers degeneracy of energy levels. The ensuing statistical characteristics of closed quantum-chaotic systems turn out to be universal, i.e. independent of microscopic details, when studied in energy/frequency intervals of length comparable to the typical distance \(\Delta\) between neighbouring energy/frequency levels. It is expected that essentially the same statistics should be observed in regularly-shaped cavities with a finite density of randomly placed scatterers inside, provided one neglects the effects of Anderson localization. The scale \(\Delta\) is assumed to be much smaller than the energy scale of the order of the inverse relaxation time \(t_{e}\) ensuring full ergodization in the chaotic enclosure. In the context of systems with disorder such a time is controlled by classical diffusion, and the corresponding energy scale is known as the Thouless energy. Although proving the BGS conjecture remains one of the great challenges in Mathematics, see e.g. [2], its validity at the level of Theoretical Physics is beyond any reasonable doubt, being supported by extensive numerics, as well as by elaborate field theory [3; 4] and semiclassical computations [5; 6].
Chaotic wave scattering in enclosures has been an object of intensive research effort extending over several decades, with applications to studies of compound nucleus scattering [7], transport properties in mesoscopic electronic systems [8] and, more recently, lasing [9] as well as manipulating light in complex media for energy deposition and imaging purposes [10].

Figure 1: A schematic sketch of chaotic wave scattering in a cavity, with \(\psi_{l}\), \(l=1,\ldots,M\) being the wave in the \(l\)th channel/antenna and \(x_{l}\) being the coordinate along the channel. An operator describing wave dynamics in the system decoupled from the channels/antennae is assumed to be effectively described by a large random matrix \(\hat{H}\). The \(M\times M\) unitary scattering matrix \(\hat{S}\) can be related to \(\hat{H}\) in the framework of the Heidelberg approach.

One of the central objects in both theory and experiments is the energy-dependent (or rather, in the classical wave scattering context, frequency-dependent) unitary scattering matrix (or simply \(\hat{S}-\)matrix) \(\hat{S}(E)\), whose elements describe the relationship between the vector \(\mathbf{a}=(a_{1},\dots,a_{M})\) of amplitudes of the \(M\) incoming waves in all open channels and the vector \(\mathbf{b}=(b_{1},\dots,b_{M})\) of the amplitudes of the outgoing waves. Since the scattering process is essentially random, the properties of the \(\hat{S}\)-matrix must be described using statistical language, i.e. probability distributions and correlation functions. In developing this description the Random Matrix Theory (RMT) plays the central role.
The modern use of RMT for describing chaotic wave scattering statistically goes back to the seminal work of the Heidelberg group [11], who suggested modelling the scattering matrix elements in the form \[\hat{S}_{cc^{\prime}}(E)=\delta_{cc^{\prime}}-2i\sum_{x,y=1}^{N}\hat{W}_{cx}^{*}\left[\frac{1}{E-\hat{\mathcal{H}}_{\text{eff}}}\right]_{xy}\hat{W}_{yc^{\prime}}, \tag{1}\] where \(\hat{\mathcal{H}}_{\text{eff}}=\hat{H}-i\sum_{c}\mathbf{w}_{c}\otimes\mathbf{w}_{c}^{\dagger}\), with \(\hat{H}\) being an \(N\times N\) random matrix replacing the true Hamiltonian of the closed cavity, and the energy-independent vectors of coupling amplitudes \(\mathbf{w}_{c}=(W_{c1},\dots,W_{cN})\) relate the \(N\) inner states in the chosen basis to the \(M\) open channels. Without loss of generality, one can take the vectors \(\mathbf{w}_{c}\) as fixed and orthogonal, satisfying \[\mathbf{w}_{c}^{\dagger}\mathbf{w}_{c^{\prime}}=\gamma_{c}\delta_{cc^{\prime}},\qquad\gamma_{c}>0\ \forall c=1,\dots,M. \tag{2}\] As a result, the parameters describing the scattering system in the universal limit are the \(M\) coupling constants \(g_{c}=\frac{1}{2\pi\rho}(\gamma_{c}+\gamma_{c}^{-1})\in[1,\infty),\,c=1,\dots,M\), with \(\rho\) standing for the mean eigenvalue density. The coupling constants describe how well a given channel is coupled to the cavity, with \(g_{c}=1\) corresponding to the best possible (="perfect") coupling and the limit \(g_{c}\rightarrow\infty\) corresponding to a channel which is closed for incoming and outgoing waves. Such a choice of model provides the most convenient framework for studying statistics of the scattering matrix on small energy/frequency scales, comparable with the separation \(\Delta\) between neighbouring resonant frequencies/energy levels of the closed system, by utilizing the powerful supersymmetry approach developed earlier by Efetov [12] in the context of disordered electronic systems.
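The Heidelberg construction of Eqs. (1)-(2) is easy to probe numerically. The sketch below (assuming Python with numpy) draws a GUE-like \(\hat{H}\), builds orthogonal coupling vectors with prescribed norms \(\gamma_c\), and checks that the resulting \(\hat{S}(E)\) is unitary in the absence of internal losses; the matrix size, energy, and couplings are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 60, 3                      # inner-space dimension, number of open channels

# GUE-like Hermitian Hamiltonian for the closed cavity
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2

# Orthogonal coupling vectors w_c with norms gamma_c, cf. Eq. (2):
# columns of an orthonormal frame scaled by sqrt(gamma_c)
gamma = np.array([1.0, 1.5, 2.0])
Q, _ = np.linalg.qr(rng.normal(size=(N, M)))
W = Q * np.sqrt(gamma)

# S-matrix of Eq. (1): S = 1 - 2i W^+ (E - H_eff)^{-1} W,  H_eff = H - i W W^+
E = 0.1
H_eff = H - 1j * (W @ W.conj().T)
S = np.eye(M) - 2j * (W.conj().T @ np.linalg.solve(E * np.eye(N) - H_eff, W))

# Flux conservation: without internal losses S must be unitary
unitarity_defect = np.max(np.abs(S.conj().T @ S - np.eye(M)))
print(unitarity_defect)           # small, limited only by numerical precision
```

Adding a positive imaginary part to \(E\) (uniform absorption, cf. Remark 2 below) would destroy this unitarity, which is exactly the mechanism used later to model losses.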
Over the years it has allowed explicit computation of many statistical characteristics of the \(\hat{S}-\)matrix and other closely related objects, see e.g. [13; 14; 15; 16] and references therein. Nowadays the model experimental setups used to test the RMT-based theoretical predictions are mainly systems of classical waves (acoustic or electromagnetic) scattered from specially built resonators, shaped in the form of so-called chaotic billiards or/and with added scatterers inside, see e.g. [17; 18]. Under appropriate conditions, the associated Helmholtz equation for the electric field strength is scalar and mathematically identical to the two-dimensional Schrödinger equation of a particle elastically reflected by the contour of the microwave resonator, i.e., of a quantum billiard. Alternatively, experiments on chaotic wave scattering are performed on systems built with microwave graphs, see e.g. [19]. Whereas a lot of effort has been devoted to the study of transmission and reflection of waves, which pertains to measuring the wave field _outside_ the scattering medium (or at its boundary with the external world), an interesting question is also to understand the statistics of wave patterns _inside_ the chaotic enclosure. This question is especially natural in view of the growing interest in various aspects of coherent manipulation of wave propagation in complex media for imaging, light storage, electromagnetic compatibility tests etc., see e.g. [10; 20; 21; 22; 23; 24] and references therein. The study of the statistics of radiation intensity in random media has a long history. In particular, it has been suggested to model the wave pattern as a random superposition of running plane waves with complex coefficients [25; 26]: \[u(\mathbf{r})=\sum_{\mathbf{k}}a(\mathbf{k})e^{i\mathbf{k}\mathbf{r}} \tag{3}\] where all wavevectors \(\mathbf{k}\) have the same length, while the amplitudes \(a(\mathbf{k})\) are chosen as random Gaussian complex numbers.
While in closed systems with preserved time-reversal invariance one has to assume \(a^{*}(\mathbf{k})=a(-\mathbf{k})\), the correlations between \(a(\mathbf{k})\) and \(a(-\mathbf{k})\) gradually diminish with increasing degree of openness of the scattering system. The simplest prediction of such a model is a one-parameter family of possible distributions for the point intensity \(I=|u(\mathbf{r})|^{2}\), reducing to the simple exponential/Rayleigh distribution \(\mathcal{P}(I)\propto e^{-I/\mathcal{I}}\) for completely uncorrelated \(a(\mathbf{k})\) and \(a(-\mathbf{k})\). Despite favourably agreeing with some experimental results [27], the use of the simple Gaussian model Eq. (3) looks largely phenomenological, and certainly calls for a proper microscopic justification. Motivated by this, the present paper aims to investigate the statistics of the intensity of the wave field at a given point \(\mathbf{r}\) inside a chaotic cavity, relying on the same assumptions as the RMT-based model Eq. (1) for the \(\hat{S}-\)matrix. We will demonstrate that the framework of the Heidelberg approach makes it possible to derive \(\mathcal{P}(I)\) for any fixed number \(M\) of open channels without further assumptions, at least in the simplest case of chaotic systems with fully broken time-reversal invariance. The latter is described at the level of RMT by \(\hat{H}\) taken from the Gaussian Unitary Ensemble. We note that for such systems the eigenfunctions of the closed system are already complex, with independent, identically distributed real and imaginary parts. Hence, when applying the Gaussian wave ansatz Eq. (3) to an open system with broken time-reversal invariance, one would naturally expect that \(a(\mathbf{k})\) and \(a(-\mathbf{k})\) are uncorrelated, implying the Rayleigh law as the reference for the intensity distribution. We will indeed see how such a law emerges in the limit of a very open system, with the number of scattering channels tending to infinity.
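The random-wave ansatz of Eq. (3) with uncorrelated complex Gaussian amplitudes can be sampled directly to see the Rayleigh law emerge; a sketch assuming Python with numpy, with the number of plane waves, normalization, and sampling box chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_waves, n_points = 200, 10000

# 2D wavevectors of fixed unit length and uncorrelated complex Gaussian a(k),
# normalised so that the mean intensity is of order one
theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)
k = np.stack([np.cos(theta), np.sin(theta)], axis=1)
a = (rng.normal(size=n_waves) + 1j * rng.normal(size=n_waves)) / np.sqrt(2 * n_waves)

# Sample u(r) of Eq. (3) at random observation points and form I = |u|^2
r = rng.uniform(0.0, 50.0, size=(n_points, 2))
u = np.exp(1j * (r @ k.T)) @ a
I = np.abs(u) ** 2

# For an exponential (Rayleigh) law the mean and standard deviation coincide
print(I.mean(), I.std())
```

Enforcing \(a^{*}(\mathbf{k})=a(-\mathbf{k})\) instead would reproduce the time-reversal-invariant (Porter-Thomas-type) statistics rather than the Rayleigh law.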
However, for any finite number of channels the ensuing distribution \(\mathcal{P}(I)\) of the local intensity is found to be very different and shows a power-law rather than exponential decay. As scattering systems with broken time-reversal invariance are now routinely realized both in "billiard"-type scattering experiments [28; 29; 30; 31; 32] and in chaotic scattering in microwave graphs, see e.g. [33; 34], one may expect that the predicted behaviour will eventually be tested experimentally.

## II Formulation of the problem and the main results

We recall that the incoming waves are fed into the cavity via \(M\) channels \(c=1,\ldots,M\), with amplitudes given by the vector \(\mathbf{a}=(a_{1},\ldots,a_{M})\). This creates a field inside the cavity which we think of as a vector \(\mathbf{u}\) in the \(N-\)dimensional inner Hilbert space. In particular, for our purpose it is convenient to use the position basis \(|\mathbf{r}\rangle\), associated with an appropriate coordinate system inside the cavity domain, so that the quantities \(u(\mathbf{r})\equiv\langle\mathbf{r}|\mathbf{u}\rangle\) give precisely the amplitude of the wave at a point \(\mathbf{r}\) inside the cavity. The corresponding intensity is given by \(I_{\mathbf{r}}=|u(\mathbf{r})|^{2}\). In the framework of the Heidelberg model one can relate the vector \(\mathbf{u}\) at a given value of the energy/frequency to the scattering matrix as (see e.g. Eq. (27) in [13]) \[\mathbf{u}=\frac{1}{2}\frac{1}{E-\hat{H}}\hat{W}\left(\hat{\mathbf{1}}_{M}+\hat{S}\right)\mathbf{a} \tag{4}\] where \(\hat{\mathbf{1}}_{M}\) stands for the identity matrix and \(\hat{H}\) is the random matrix representing the inner Hamiltonian, while \(\hat{W}\) is the matrix whose \(M\) columns are the channel vectors \(\mathbf{w}_{c}\).
Further using an equivalent form of the scattering matrix given by \[\hat{S}=\left(\hat{\mathbf{1}}_{M}-i\hat{K}\right)\times\left(\hat{\mathbf{1}}_{M}+i\hat{K}\right)^{-1},\quad\hat{K}=\hat{W}^{\dagger}\frac{1}{E-\hat{H}}\hat{W} \tag{5}\] one can bring Eq. (4) to another well-known form, cf. e.g. Eq. (38) in [35], conveniently written in bra-ket notation as \[|\mathbf{u}\rangle=\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{w}_{\mathbf{a}}\right\rangle,\quad|\mathbf{w}_{\mathbf{a}}\rangle\equiv\sum_{c=1}^{M}\,a_{c}\left|\mathbf{w}_{c}\right\rangle \tag{6}\] and implying for the intensity the representation \[I_{\mathbf{r}}=\left\langle\mathbf{r}\right|\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{w}_{\mathbf{a}}\right\rangle\left\langle\mathbf{w}_{\mathbf{a}}\right|\frac{1}{E-\hat{H}-i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{r}\right\rangle. \tag{7}\] This formula is the starting point of our calculation of the probability density \(\mathcal{P}(I)\) for the single-point intensity \(I=I_{\mathbf{r}}\). Relegating the technical part of the calculation, largely inspired by the methods of [15; 16], to the body of the paper, we start with presenting and discussing our main results below.

### Single-point intensity distribution

Given the set of coupling parameters \(g_{c}\geq 1,\,c=1,\ldots,M\), define for a given \(I>0\) the parameter \(\lambda_{1}>1\) as the (unique) solution of the equation \[I=\frac{\lambda_{1}-1}{2}\sum_{c=1}^{M}|a_{c}|^{2}\left(1-\frac{g_{c}-1}{\lambda_{1}+g_{c}}\right). \tag{8}\] The existence and uniqueness of the solution follow from the fact that the right-hand side of Eq. (8) is a positive, monotonically increasing function of \(\lambda_{1}\) on the whole range \(\lambda_{1}\in[1,\infty)\), growing to infinity, and vanishing at \(\lambda_{1}=1\).
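Since the right-hand side of Eq. (8) is monotonically increasing in \(\lambda_1\), the defining equation can be solved by simple bisection; the sketch below (plain Python, for illustration) also cross-checks the single-channel closed form \(\lambda_1 = I+\sqrt{1+2g_1 I+I^2}\) of Eq. (12) below.

```python
def solve_lambda1(I, g, a2, tol=1e-12):
    """Solve Eq. (8) for lambda_1 >= 1 by bisection.

    g  -- list of coupling constants g_c
    a2 -- list of channel weights |a_c|^2
    """
    def rhs(lam):
        return (lam - 1.0) / 2.0 * sum(
            w * (1.0 - (gc - 1.0) / (lam + gc)) for gc, w in zip(g, a2))

    lo, hi = 1.0, 2.0
    while rhs(hi) < I:        # the RHS grows without bound, so this terminates
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rhs(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Cross-check against the single-channel closed form of Eq. (12)
I0, g1 = 0.7, 1.3
lam = solve_lambda1(I0, [g1], [1.0])
print(abs(lam - (I0 + (1.0 + 2.0 * g1 * I0 + I0 ** 2) ** 0.5)) < 1e-9)  # True
```

For a single channel the right-hand side reduces to \((\lambda_1^2-1)/(2(\lambda_1+g_1))\), whose quadratic root is exactly Eq. (12), so the agreement is a genuine consistency check rather than a coincidence.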
The intensity distribution at a single spatial point is then characterized by the probability density given explicitly in terms of \(\lambda_{1}\) as \[\mathcal{P}_{M}(I)=\frac{d}{dI}I\frac{d}{dI}\mathcal{F}_{M}(I),\quad\text{with }\mathcal{F}_{M}(I)=\sum_{c=1}^{M}|a_{c}|^{2}\mathcal{F}_{c}(I), \tag{9}\] with \(\mathcal{F}_{c}(I)\) given by \[\mathcal{F}_{c}(I)=\frac{\lambda_{1}-1}{\left(2I+(\lambda_{1}-1)^{2}\sum\limits_{i=1}^{M}|a_{i}|^{2}\frac{g_{i}-1}{(\lambda_{1}+g_{i})^{2}}\right)\prod\limits_{j=1}^{M}(\lambda_{1}+g_{j})}\] \[\times\int_{-1}^{1}\,d\lambda_{2}\,\,\frac{\lambda_{2}+\tilde{g}_{c}}{\lambda_{1}-\lambda_{2}}\prod\limits_{k\neq c}^{M}(\lambda_{2}+g_{k}) \tag{10}\] where we introduced the notation \[\tilde{g}_{c}=\frac{1+g_{c}\lambda_{1}}{g_{c}+\lambda_{1}}. \tag{11}\] There are two special cases when the solution to Eq. (8) can be written explicitly. The first pertains to the situation when the incoming wave is incident only via a single channel, which we choose to correspond to the channel index \(c=1\), whereas all other channels with \(2\leq c\leq M\) may only support outgoing waves.
Indeed, setting \(a_{1}=1\) for simplicity, and \(a_{c}=0,\,\forall c=2,\ldots,M\), we see that Eq. (8) becomes quadratic and one immediately finds that \[\lambda_{1}=I+\sqrt{1+2g_{1}I+I^{2}}, \tag{12}\] which after some manipulations allows one to show that \[\tilde{g}_{1}=-I+\sqrt{1+2g_{1}I+I^{2}} \tag{13}\] and \[\frac{\lambda_{1}+g_{1}}{\lambda_{1}-1}=\frac{I+1+\sqrt{1+2g_{1}I+I^{2}}}{2I}\] which further implies \[\frac{\lambda_{1}+g_{1}}{\lambda_{1}-1}\left(2I+(\lambda_{1}-1)^{2}\frac{g_{1}-1}{(\lambda_{1}+g_{1})^{2}}\right)=2\sqrt{1+2g_{1}I+I^{2}} \tag{14}\] Correspondingly, Eq. (10) takes a very explicit and rather elegant form \[{\cal F}_{M}(I)=\frac{1}{2\sqrt{1+2g_{1}I+I^{2}}}\frac{1}{\prod_{j=2}^{M}(I+\sqrt{1+2g_{1}I+I^{2}}+g_{j})} \tag{15}\] \[\times\int_{-1}^{1}\,d\lambda_{2}\,\,\frac{\lambda_{2}-I+\sqrt{1+2g_{1}I+I^{2}}}{I+\sqrt{1+2g_{1}I+I^{2}}-\lambda_{2}}\prod_{k=2}^{M}(\lambda_{2}+g_{k})\] The integral, and hence the probability density for the intensity \(I\), can be evaluated in closed form for any coupling strengths, but the general results are quite cumbersome. In the simplest case of a single open channel one gets \[{\cal P}_{M=1}(I)=\frac{1}{(1+2g_{1}I+I^{2})^{3/2}} \tag{16}\] \[\times\left(2g_{1}-3(g_{1}^{2}-1)\frac{I}{1+2g_{1}I+I^{2}}\right)\] and for the two-channel case the cumulative distribution of intensities is given by \[\int_{I}^{\infty}{\cal P}_{M=2}(\widetilde{I})\,d\widetilde{I}=-\frac{g_{2}I(\lambda_{1}+g_{1})}{(\lambda_{1}+g_{2})^{2}(1+2g_{1}I+I^{2})}\] \[+\frac{1}{(1+2g_{1}I+I^{2})^{1/2}}\left(1-\frac{2I(\lambda_{1}+g_{1})}{(\lambda_{1}+g_{2})^{2}}\right) \tag{17}\] \[-\frac{1}{(1+2g_{1}I+I^{2})^{3/2}}\frac{g_{2}I(I+g_{1})}{\lambda_{1}+g_{2}},\] with \(\lambda_{1}\) defined in Eq. (12). The second special case corresponds to all scattering channels being of equal strength: \(g_{c}=g\geq 1\) for all \(c=1,\ldots,M\).
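Before turning to the equal-coupling case, the single-channel density Eq. (16) admits a quick numerical sanity check: it should integrate to one over \(I\in[0,\infty)\). A sketch assuming Python with numpy, truncating the \(I^{-3}\) tail at a large cutoff:

```python
import numpy as np

def p_one_channel(I, g1):
    """Single open channel intensity density, Eq. (16)."""
    D = 1.0 + 2.0 * g1 * I + I ** 2
    return (2.0 * g1 - 3.0 * (g1 ** 2 - 1.0) * I / D) / D ** 1.5

g1 = 1.5
grid = np.linspace(0.0, 2000.0, 2_000_001)
y = p_one_channel(grid, g1)
mass = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid)))  # trapezoid rule
print(mass)   # close to 1; the mass missing beyond the cutoff is ~ g1/2000^2
```

The small deficit from unity is consistent with the \(\mathcal{P}(I)\sim 2g_1 I^{-3}\) tail cut off at the upper integration limit.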
Defining the total incoming flux in all channels as \[{\cal I}=\sum_{c=1}^{M}|a_{c}|^{2} \tag{18}\] and further introducing the ratio \(J=I/{\cal I}\), one finds that \[\lambda_{1}=J+\sqrt{1+2g\,J+J^{2}}, \tag{19}\] implying that again \(\tilde{g}=-J+\sqrt{1+2gJ+J^{2}}\), and further \[{\cal F}_{M}(I)=\frac{1}{2\sqrt{1+2gJ+J^{2}}}\] \[\times\int_{-1}^{1}\,\frac{\lambda_{2}-J+\sqrt{1+2gJ+J^{2}}}{J+\sqrt{1+2gJ+J^{2}}-\lambda_{2}} \tag{20}\] \[\times\left(\frac{\lambda_{2}+g}{J+\sqrt{1+2gJ+J^{2}}+g}\right)^{M-1}\,d\lambda_{2}\,.\] The integral can be evaluated in closed form; as we have already considered the \(M=1\) case above, we may assume \(M\geq 2\) and find \[{\cal F}_{M}(I)=-\ln\frac{\lambda_{1}-1}{\lambda_{1}+1}-\frac{1}{\sqrt{1+2gJ+J^{2}}}\] \[+\sum_{p=0}^{M-2}\left(\begin{array}{c}M-1\\ p+1\end{array}\right)\frac{(-1)^{p}}{(\lambda_{1}+g)^{p+1}}f_{p}(I)\,, \tag{21}\] where we defined \[f_{p}(I)=\frac{1}{2(p+2)}\frac{(\lambda_{1}+1)^{p+2}-(\lambda_{1}-1)^{p+2}}{\sqrt{1+2gJ+J^{2}}} \tag{22}\] \[-\frac{1}{(p+1)}\left((\lambda_{1}+1)^{p+1}-(\lambda_{1}-1)^{p+1}\right),\] with \(\lambda_{1}\) defined in Eq. (19). The probability density \({\cal P}_{M}(I)\) is then obtained by substituting Eq. (21) into Eq. (9). The most elegant result emerges if all channels are perfectly coupled, with \(g=1\) implying \(\lambda_{1}=2J+1\). After some algebra we get in that case \[{\cal F}_{M}(I)=-\ln\frac{J}{J+1}-\sum_{p=1}^{M}\left(\begin{array}{c}M\\ p\end{array}\right)\frac{(-1)^{p}}{p}\left[\left(\frac{J}{J+1}\right)^{p}-1\right], \tag{23}\] and after substituting into Eq. (9) the probability density for the intensity \(I\) takes an especially simple form: \[{\cal P}_{M}(I)=(M+1)\frac{{\cal I}^{M+1}}{(I+{\cal I})^{M+2}}. \tag{24}\] In fact, for any coupling the tail behaviour can easily be extracted from Eqs. (9) and (20) and has the same power-law form: letting \(I\rightarrow\infty\) at fixed \(g\) one finds the tail \({\cal P}(I)\sim I^{-(M+2)}\).
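Both the normalization of Eq. (24) and its large-\(M\) Rayleigh limit at fixed flux per channel (cf. Eq. (25) below) can be checked numerically; evaluating the density in log-space keeps it finite for large \(M\). A plain-Python sketch:

```python
import math

def p_perfect(I, M, flux):
    """Eq. (24) for M perfectly coupled channels with total incoming flux
    'flux', evaluated in log-space so that large M stays finite."""
    return math.exp(math.log(M + 1.0) + (M + 1.0) * math.log(flux)
                    - (M + 2.0) * math.log(I + flux))

# Fix the flux per channel at Ibar = 1 and let M grow: the density at I = 2
# should approach the Rayleigh value exp(-2)
for M in (1, 10, 100, 1000):
    print(M, p_perfect(2.0, M, M * 1.0))
print(math.exp(-2.0))
```

The same function also exposes the power-law tail directly: at fixed \(M\) and flux, doubling a large \(I\) suppresses the density by the factor \(2^{-(M+2)}\) predicted by \(\mathcal{P}(I)\sim I^{-(M+2)}\).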
We conclude that for any finite number of channels the ensuing power-law-tailed distribution is quite different from the Rayleigh-law prediction of the "Gaussian random wave" model. Note however that setting in Eq.(24) the number of channels to infinity in such a way that the incoming flux per channel remains finite, \(\lim_{M\rightarrow\infty}{\cal I}/M=\overline{I}<\infty\), restores the Rayleigh law: \[\lim_{M\rightarrow\infty}{\cal P}_{M}(I)=\frac{1}{\overline{I}}e^{-I/\overline{I}}. \tag{25}\] This fact supports the view that the Gaussian wave model is asymptotically accurate if the scattering system is open in an essentially semiclassical way, with many incoming channels each supporting a finite flux. A few remarks are now in order. **Remark 1**. The one-point intensity distribution presented in Eqs.(9)-(10) has been obtained under the physical assumption that the observation point location \(\mathbf{r}\) is chosen "far enough" (much further away than the wavelength at a given energy/frequency) from the point of the attached antenna/channel. Mathematically this condition has been implemented by considering the values of all scalar products \(\left\langle r|\mathbf{w}_{c}\right\rangle\) to be negligible in comparison with the norms \(\gamma_{c}=|\mathbf{w}_{c}|^{2}\) for every \(c=1,\ldots,M\). **Remark 2**. In the course of the derivation it has also been assumed that no irreversible losses of flux occur inside the cavity domain. In real microwave experiments this is hardly a realistic assumption, unless the resonator walls are made of superconducting material, as e.g. in [32]. It is however well-known how to account for uniform absorption in the cavity walls in the framework of the Heidelberg approach, see e.g. [14; 36]. The idea is that absorption can be treated as a loss of flux into a multitude of unobserved open channels, very weakly coupled to the cavity. 
To this end, one can add to the \(M\) observed channels a large number \(\tilde{M}\gg 1\) of channels, numbered by channel indices \(c=M+1,\ldots,M+\tilde{M}\), all with the same coupling strength, \(g_{M+1}=\ldots=g_{\tilde{M}+M}:=g_{a}\), and consider the limits \(\tilde{M}\rightarrow\infty\) and \(g_{a}\rightarrow\infty\) while keeping \(\tilde{M}/g_{a}=\epsilon\) fixed. It is easy to check that the result of this procedure amounts to adding to the integrand, in expressions like Eq.(10), an extra factor \(e^{-\epsilon(\lambda_{1}-\lambda_{2})}\). The dimensionless parameter \(\epsilon\) should then be interpreted as the effective rate of absorption. An alternative procedure for arriving at the same result amounts to adding a small positive imaginary part to the energy \(E\) in the formulation of the Heidelberg model, see Eq.(1) or its equivalent formulation Eq.(5). The loss of S-matrix unitarity which such a procedure entails accounts for the irreversible loss of incoming flux in the cavity. One then finds that at the level of final formulas the net result is exactly the same factor \(e^{-\epsilon(\lambda_{1}-\lambda_{2})}\) in the integrands, with the parameter \(\epsilon\) given by the ratio of the imaginary part to the mean level spacing \(\Delta\). This fact shows the equivalence of the two methods. It is easy to see that the additional exponential factor in the integrand immediately converts the most distant tails of the intensity distribution \(\mathcal{P}(I)\) from power-law to exponential ones, so that the power-law behaviour can be observed only in a finite interval of intensities \(1\ll I/\mathcal{I}\ll\epsilon^{-1}\). **Remark 3**. If the incoming flux is nonvanishing only in a single channel \(c=1\), the density Eq.(16) coincides with the \(M=1\) case of the distribution of the photodissociation cross-section \(\sigma(E)\) in quantum chaotic systems studied in [37], see eq. (18) in that paper. Such a coincidence is not at all accidental and is to be expected. 
Namely, in the Heidelberg approach the cross-section can be represented as \(\sigma(E)\propto\mathrm{Im}\left\langle m\right|\left(E-\hat{\mathcal{H}}_{\mathrm{eff}}\right)^{-1}\left|m\right\rangle\), with \(\hat{\mathcal{H}}_{\mathrm{eff}}\) defined after Eq.(1) and \(\left|m\right\rangle\) being a fixed nonrandom vector related to a dipole moment operator. On the other hand, for \(M=1\) one may use the identity \[\left|\mathbf{w_{a}}\right\rangle\left\langle\mathbf{w_{a}}\right|=\left|a_{1}\right|^{2}\left|\mathbf{w}_{1}\right\rangle\left\langle\mathbf{w}_{1}\right|\] \[=\frac{\left|a_{1}\right|^{2}}{2i}\left(\left(E-\hat{\mathcal{H}}_{\mathrm{eff}}\right)-\left(E-\hat{\mathcal{H}}_{\mathrm{eff}}\right)^{\dagger}\right), \tag{26}\] which, when substituted into Eq.(7), shows that the local intensity in this case is proportional to the diagonal element of the resolvent: \[I=\left|a_{1}\right|^{2}\mathrm{Im}\left\langle\mathbf{r}\right|\left(E-\hat{\mathcal{H}}_{\mathrm{eff}}\right)^{-1}\left|\mathbf{r}\right\rangle. \tag{27}\] We see that the intensity for \(M=1\) is indeed statistically equivalent to the photodissociation cross-section \(\sigma(E)\), up to a constant proportionality factor. If the incoming fluxes are nonvanishing in more than one channel, then \[\left|\mathbf{w_{a}}\right\rangle\left\langle\mathbf{w_{a}}\right|=\sum_{c}\left|a_{c}\right|^{2}\left|\mathbf{w}_{c}\right\rangle\left\langle\mathbf{w}_{c}\right|+\sum_{c,c^{\prime}}\overline{a}_{c^{\prime}}a_{c}\left|\mathbf{w}_{c}\right\rangle\left\langle\mathbf{w}_{c^{\prime}}\right|\] and is never proportional to \[\sum_{c}\left|\mathbf{w}_{c}\right\rangle\left\langle\mathbf{w}_{c}\right|=\frac{1}{2i}\left(\left(E-\hat{\mathcal{H}}_{\mathrm{eff}}\right)-\left(E-\hat{\mathcal{H}}_{\mathrm{eff}}\right)^{\dagger}\right). 
\tag{28}\] As a result, the correspondence between the diagonal part of the resolvent and the local intensity is broken, hence for \(M>1\) the distribution in Eq.(24) is different from the corresponding distribution for the normalized cross-section \(q=\frac{\sigma}{\left\langle\sigma\right\rangle}\) at perfect coupling, derived originally in [38]. The latter can be given for systems of any symmetry \(\beta=1,2,4\) as \[\mathcal{P}_{M}(q)\propto\frac{q^{\frac{\beta M}{2}-1}}{(1+q)^{\beta M+1}}. \tag{29}\] Note however that for \(\beta=2\) the same tail behaviour is shared by Eq.(24) and by Eq.(29). **Remark 4**. Finally, in the case when the waves are fed via a single channel \(c=1\) one may consider the limit \(g_{1}\gg g_{c},\forall c=2,\ldots,M\), describing the case of extremely weak coupling of the feeding channel. In such a limit the point intensity studied in this paper should coincide, after appropriate normalization, with the so-called "transmitted power", whose distribution for \(\beta=2\) chaotic systems was recovered in the Heidelberg approach framework in [39] by the method of moments. One can indeed check that our Eq.(15) in this limiting case reproduces the distribution found in [39]. Following the same logic, one should expect Eq.(15) itself to be deducible as a limiting case from the distribution of the modulus of the off-diagonal element of the scattering matrix found in [15; 16]. We check in Appendix B that this is indeed the case. Let us however stress that (i) the distribution of intensity in the general case, Eq.(10), cannot be deduced in such a way and (ii) our computation, despite being inspired by [15; 16], was implemented somewhat differently, which helped us to arrive at the final results in a rather economical way. 
### Joint probability distribution of intensities at several points Consider now a finite number \(L\ll N\) of observation points at locations \(\mathbf{r}_{1},\ldots\mathbf{r}_{L}\), each location being far enough both from each of the \(M\) antennae and from the other observation points. We have found that for the case of ergodic systems with broken time-reversal invariance the joint probability density of the corresponding intensities \(\mathcal{P}_{M}^{(L)}\left(I_{1},\ldots,I_{L}\right)\) is very simply related to the previously studied one-point density \(\mathcal{P}_{M}^{(1)}(I):=\mathcal{P}_{M}(I)\) via: \[\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})=(-1)^{L-1}\frac{d^{L-1}}{dI^{L-1}}\mathcal{P}_{M}(I)\mid_{I=I_{1}+\ldots+I_{L}}. \tag{30}\] With this relation it is then straightforward to calculate the probability density \(p_{M}\left(I_{\Sigma}\right)\) for the sum of the intensities \(I_{\Sigma}=I_{1}+\ldots+I_{L}\): \[p_{M}\left(I_{\Sigma}\right)=\frac{(-1)^{L-1}}{(L-1)!}\,I_{\Sigma}^{L-1}\frac{d^{L-1}}{dI^{L-1}}\mathcal{P}_{M}\left(I_{\Sigma}\right). \tag{31}\] In particular, in the case of perfectly coupled channels Eqs.(31) and (24) imply together: \[p_{M}\left(I_{\Sigma}\right)=\frac{(L+M)!}{M!(L-1)!}\,\left(1+\frac{\mathcal{I}}{I_{\Sigma}}\right)^{-L}\frac{\mathcal{I}^{M+1}}{I_{\Sigma}\left(I_{\Sigma}+\mathcal{I}\right)^{M+1}}. \tag{32}\] Introducing the intensity "per point" \(i_{\Sigma}=I_{\Sigma}/L\) one finds that this object has a finite limiting probability density as \(L\rightarrow\infty\): \[p_{M}\left(i_{\Sigma}\right)=\frac{1}{M!}\frac{\mathcal{I}^{M+1}}{i_{\Sigma}^{M+2}}\,e^{-(\mathcal{I}/i_{\Sigma})},\quad i_{\Sigma}=\lim_{L\rightarrow\infty}I_{\Sigma}/L. \tag{33}\] **Remark 5**. 
Summing up the intensities in Eq.(7) over _all_ \(N\) internal points in the cavity and using the completeness relation \(\sum_{\mathbf{r}}\left|\mathbf{r}\right\rangle\left\langle\mathbf{r}\right|=\hat{\mathbf{1}}_{N}\) one finds that \(\sum_{\mathbf{r}}I_{\mathbf{r}}=\mathbf{a}^{\dagger}\hat{\mathcal{Q}}\mathbf{a}\), where \[\hat{\mathcal{Q}}=\hat{W}^{\dagger}\frac{1}{E-\hat{H}-i\hat{W}\hat{W}^{\dagger}}\,\,\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\hat{W}. \tag{34}\] The \(M\times M\) matrix \(\hat{\mathcal{Q}}\) is one of the most important objects in scattering theory, known as the Wigner-Smith time delay matrix. Various aspects of its statistical properties in wave-chaotic systems have enjoyed intensive research over several decades, both in the framework of RMT, see the review [40] and refs. therein, as well as by semiclassical methods [41; 42; 43]. In particular, for perfect coupling in all channels the distribution of \(\hat{\mathcal{Q}}\) is known explicitly for all \(\beta=1,2,4\) [44], see also [45; 46] for non-perfect couplings. Combining that distribution with Eq.(34) it is easy to verify that \(\sum_{\mathbf{r}}I_{\mathbf{r}}/\mathcal{I}\), with \(\mathcal{I}\) defined in Eq.(18), is distributed in the same way as the diagonal entries \(\hat{\mathcal{Q}}_{cc}\) of the matrix \(\hat{\mathcal{Q}}\). In turn, for perfect coupling the latter are known to be distributed in the same way as _partial_ delay times \(\tau\) [45], whose probability density is explicitly given by \[p_{M}\left(t\right)=\frac{\left(\frac{\beta}{2}\right)^{\frac{\beta M}{2}+1}}{\Gamma(\frac{\beta M}{2}+1)}\frac{1}{t^{\frac{\beta M}{2}+2}}\,e^{-\frac{\beta}{2t}},\quad t=\tau\Delta \tag{35}\] where \(\Delta\sim N^{-1}\) is the mean level spacing in the closed cavity. 
We then see that the distribution Eq.(33) of the intensity per point \(i_{\Sigma}/\mathcal{I}\), considered in the limit of many observation points \(1\ll L\ll N\rightarrow\infty\), coincides for \(\beta=2\) with the distribution of the total scaled intensity \(\Delta\sum_{\mathbf{r}}I_{\mathbf{r}}/\mathcal{I}\), i.e. sampled across the whole cavity. This matching implies that the same result will be valid for the (properly scaled) sum of intensities over \(L\sim N^{\epsilon}\) points, for any \(0<\epsilon\leq 1\). **Remark 6**. For systems with preserved time-reversal invariance, with values \(\beta=1,4\), the problem of finding the full distribution of the local intensity \(I_{\mathbf{r}}\) for arbitrary coupling constants \(g_{c},c=1,\ldots,M\), and its further \(L\)-point generalizations, remains largely open, apart from the \(M=1\) and \(L=N\) cases, where distributions of partial time delays are known even at the crossover between \(\beta=1\) and \(\beta=2\), see [47]. However one may safely conjecture that the far tail for all these quantities should be universally given by \(\mathcal{P}(I\gg\mathcal{I})\sim I^{-\left(\frac{\beta M}{2}+2\right)}\), as this behaviour in all cases is expected to be controlled by the density of narrow resonances, see the discussion on p.1967 of [13]. ### Distribution of maximal and minimal intensities in the multipoint observation Having at our disposal the joint probability density \(\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})\) given by Eq.(30), one can pose the natural question of the distribution of the maximal and minimal values in the observed pattern: \[I_{\max}=\max(I_{1},\ldots,I_{L}),\quad I_{\min}=\min(I_{1},\ldots,I_{L}) \tag{36}\] Note that extreme values of the intensity field in chaotic reverberation chambers were studied experimentally in [48]. 
The joint probability (30) implies that intensities at different spatial points are in general correlated, apart from the single case when the one-point intensity is given by the exponential Rayleigh law \(\mathcal{P}_{M}(I)\propto e^{-I/\mathcal{I}}\). Thus the posed questions belong to the domain of extreme value statistics of many correlated variables, which has attracted a lot of attention in recent years, especially when \(L\rightarrow\infty\); see [49] for a review. Our case is special, as the joint probability density depends on all individual intensities only via their sum. Extreme value statistics for such a case has not been much studied, apart from a single special case in [50], which in our language would correspond to the particular choice \(\mathcal{P}^{(L)}(I_{1},\ldots,I_{L})\propto\delta(I_{1}+\ldots+I_{L})\). We therefore performed an analysis in some generality. After some computations, explained in detail in Section III.2, one finds the general relation in terms of the single-point density \({\cal P}_{M}(I)\): \[{\rm Prob}\left(I_{\rm max}<Y\right)=\sum_{l=0}^{L}(-1)^{l}\left(\begin{array}{c}L\\ l\end{array}\right)\int_{lY}^{\infty}{\cal P}_{M}(I)\,dI, \tag{37}\] whereas \({\rm Prob}\left(I_{\rm min}>Y\right)=\int_{LY}^{\infty}{\cal P}_{M}(I)\,dI\). 
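Relation Eq.(37) can be exercised numerically for the perfect-coupling density Eq.(24): the survival integral entering the \(I_{\min}\) statistics then evaluates to the closed form quoted in Eq.(39) below, and the binomial sum for \(I_{\max}\) behaves as a proper cumulative distribution. A short sanity check (assuming `numpy` and `scipy`; not part of the derivation):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

M, L, I_tot = 2, 4, 1.0

def p24(I):
    """Perfect-coupling one-point density, Eq.(24)."""
    return (M + 1) / I_tot * (1.0 + I / I_tot) ** (-(M + 2))

def survival(x):
    """Integral of Eq.(24) from x to infinity, entering Eq.(37)."""
    return quad(p24, x, np.inf)[0]

# minimum: Prob(I_min > Y) = survival(L*Y), which for Eq.(24) reduces
# to the closed form (1 + L*Y/I_tot)^(-(M+1)) of Eq.(39)
for Y in (0.1, 0.5, 2.0):
    assert abs(survival(L * Y) - (1.0 + L * Y / I_tot) ** (-(M + 1))) < 1e-8

# maximum: the binomial sum of Eq.(37) must increase from 0 towards 1
def cdf_max(Y):
    return sum((-1) ** l * comb(L, l) * survival(l * Y) for l in range(L + 1))

Ys = np.linspace(0.0, 50.0, 40)
vals = np.array([cdf_max(Y) for Y in Ys])
assert abs(vals[0]) < 1e-8 and vals[-1] > 0.99
assert np.all(np.diff(vals) > -1e-8)
```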
In particular, for the perfect coupling case one can use Eq.(24) and get \[{\rm Prob}\left(I_{\rm max}<Y\right)=\sum_{l=0}^{L}(-1)^{l}\left(\begin{array}{c}L\\ l\end{array}\right)\frac{1}{\left(1+l\frac{Y}{{\cal I}}\right)^{M+1}} \tag{38}\] and \[{\rm Prob}\left(I_{\rm min}>Y\right)=\frac{1}{\left(1+L\frac{Y}{{\cal I}}\right)^{M+1}} \tag{39}\] We see that in such a pattern of \(L\) observation points the typical minimal intensity scales as \(I_{\rm min}^{\rm typ}\sim{\cal I}L^{-1}\), and the limiting density of the variable \(\sigma_{\rm min}=L\frac{I_{\rm min}}{{\cal I}}\) is given by \(\rho\left(\sigma_{\rm min}\right)=(M+1)\left(1+\sigma_{\rm min}\right)^{-(M+2)}\), thus of the same form as the density of the one-point intensity. The statistics of \(I_{\rm max}\) is somewhat more interesting. To start with, consider the simplest case of the Rayleigh law \({\cal P}(I)=\frac{1}{\overline{I}}e^{-I/\overline{I}}\), obtained in the limit of many open channels keeping the incoming flux per channel finite: \(\lim\limits_{M\to\infty}{\cal I}/M=\overline{I}<\infty\), see Eq.(25). In this case it is easy to see that \[{\rm Prob}\left(I_{\rm max}<Y\right)=\left(1-e^{-Y/\overline{I}}\right)^{L}. \tag{40}\] Setting \(Y/\overline{I}=\ln L+q\) we then recover in the limit \(L\to\infty\) the Gumbel distribution: \[{\rm Prob}\left(I_{\rm max}<\overline{I}(\ln L+q)\right)=\exp\left(-e^{-q}\right) \tag{41}\] smoothly interpolating between zero at \(q\to-\infty\) and one for \(q\to\infty\). The Gumbel law is one of the classical Extreme Value Statistics (EVS) distributions, and is fully expected here as the intensities \(I_{l}\) at different points are uncorrelated. Note also that the threshold of extreme values is located to the leading order sharply at \(I_{\rm max}/\overline{I}=\ln L+o(1)\). 
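The convergence of Eq.(40) to the Gumbel law Eq.(41) is fast enough to be observed directly. A minimal numerical illustration (assuming `numpy`):

```python
import numpy as np

def cdf_max_rayleigh(q, L):
    """Prob(I_max < Ibar*(ln L + q)) for L independent Rayleigh intensities, Eq.(40)."""
    return (1.0 - np.exp(-(np.log(L) + q))) ** L

q = np.linspace(-2.0, 4.0, 100)
gumbel = np.exp(-np.exp(-q))  # limiting Gumbel law, Eq.(41)
for L in (10**4, 10**6):
    assert np.max(np.abs(cdf_max_rayleigh(q, L) - gumbel)) < 1e-3
```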
Turning our attention now to the case of a finite number of channels, Eq.(38) can be alternatively represented as \[{\rm Prob}\left(I_{\rm max}<Y\right)=\frac{1}{M!}\int_{0}^{\infty}dv\,v^{M}e^{-v}\left(1-e^{-v\frac{Y}{{\cal I}}}\right)^{L}. \tag{42}\] Setting in the last expression \(Y=\sigma_{\rm max}{\cal I}\ln L\) and considering \(\sigma_{\rm max}>0\) fixed as \(L\to\infty\), one first notices that \[\lim_{L\to\infty}\left(1-e^{-\sigma_{\rm max}v\ln L}\right)^{L}=\left\{\begin{array}{ll}0,&\mbox{if }0<v<\sigma_{\rm max}^{-1}\\ 1,&\mbox{if }v>\sigma_{\rm max}^{-1}\end{array}\right..\] It is now straightforward to see that the typical maximal intensity in a big pattern scales logarithmically with the number \(L\) of observation points, \(I_{\rm max}^{\rm(typ)}\sim{\cal I}\ln L\), and the limiting distribution for the properly rescaled maximal intensity is given by: \[\lim_{L\to\infty}{\rm Prob}\left(I_{\rm max}<{\cal I}\ln L\,\sigma_{\rm max}\right)\] \[=\frac{1}{M!}\int_{0}^{\sigma_{\rm max}}\frac{dt}{t^{M+2}}\exp\left(-\frac{1}{t}\right). \tag{43}\] To make contact with the previously considered Rayleigh case, one may consider the \(M\to\infty\) limit in Eq.(43). Recalling that in this limit we assume \({\cal I}=M\overline{I}\), we further introduce \(\sigma_{\rm max}M=\overline{\sigma_{\rm max}}\) and assume it to remain finite in the limit \(M\to\infty\). 
We also rescale \(t=\tau/M\) which gives: \[\lim_{M\to\infty}\lim_{L\to\infty}{\rm Prob}\left(I_{\rm max}<\overline{\sigma _{\rm max}}\,\overline{I}\,\ln L\right)\] \[=\lim_{M\to\infty}\frac{M^{M+1}}{M!}\int_{0}^{\overline{\sigma_{\rm max}}}d \tau e^{-M\left(\ln\tau+\frac{1}{\tau}\right)}, \tag{44}\] which upon evaluating the integral by the Laplace method yields \[\lim_{M\to\infty}\lim_{L\to\infty}{\rm Prob}\left(I_{\rm max}<\overline{ \sigma_{\rm max}}\,\overline{I}\,\ln L\right)\] \[=\left\{\begin{array}{ll}1&\mbox{if }\overline{\sigma_{\rm max}}>1\\ 1/2&\mbox{if }\overline{\sigma_{\rm max}}=1\\ 0&\mbox{if }\overline{\sigma_{\rm max}}<1\end{array}\right.. \tag{45}\] This agrees with the fact that the threshold of extreme values in this case is sharply at \(I_{\rm max}=\overline{I}\,\ln L\), but such interchange of limits (first \(L\to\infty\), then \(M\to\infty\)) misses the fine-scale Gumbel distribution, replacing it by the step function. To improve on that one has to consider the following double scaling limit in Eq.(42): both \(M\) and \(L\) tending to infinity in such a way that \(\frac{\ln L}{\sqrt{M}}=c\), with \(c\in[0,\infty)\) kept constant, and also \(\lim\limits_{M\to\infty}{\cal I}/M=\overline{I}<\infty\). In such a limit one finds that \[\lim_{L\to\infty}{\rm Prob}\left(I_{\rm max}<\overline{I}\left(\ln L+q\right)\right)\] \[=\int_{-\infty}^{\infty}\frac{dw}{\sqrt{2\pi}}\,e^{-\frac{w^{2}}{2}}\exp\left( -e^{-q-cw}\right), \tag{46}\] providing a family of interpolating distributions and reducing to Gumbel for \(c=0\). 
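Incidentally, the equivalence of the binomial sum Eq.(38) and the integral representation Eq.(42) follows from expanding \(\left(1-e^{-vY/{\cal I}}\right)^{L}\) and integrating term by term; it is also easily confirmed numerically. A short check (assuming `numpy` and `scipy`; an illustration, not part of the derivation):

```python
import numpy as np
from math import comb, factorial
from scipy.integrate import quad

M, L, I_tot, Y = 2, 5, 1.5, 3.0

# binomial-sum form of the maximum CDF, Eq.(38)
cdf_sum = sum((-1) ** l * comb(L, l) / (1.0 + l * Y / I_tot) ** (M + 1)
              for l in range(L + 1))

# integral representation, Eq.(42)
cdf_int, _ = quad(lambda v: v ** M * np.exp(-v)
                  * (1.0 - np.exp(-v * Y / I_tot)) ** L / factorial(M),
                  0, np.inf)

assert abs(cdf_sum - cdf_int) < 1e-7
```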
Coming back to a finite number of channels \(M\), we may conclude from Eq.(43) that the probability density for the scaled maximal intensity \(\sigma_{\rm max}=\frac{I_{\rm max}}{{\cal I}\ln L}\) in a perfectly coupled system with broken time-reversal invariance converges, in the limit of many observation points, to \[\rho\left(\sigma_{\rm max}\right)=\frac{1}{M!\,\sigma_{\rm max}^{M+2}}\exp\biggl\{-\frac{1}{\sigma_{\rm max}}\biggr\}, \tag{47}\] hence follows the same law as the limiting intensity per point, Eq.(33), or partial delay times, Eq.(35). This type of extreme value statistics resembles the Fréchet law, but is different from it and does not seem to have appeared in the literature before. An interesting open question is to verify if this property still holds for systems with preserved time-reversal invariance. ## III Outline of the method and derivations of the main results. ### Distribution of one-point intensity To characterize the distribution of the one-point intensity \(I_{\mathbf{r}}\) we use the method of Laplace transform generating functions, and aim to calculate for \(p>0\) the function \[\mathcal{L}(p):=\left\langle e^{-pI_{\mathbf{r}}}\right\rangle=\left\langle e^{-pu^{*}(\mathbf{r})u(\mathbf{r})}\right\rangle, \tag{48}\] where \(u(\mathbf{r}):=\left\langle\mathbf{r}|\mathbf{u}\right\rangle\) is the amplitude of the wave at a point \(\mathbf{r}\) inside the cavity, and the angular brackets stand for averaging performed over the random matrix Hamiltonian \(\hat{H}\), assumed to be represented by a GUE matrix. 
As the first step of the evaluation, we find it expedient to use a variant of the Gaussian (Hubbard-Stratonovich) transformation, representing \(\mathcal{L}(p)\) as \[\left\langle e^{-pu^{*}(\mathbf{r})u(\mathbf{r})}\right\rangle=\int\frac{dq^{*}dq}{\pi}e^{-q^{*}q}\mathcal{R}(q,q^{*}), \tag{49}\] where we defined \[\mathcal{R}(q,q^{*}):=\left\langle e^{-i\sqrt{p}(q^{*}u(\mathbf{r})+qu^{*}(\mathbf{r}))}\right\rangle\,, \tag{50}\] which can be equivalently written as \[\mathcal{R}(q,q^{*})=\] \[=\left\langle e^{-i\sqrt{p}\left(q^{*}\langle\mathbf{r}|\frac{1}{E-\hat{H}+i\widehat{W}\widehat{W}^{\dagger}}|\mathbf{w}_{\mathbf{a}}\rangle+q\langle\mathbf{w}_{\mathbf{a}}|\frac{1}{E-\hat{H}-i\widehat{W}\widehat{W}^{\dagger}}|\mathbf{r}\rangle\right)}\right\rangle. \tag{51}\] We also recall a more general Gaussian identity \[\int e^{-\boldsymbol{z}^{\dagger}\hat{A}\boldsymbol{z}-\left(\boldsymbol{a}^{\dagger}\boldsymbol{z}+\boldsymbol{z}^{\dagger}\mathbf{b}\right)}d\boldsymbol{z}d\boldsymbol{z}^{\dagger}=\frac{\pi^{N}}{\det\hat{A}}\exp\left(\boldsymbol{a}^{\dagger}\hat{A}^{-1}\boldsymbol{b}\right) \tag{52}\] valid, as long as the integral over \(\boldsymbol{z}\in\mathbb{C}^{N}\) is convergent, for any \(N\times N\) matrix \(\hat{A}\) and any complex-valued \(N\)-component vectors \(\boldsymbol{a},\boldsymbol{b}\). Let us recall the definition: \[u(\mathbf{r}):=\left\langle\mathbf{r}|\,\frac{1}{E\boldsymbol{1}_{N}-\hat{H}+i\widehat{\Gamma}_{\epsilon}}\,|\mathbf{w}_{\mathbf{a}}\right\rangle, \tag{53}\] where we defined \[\hat{\Gamma}_{\epsilon}=\epsilon\boldsymbol{1}_{N}+\pi\sum_{c=1}^{M}\mathbf{w}_{c}\otimes\mathbf{w}_{c}^{\dagger},\quad\epsilon>0, \tag{54}\] with \(\epsilon>0\) being a regularization parameter, physically chosen proportional to the uniform absorption in the sample. 
Now we use Eq.(52) for \(\hat{A}=-i\left(E-\hat{H}+i\hat{\Gamma}_{\epsilon}\right)\) to see that \[e^{-i\sqrt{p}q^{*}u(\mathbf{r})}\propto\det\left(E\boldsymbol{1}_{N}-\hat{H}+i\hat{\Gamma}_{\epsilon}\right) \tag{55}\] \[\times\int\,d\boldsymbol{z}_{1}d\boldsymbol{z}_{1}^{\dagger}\,e^{i\boldsymbol{z}_{1}^{\dagger}\left(E\boldsymbol{1}_{N}-\hat{H}+i\hat{\Gamma}_{\epsilon}\right)\boldsymbol{z}_{1}-ip^{1/4}(q^{*}\langle\mathbf{r}|\boldsymbol{z}_{1}\rangle+\langle\boldsymbol{z}_{1}|\mathbf{w}_{\mathbf{a}}\rangle)}.\] Similarly, for \(\hat{A}=i\left(E-\hat{H}-i\hat{\Gamma}_{\epsilon}\right)\) we may see that \[e^{-i\sqrt{p}\,qu^{*}(\mathbf{r})}\propto\det\Bigl\{\left(E\boldsymbol{1}_{N}-\hat{H}-i\hat{\Gamma}_{\epsilon}\right)\Bigr\} \tag{56}\] \[\times\int\,d\boldsymbol{z}_{2}d\boldsymbol{z}_{2}^{\dagger}\,e^{-i\boldsymbol{z}_{2}^{\dagger}\left(E\boldsymbol{1}_{N}-\hat{H}-i\hat{\Gamma}_{\epsilon}\right)\boldsymbol{z}_{2}+ip^{1/4}(q\langle\mathbf{w}_{\mathbf{a}}|\boldsymbol{z}_{2}\rangle-\langle\boldsymbol{z}_{2}|\mathbf{r}\rangle)}.\] Note that here and below we systematically disregard the proportionality constants, restoring them in the final expressions by normalization conditions, and also find it convenient to use intermittently the bra-ket notation for the scalar products, e.g. \(\langle\boldsymbol{z}|\mathbf{w}_{\mathbf{a}}\rangle\equiv\boldsymbol{z}^{\dagger}\mathbf{w}_{\mathbf{a}}\). Another useful remark is that there is a certain freedom in choosing the arrangement of the variables \(q,q^{*}\) in front of the scalar products in the exponents, and we used it in two different ways in Eq.(55) and Eq.(56). This choice will be justified _a posteriori_ by a very essential simplification of the forthcoming calculations. 
On the other hand, the determinant factors can be represented as Gaussian integrals over anticommuting \(N\)-vectors \(\boldsymbol{\chi}_{\sigma}\) and \(\boldsymbol{\chi}_{\sigma}^{*}\) with \(\sigma=1,2\): \[\det\left(E-\hat{H}\pm i\hat{\Gamma}_{\epsilon}\right)=\int d\boldsymbol{\chi}_{\sigma}d\boldsymbol{\chi}_{\sigma}^{*}\exp\left(-i\boldsymbol{\chi}_{\sigma}^{\dagger}\left[E-\hat{H}\pm i\hat{\Gamma}_{\epsilon}\right]\boldsymbol{\chi}_{\sigma}\right)\] with no issues of convergence arising in this case by definition. It is convenient to combine vectors with commuting and anticommuting components in a single \(4N\)-dimensional supervector \(\boldsymbol{\Phi}\) defined as \[\boldsymbol{\Phi}=\left(\begin{array}{c}\boldsymbol{z}_{1}\\ \boldsymbol{\chi}_{1}\\ \boldsymbol{z}_{2}\\ \boldsymbol{\chi}_{2}\end{array}\right),\quad d\boldsymbol{\Phi}d\boldsymbol{\Phi}^{\dagger}=d\boldsymbol{z}_{1}d\boldsymbol{z}_{1}^{\dagger}d\boldsymbol{\chi}_{1}d\boldsymbol{\chi}_{1}^{\dagger}d\boldsymbol{z}_{2}d\boldsymbol{z}_{2}^{\dagger}d\boldsymbol{\chi}_{2}d\boldsymbol{\chi}_{2}^{\dagger}, \tag{57}\] and also introduce the supermatrices \(\hat{L}=\operatorname{diag}\left(1,1,-1,1\right),\quad\hat{\Lambda}=\operatorname{diag}\left(1,1,-1,-1\right)\). To shorten the notation, in most cases we do not distinguish between the number \(1\) and the identity matrix \(\boldsymbol{1}_{N}\) when the dimensions are evident from the context. 
As a result, we can rewrite the function \(\mathcal{R}(q,q^{*})\) in Eq.(51) as \[\mathcal{R}(q,q^{*})=\iint d\boldsymbol{\Phi}d\boldsymbol{\Phi}^{\dagger}\left\langle e^{i\boldsymbol{\Phi}^{\dagger}\left(\left(E-\hat{H}\right)\hat{L}+i\hat{L}\hat{\Lambda}\hat{\Gamma}_{\epsilon}\right)\boldsymbol{\Phi}}\right\rangle\] \[\times e^{-ip^{1/4}\left(\mathbf{\Phi}^{\dagger}\boldsymbol{\xi}_{1}+\boldsymbol{\xi}_{2}^{\dagger}\boldsymbol{\Phi}\right)}, \tag{58}\] where the supervectors \(\boldsymbol{\xi}_{\sigma}\) are given by \[\boldsymbol{\xi}_{1}=\left(\begin{array}{c}\mathbf{w_{a}}\\ 0_{N}\\ \boldsymbol{r}\\ 0_{N}\end{array}\right),\quad\boldsymbol{\xi}_{2}^{\dagger}=\left(q^{*}\boldsymbol{r}^{\dagger},0_{N}^{T},-q\,\mathbf{w}_{\mathbf{a}}^{\dagger},0_{N}^{T}\right)\,. \tag{59}\] Closely following a variant of the supersymmetry approach as exposed e.g. in [13] (one may also consult the lectures [51] for the details of similar procedures), one performs the average over GUE matrices \(\hat{H}\) and, after exploiting a supermatrix version of the Hubbard-Stratonovich transformation and performing the Gaussian integrals over the supervectors \(\boldsymbol{\Phi}\), arrives at the following representation in terms of a \(4\times 4\) supermatrix \(\hat{R}\) (see the above references for its structure, motivated by convergence arguments): \[\mathcal{R}(q,q^{*})=\] \[=\int d\hat{R}\,e^{-\frac{N}{2}\mathrm{Str}\hat{R}^{2}}\mathrm{Sdet}^{-1}\left(\left(1_{N}\otimes\hat{L}^{1/2}\right)\hat{G}\left(1_{N}\otimes\hat{L}^{1/2}\right)\right) \tag{60}\] \[\times e^{-p^{1/2}\boldsymbol{\xi}_{2}^{\dagger}\left(1_{N}\otimes\hat{L}^{1/2}\right)\hat{G}\left(1_{N}\otimes\hat{L}^{1/2}\right)\boldsymbol{\xi}_{1}}\] where we introduced the \(4N\)-dimensional supermatrix \(\hat{G}=-i1_{N}\otimes(E-\hat{R})+\hat{\Gamma}_{\epsilon}\otimes\hat{\Lambda}\). 
In what follows we assume the scaling \(\epsilon=\eta/2N\), with \(\eta\) fixed as \(N\to\infty\), and in the limit \(N\gg 1\) perform the \(\hat{R}\)-integral in Eq.(60) by the saddle-point method, assuming the number of channels \(M\) to be fixed. Repeating the same steps as in [13], the \(\hat{R}\)-integral is reduced to one over a saddle-point manifold parametrized by a \(4\times 4\) supermatrix \(\hat{Q}=\hat{T}\hat{\Lambda}\hat{T}^{-1}\), where the supermatrices \(\hat{T}\) satisfy \(\hat{T}^{\dagger}\hat{L}\hat{T}=\hat{L}\). In Appendix A we give an explicit parametrization of these matrices for the convenience of the reader. To simplify the presentation we also set \(E=0\); the results for general \(E\) are obtained via the well-known rescaling using the semicircular density of GUE eigenvalues, as in [13]. After all these steps one arrives at the following representation: \[\mathcal{R}(q,q^{*})=\int d\mu(\hat{Q})\,e^{-\frac{1}{2}\eta\mathrm{Str}\hat{Q}\hat{\Lambda}}\prod_{c=1}^{M}\mathrm{Sdet}\left(1_{4}+\gamma_{c}\left(\hat{\Lambda}\hat{Q}\right)\right)\] \[\times\exp\left(-p^{1/2}\boldsymbol{\xi}_{2}^{\dagger}\left(\sum_{k=0}^{\infty}\left(-\hat{\Gamma}\right)^{k}\otimes\hat{\tau}_{k}\right)\boldsymbol{\xi}_{1}\right) \tag{61}\] where we introduced the short-hand notation \(\hat{\tau}_{k}\equiv\hat{L}^{-1/2}\hat{Q}\left(\hat{\Lambda}\hat{Q}\right)^{k}\hat{L}^{-1/2}\) and used the parameters \(\gamma_{c}\) as defined in Eq.(2). 
To evaluate the expression in the exponent of Eq.(61) we use the definition Eq.(59) of the supervectors \(\boldsymbol{\xi}_{1,2}\) to write the \(k\)-th term in the sum as \[-q\left\langle\mathbf{w_{a}}\right|\hat{\Gamma}^{k}\left|\mathbf{w_{a}}\right\rangle\left(\hat{\tau}_{k}\right)_{b2b1} \tag{62}\] Due to the condition of orthogonality of channels, see Eq.(2), we have \(\hat{\Gamma}^{k}\left|\mathbf{w}_{c}\right\rangle=\gamma_{c}^{k}\left|\mathbf{w}_{c}\right\rangle\) and \[\hat{\Gamma}\left|\mathbf{w_{a}}\right\rangle=\left(\sum_{c=1}^{M}\mathbf{w}_{c}\otimes\mathbf{w}_{c}^{\dagger}\right)\left|\mathbf{w_{a}}\right\rangle=\sum_{c=1}^{M}a_{c}\gamma_{c}\left|\mathbf{w}_{c}\right\rangle\] iterating which implies \(\hat{\Gamma}^{k}\left|\mathbf{w_{a}}\right\rangle=\sum_{c=1}^{M}a_{c}\gamma_{c}^{k}\left|\mathbf{w}_{c}\right\rangle\), hence \[\left\langle\mathbf{w_{a}}\right|\hat{\Gamma}^{k}\left|\mathbf{w_{a}}\right\rangle=\sum_{c=1}^{M}|a_{c}|^{2}\gamma_{c}^{k+1}. \tag{63}\] The next important assumption is that the intensity is observed only at points far from the channel entrances. Such a condition is taken into account by assuming that \(\left\langle\mathbf{w}_{c}|\mathbf{r}\right\rangle=0,\,\forall c=1,\ldots,M\), implying that \(\left\langle\mathbf{r}\right|\hat{\Gamma}^{k}\left|\mathbf{r}\right\rangle=\left\langle\mathbf{r}|\mathbf{r}\right\rangle\delta_{k,0}\). In principle, here one may put \(\left\langle\mathbf{r}|\mathbf{r}\right\rangle=1\), but we keep it in this form as it will help in understanding some structures arising later on. 
Further using the identity \[\sum_{k=0}^{\infty}\left((-1)^{k}\gamma_{c}^{k}\left(\hat{\Lambda}\hat{Q}\right)^{k}\right)=\left(1+\gamma_{c}\hat{\Lambda}\hat{Q}\right)^{-1}\] one can obtain \[\boldsymbol{\xi}_{2}^{\dagger}\left(\sum_{k=0}^{\infty}\left(-\hat{\Gamma}\right)^{k}\otimes\hat{\tau}_{k}\right)\boldsymbol{\xi}_{1}=\] \[=-q\sum_{c=1}^{M}|a_{c}|^{2}\gamma_{c}\hat{D}_{c}(Q)_{b2b1}+q^{*}\left\langle\mathbf{r}|\mathbf{r}\right\rangle\left(\hat{L}^{-1/2}\hat{Q}\hat{L}^{-1/2}\right)_{b1b2} \tag{64}\] where we introduced the supermatrices \[\hat{D}_{c}(\hat{Q})=\hat{L}^{-1/2}\hat{Q}\left(1+\gamma_{c}\hat{\Lambda}\hat{Q}\right)^{-1}\hat{L}^{-1/2}\quad\forall\ c=1,\ldots,M. \tag{65}\] Substituting Eq.(64) into Eq.(61) gives \[\mathcal{R}(q,q^{*})=\int d\mu(Q)\,e^{-\frac{1}{2}\eta\mathrm{Str}\hat{Q}\Lambda}\prod_{c=1}^{M}\mathrm{Sdet}\left(1_{4}+\gamma_{c}\left(\hat{\Lambda}\hat{Q}\right)\right)\] \[\times e^{-p^{1/2}\left[-q\sum_{c=1}^{M}|a_{c}|^{2}\gamma_{c}\hat{D}_{c}(\hat{Q})_{b2b1}+q^{*}\left\langle\mathbf{r}|\mathbf{r}\right\rangle\left(\hat{L}^{-1/2}\hat{Q}\hat{L}^{-1/2}\right)_{b1b2}\right]} \tag{66}\] Substituting further such an \(\mathcal{R}(q,q^{*})\) into Eq.(49), one may notice that the integral over \(q,q^{*}\) can be easily performed, resulting in the following Laplace transform of the single-point intensity density: \[\mathcal{L}(p):=\left\langle e^{-pI_{\mathbf{r}}}\right\rangle \tag{67}\] \[=\int d\mu(Q)\,e^{-\frac{1}{2}\eta\mathrm{Str}\hat{Q}\hat{\Lambda}}\prod_{c=1}^{M}\mathrm{Sdet}\left(1_{4}+\gamma_{c}\left(\hat{\Lambda}\hat{Q}\right)\right)\] \[\times\exp\left(-p\left\langle\mathbf{r}|\mathbf{r}\right\rangle\sum_{c=1}^{M}|a_{c}|^{2}\gamma_{c}\left(\hat{L}^{-1/2}\hat{Q}\hat{L}^{-1/2}\right)_{b1b2}\hat{D}_{c}\left(\hat{Q}\right)_{b2b1}\right).\] We see that the dependence on the Laplace parameter \(p\) in Eq.(67) is extremely simple, which is a direct consequence of the specific choice made in Eqs.(55)-(56). 
This fact allows us to invert the Laplace transform immediately, getting the probability density for the single-point intensity via \[\mathcal{P}_{M}(I):=\left\langle\delta\left(I-I_{\mathbf{r}}\right)\right\rangle\] \[=\int d\mu(Q)\,e^{-\frac{1}{2}\eta\mathrm{Str}\hat{\Lambda}\hat{Q}}\prod_{c=1}^{M}\mathrm{Sdet}\left(1_{4}+\gamma_{c}\left(\hat{\Lambda}\hat{Q}\right)\right) \tag{68}\] \[\times\delta\left(I-\sum_{c=1}^{M}|a_{c}|^{2}\gamma_{c}\left(\hat{L}^{-1/2}\hat{Q}\hat{L}^{-1/2}\right)_{b1b2}\hat{D}_{c}\left(\hat{Q}\right)_{b2b1}\right),\] where we finally set \(\left\langle\mathbf{r}|\mathbf{r}\right\rangle=1\). The explicit evaluation of this integral is sketched in Appendix A, and leads to the form featuring in Eq.(9). ### Joint probability of intensities at \(L\) observation points and extreme value statistics Let us now consider the computation of the joint probability density \(\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})\) of the wave intensities \(I_{1}=|u(\mathbf{r}_{1})|^{2}\),..., \(I_{L}=|u(\mathbf{r}_{L})|^{2}\), where \(u(\mathbf{r}_{l}):=\left\langle\mathbf{r}_{l}|\mathbf{u}\right\rangle\) is the amplitude of the wave at a point \(\mathbf{r}_{l}\) inside the cavity, \(l=1,\ldots,L\). The starting point is to define for the parameters \(p_{1}>0,\ldots,p_{L}>0\) the joint Laplace transform: \[\mathcal{L}(p_{1},\ldots,p_{L}):=\left\langle e^{-\sum_{l=1}^{L}p_{l}u^{*}(\mathbf{r}_{l})u(\mathbf{r}_{l})}\right\rangle, \tag{69}\] which after applying Gaussian (Hubbard-Stratonovich) transformations \(L\) times takes the form \[\mathcal{L}(p_{1},\ldots,p_{L})=\int\frac{\prod_{l=1}^{L}dq_{l}^{*}dq_{l}}{(\pi)^{L}}e^{-\sum\limits_{l=1}^{L}q_{l}^{*}q_{l}}\mathcal{R}(\{q_{l},q_{l}^{*}\}), \tag{70}\] where \[\mathcal{R}(q_{1},q_{1}^{*},\ldots,q_{L},q_{L}^{*}):=\left\langle e^{-i\sum_{l=1}^{L}\sqrt{p_{l}}(q_{l}^{*}u(\mathbf{r}_{l})+q_{l}u^{*}(\mathbf{r}_{l}))}\right\rangle.
\tag{71}\] Now one may notice that Eq.(6) implies \[\sum_{l=1}^{L}\sqrt{p_{l}}\,q_{l}^{*}u(\mathbf{r}_{l})=\left\langle\mathbf{X}\right|\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{w}_{\mathbf{a}}\right\rangle, \tag{72}\] where we defined \[\left\langle\mathbf{X}\right|=\sum_{l=1}^{L}\sqrt{p_{l}}\,q_{l}^{*}\left\langle\mathbf{r}_{l}\right|\,,\quad\left|\mathbf{X}\right\rangle=\sum_{l=1}^{L}\sqrt{p_{l}}\,q_{l}\left|\mathbf{r}_{l}\right\rangle \tag{73}\] and similarly \[\sum_{l=1}^{L}\sqrt{p_{l}}\,q_{l}\,u^{*}(\mathbf{r}_{l})=\left\langle\mathbf{w}_{\mathbf{a}}\right|\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{X}\right\rangle\,. \tag{74}\] Using the above one can see that we need to evaluate \[\mathcal{R}(q_{1},q_{1}^{*},\ldots,q_{L},q_{L}^{*})=\] \[=\left\langle e^{-i\left\langle\mathbf{X}\right|\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{w}_{\mathbf{a}}\right\rangle-i\left\langle\mathbf{w}_{\mathbf{a}}\right|\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|\mathbf{X}\right\rangle}\right\rangle. \tag{75}\] Now, comparing Eq.(75) with Eq.(51) one may notice that putting \(p=q=1\) in the latter, and also replacing \(\left|\mathbf{r}\right\rangle\rightarrow\left|\mathbf{X}\right\rangle\), makes the two expressions identical. Moreover, assuming all observation points are located far from every channel entrance implies that the vector \(\left|\mathbf{X}\right\rangle\), being a linear combination of the \(\left|\mathbf{r}_{l}\right\rangle\), is orthogonal to all channel vectors \(\left|\mathbf{w}_{c}\right\rangle\).
Therefore the evaluation of the ensemble average in Eq.(75) can simply be read off from the expression Eq.(66) for \(\mathcal{R}(q,q^{*})\), implying that \[\mathcal{R}(q_{1},q_{1}^{*},\ldots,q_{L},q_{L}^{*}):=\tilde{\mathcal{R}}\left(\left\langle\mathbf{X}|\mathbf{X}\right\rangle\right)\] \[=\int d\mu(\hat{Q})\,e^{-\frac{1}{2}\eta\mathrm{Str}\hat{Q}\hat{\Lambda}}\prod_{c=1}^{M}\mathrm{Sdet}\left(1_{4}+\gamma_{c}\left(\hat{\Lambda}\hat{Q}\right)\right) \tag{76}\] \[\times e^{-\left\langle\mathbf{X}|\mathbf{X}\right\rangle\left(\hat{L}^{-1/2}\hat{Q}\hat{L}^{-1/2}\right)_{b1b2}+\sum\limits_{c=1}^{M}|a_{c}|^{2}\gamma_{c}\hat{D}_{c}(\hat{Q})_{b2b1}},\] where we made explicit the fact that \(\mathcal{R}\) depends on the variables \(q_{l},q_{l}^{*}\) for all \(l=1,\ldots,L\) (as well as on the Laplace parameters \(p_{l}\)) only via the norm: \[\left\langle\mathbf{X}|\mathbf{X}\right\rangle=\sum_{l=1}^{L}p_{l}q_{l}^{*}q_{l}, \tag{77}\] where we exploited the orthogonality of the position basis: \(\left\langle\mathbf{r}_{l_{1}}|\mathbf{r}_{l_{2}}\right\rangle=\delta_{l_{1}l_{2}}\).
Substituting such dependence back into Eq.(70), passing to polar coordinates: \(q_{l}=\sqrt{R_{l}}e^{i\theta_{l}}\), and finally rescaling \(R_{l}\to p_{l}^{-1}R_{l}\) leads to: \[\mathcal{L}(p_{1},\ldots,p_{L})=\int\mathcal{\tilde{R}}\left(\sum_{l=1}^{L}R_{l}\right)\prod_{l=1}^{L}\frac{e^{-\frac{R_{l}}{p_{l}}}\,dR_{l}}{p_{l}}. \tag{78}\] In such a form the joint Laplace transform can be easily inverted due to the well-known identity involving the Bessel function \(J_{0}(z)\): \[\frac{e^{-\frac{R}{p}}}{p}=\int_{0}^{\infty}e^{-pI}J_{0}\left(2\sqrt{IR}\right)dI,\] yielding the joint probability density of \(L\) intensities in the form: \[\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})=\int_{0}^{\infty}\mathcal{\tilde{R}}\left(\sum_{l=1}^{L}R_{l}\right)\prod_{l=1}^{L}J_{0}\left(2\sqrt{I_{l}R_{l}}\right)\,dR_{l} \tag{79}\] At the next step we use the following chain of identities: \[\mathcal{\tilde{R}}\left(\sum_{l=1}^{L}R_{l}\right)=\int_{0}^{\infty}\mathcal{\tilde{R}}\left(t\right)\delta\left(t-\sum_{l=1}^{L}R_{l}\right)\,dt\] \[=\int_{0}^{\infty}dt\mathcal{\tilde{R}}\left(t\right)\int_{-\infty}^{\infty}e^{ik\left(t-\sum_{l=1}^{L}R_{l}\right)}\frac{dk}{2\pi} \tag{80}\] Substituting this back into Eq.(79), changing the order of integrations and using that \[\int_{0}^{\infty}J_{0}\left(2\sqrt{IR}\right)e^{-ikR}dR=\frac{1}{ik}e^{\frac{i}{k}I} \tag{81}\] one arrives at the following representation for the joint probability density: \[\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})=\int_{0}^{\infty}dt\mathcal{\tilde{R}}\left(t\right)\Phi_{L}\left(I_{1}+\ldots+I_{L};t\right), \tag{82}\] where for the function \(\Phi_{L}\left(I;t\right)\) one easily finds that \[\begin{split}\Phi_{L}&\left(I;t\right)\equiv\int_{-\infty}^{\infty}e^{i\left(kt+\frac{I}{k}\right)}\frac{dk}{2\pi(ik)^{L}}\\ &=(-1)^{L-1}\frac{d^{L-1}}{dI^{L-1}}J_{0}\left(2\sqrt{It}\right),\end{split} \tag{83}\] where in the last step we used the inversion of Eq.(81).
This finally implies: \[\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})=(-1)^{L-1}\frac{d^{L-1}}{dI^{L-1}} \int_{0}^{\infty}dt\mathcal{\tilde{R}}\left(t\right)J_{0}\left(2\sqrt{It}\right) \tag{84}\] \[=(-1)^{L-1}\frac{d^{L-1}}{dI^{L-1}}P_{L=1}(I)\left|{}_{I=\sum_{l=1}^{L}I_{l}}\right. \tag{85}\] coinciding with Eq.(30). To reflect that this joint probability density depends on individual intensities only via their sum \(\sum_{l=1}^{L}I_{l}\) we define the function \(\mathcal{\tilde{P}}_{M}^{(L)}(I)\) via \(\mathcal{P}_{M}^{(L)}(I_{1},\ldots,I_{L})=\mathcal{\tilde{P}}_{M}^{(L)}\left( \sum_{l=1}^{L}I_{l}\right)\). In particular, for finding the probability density for the sum of all intensities, Eq.(31), we use the identity: \[\int_{0}^{\infty}f\left(\sum_{l=1}^{L}I_{l}\right)\delta\left(I-\sum_{l=1}^{L }I_{l}\right)\prod_{l=1}^{L}dI_{l}=\frac{I^{L-1}}{(L-1)!}f(I).\] Our next step is to consider the simplest extreme value statistics, the distributions of the maximal and the minimal value in the pattern, defined as \[\mathrm{Prob}\left(I_{\mathrm{max}}<Y\right)=\int_{0}^{Y}\mathcal{P}_{M}^{(L) }(I_{1},\ldots,I_{L})\prod_{l=1}^{L}dI_{l} \tag{86}\] and similarly \[\mathrm{Prob}\left(I_{\mathrm{min}}>Y\right)=\int_{Y}^{\infty}\mathcal{P}_{M}^{ (L)}(I_{1},\ldots,I_{L})\prod_{l=1}^{L}dI_{l}.\] We will concentrate on the former as the most interesting. 
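Two classical identities used in this subsection admit quick numerical checks with standard-library tools only: the Bessel-function Laplace pair quoted before Eq.(79), and the simplex identity just invoked for the density of the sum of intensities. All numerical values below are arbitrary test choices:

```python
import math, random

def j0(z, n=150):
    # Bessel J0 via its integral representation J0(z) = (1/pi) int_0^pi cos(z sin t) dt,
    # evaluated by the trapezoid rule (spectrally accurate here).
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(z * math.sin(math.pi)))
    for i in range(1, n):
        s += math.cos(z * math.sin(i * h))
    return s * h / math.pi

# Check  exp(-R/p)/p = int_0^inf exp(-p I) J0(2 sqrt(I R)) dI  by quadrature.
p, R = 1.3, 0.7
upper, n = 40.0, 8000
h = upper / n
f = lambda x: math.exp(-p * x) * j0(2.0 * math.sqrt(x * R))
bessel_integral = (0.5 * (f(0.0) + f(upper)) + sum(f(i * h) for i in range(1, n))) * h
closed_form = math.exp(-R / p) / p

# Check the simplex identity: with f = 1 and I = 1 it states that the
# (L-1)-dimensional volume of {I_l >= 0, sum I_l <= 1} equals 1/(L-1)!.
# For L = 4 this is the probability that three unit uniforms sum to < 1, i.e. 1/6.
rng = random.Random(0)
trials = 200_000
hits = sum(rng.random() + rng.random() + rng.random() < 1.0 for _ in range(trials))
simplex_estimate = hits / trials
```

Both checks agree with the closed forms to well within the quadrature and Monte Carlo tolerances.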
Using the same type of representation as in Eq.(80): \[\mathcal{\tilde{P}}_{M}^{(L)}\left(\sum_{l=1}^{L}I_{l}\right)=\int_{0}^{\infty}\mathcal{\tilde{P}}_{M}^{(L)}(t)\,dt\int_{-\infty}^{\infty}e^{ik\left(t-\sum_{l=1}^{L}I_{l}\right)}\frac{dk}{2\pi} \tag{87}\] one easily finds: \[\mathrm{Prob}\left(I_{\mathrm{max}}<Y\right)=\int_{0}^{\infty}\mathcal{\tilde{P}}_{M}^{(L)}(t)T_{L}(t;Y)\,dt \tag{88}\] where we defined \[T_{L}(t;Y):=\int_{-\infty}^{\infty}e^{ikt}\left(\frac{1-e^{-ikY}}{ik}\right)^{L}\frac{dk}{2\pi}.\] Expanding the binomial and using the identity: \[\int_{-\infty}^{\infty}\frac{e^{ikt}}{(\beta+ik)^{\nu}}\frac{dk}{2\pi}=\frac{t^{\nu-1}}{\Gamma(\nu)}e^{-\beta t}\theta(t),\quad\beta>0,\nu>0\] where \(\theta(t)=1\) for \(t>0\) and zero otherwise, one finds \[T_{L}(t;Y)=\sum_{l=0}^{L}(-1)^{l}\left(\begin{array}{c}L\\ l\end{array}\right)\frac{(t-lY)^{L-1}}{\Gamma(L)}\theta(t-lY).\] In particular, one can see that \[T_{L}(t;Y)=\frac{t^{L-1}}{\Gamma(L)},\quad 0\leq t<Y\] and \(T_{L}(t;Y)=0\) for \(t>LY\). This fact, together with the relation Eq.(30), allows one to integrate by parts in Eq.(88), which eventually results in the first of relations Eq.(37). ## Conclusion With this work we have obtained a fairly complete description of the intensity statistics inside an irregularly shaped microwave resonator in the quantum chaos regime with broken time-reversal invariance, including multipoint distributions and extreme value statistics. In the case of a finite number of open channels and no internal absorption, all expressions can, in principle, be reduced to elementary functions. In such a case the one-point intensity is generically power-law distributed, in clear contrast to the well-known random Gaussian wave conjecture, cf. Eq.(3), which predicts the exponential Rayleigh law. The latter is only recovered in the limit of a very open system, while keeping the incoming flux per channel constant. If, however, uniform losses in the cavity (modelled e.g.
by an infinite number of weakly coupled hidden channels) are taken into account, the power law remains valid only in a restricted range of intensities, being cut off exponentially at larger values. Interestingly, we demonstrated that the joint distribution of intensities sampled at many points depends only on the sum of the individual intensities. Despite this simple dependence the intensities at different points are clearly correlated, unless the system is in the Rayleigh regime. In particular, by extracting explicitly the statistics of the highest intensity in an observation pattern of \(L\) points in the perfect coupling regime, we were able to demonstrate that the ensuing extreme-value distribution for \(L\rightarrow\infty\) differs from the classical laws. This provides an example of nontrivial EVS which is potentially accessible in experiments, provided the losses due to absorption can be effectively controlled. Unfortunately, such a picture is expected to hold only for systems with fully broken time-reversal invariance. The problem of characterizing multipoint and extreme value statistics in systems with preserved time-reversal invariance therefore remains fully open. ## Acknowledgements Y.V.F. acknowledges financial support from EPSRC Grant EP/V002473/1 "Random Hessians and Jacobians: theory and applications" and is grateful to Prof. Gregory Schehr for a discussion on extreme value statistics and bringing the reference [50] to his attention. ## Appendix A Parameterization of \(\hat{Q}\)-supermatrices and related formulas We use the same parametrization as in [51], and describe it below for the convenience of the reader. First one defines two unitary \(2\times 2\) supermatrices \[\hat{U}_{1}=\exp\left(\begin{array}{cc}0&-\alpha^{*}\\ \alpha&0\end{array}\right),\quad\hat{U}_{2}=\exp i\left(\begin{array}{cc}0&-\beta^{*}\\ \beta&0\end{array}\right) \tag{89}\] where \(\alpha,\alpha^{*},\beta,\beta^{*}\) are anticommuting variables.
In terms of those the \(4\times 4\) supermatrix \(\hat{Q}\) is defined as \[\hat{Q}=\left(\begin{array}{cc}\hat{U}_{1}&\\ &\hat{U}_{2}\end{array}\right)\left(\begin{array}{cccc}\lambda_{1}&0&i\mu_{1}&0\\ 0&\lambda_{2}&0&\mu_{2}^{*}\\ i\mu_{1}^{*}&0&-\lambda_{1}&0\\ 0&\mu_{2}&0&-\lambda_{2}\end{array}\right)\left(\begin{array}{cc}\hat{U}_{1}^{-1}&\\ &\hat{U}_{2}^{-1}\end{array}\right) \tag{90}\] where \[\begin{cases}1\leq\lambda_{1}<\infty,\quad\mu_{1}=|\mu_{1}|e^{i\phi_{1}},\quad|\mu_{1}|^{2}=\lambda_{1}^{2}-1\\ -1\leq\lambda_{2}\leq 1,\quad\mu_{2}=|\mu_{2}|e^{i\phi_{2}},\quad|\mu_{2}|^{2}=1-\lambda_{2}^{2}\end{cases}\] The measure \(d\mu(\hat{Q})\) then takes the following form \[d\mu(\hat{Q})=-\frac{d\lambda_{1}d\lambda_{2}}{(\lambda_{1}-\lambda_{2})^{2}}d\phi_{1}d\phi_{2}d\alpha d\alpha^{*}d\beta d\beta^{*} \tag{91}\] It is immediate to check that in this parameterization \(\mathrm{Str}\hat{\Lambda}\hat{Q}\) and \(\mathrm{Sdet}\left(1_{4}+\gamma_{c}\hat{\Lambda}\hat{Q}\right)\) take the form \[\mbox{Str}\hat{\Lambda}\hat{Q}=2\left(\lambda_{1}-\lambda_{2}\right), \tag{92}\] \[\mbox{Sdet}\left(1_{4}+\gamma_{c}\hat{\Lambda}\hat{Q}\right)=\frac{1+2\gamma_{c}\lambda_{1}+\gamma_{c}^{2}}{1+2\gamma_{c}\lambda_{2}+\gamma_{c}^{2}}\] correspondingly. We also need the following combinations: \[\hat{D}_{c}\left(\hat{Q}\right)_{b1b2}=-i\left[\frac{i\mu_{1}}{1+2\gamma_{c}\lambda_{1}+\gamma_{c}^{2}}\left(1+\frac{\beta^{*}\beta}{2}\right)\left(1-\frac{\alpha^{*}\alpha}{2}\right)\right.\] \[\left.+i\alpha^{*}\frac{\mu_{2}^{*}}{1+2\gamma_{c}\lambda_{2}+\gamma_{c}^{2}}\beta\right], \tag{93}\] which can be used to get also \(\left(\hat{L}^{-1/2}\hat{Q}\hat{L}^{-1/2}\right)_{b1b2}=\lim_{\gamma_{c}\to 0}\hat{D}_{c}\left(\hat{Q}\right)_{b1b2}\).
Similarly, \[\hat{D}_{c}\left(\hat{Q}\right)_{b2b1}=-i\left[\frac{i\mu_{1}^{*}}{1+2\gamma_{c}\lambda_{1}+\gamma_{c}^{2}}\left(1+\frac{\beta^{*}\beta}{2}\right)\left(1-\frac{\alpha^{*}\alpha}{2}\right)\right.\] \[\left.+i\beta^{*}\alpha\frac{\mu_{2}}{1+2\gamma_{c}\lambda_{2}+\gamma_{c}^{2}}\right]. \tag{94}\] Substituting all this into Eq.(68) gives: \[\mathcal{P}_{M}(I)=\int d\mu(\hat{Q})\,e^{-\eta(\lambda_{1}-\lambda_{2})}\prod_{c=1}^{M}\left(\frac{\lambda_{2}+g_{c}}{\lambda_{1}+g_{c}}\right) \tag{95}\] \[\times\delta\left(I-\langle{\bf r}|{\bf r}\rangle\sum_{c=1}^{M}|a_{c}|^{2}A_{c}(\hat{Q})\right),\] where \[A_{c}(\hat{Q})=\frac{1}{2}\frac{|\mu_{1}|^{2}}{\lambda_{1}+g_{c}}(1+\beta^{*}\beta-\alpha^{*}\alpha-\beta^{*}\beta\alpha^{*}\alpha)\] \[+\alpha^{*}\beta\frac{1}{2}\frac{\mu_{1}^{*}\mu_{2}^{*}}{\lambda_{1}+g_{c}}+\beta^{*}\alpha\frac{1}{2}\frac{\mu_{1}\mu_{2}}{\lambda_{2}+g_{c}}-\beta^{*}\beta\alpha^{*}\alpha\frac{1}{2}\frac{|\mu_{2}|^{2}}{\lambda_{2}+g_{c}}.\] Now one may expand the Dirac \(\delta\)-function in the anticommuting variables, perform the corresponding integrals, and then integrate over the angular variables \(\phi_{1,2}\). After straightforward algebraic manipulations one arrives at \[{\cal P}_{M}(I)=\delta\left(I\right)-\frac{d{\cal F}_{M}(I)}{dI}+\frac{d^{2}}{dI^{2}}\left(I{\cal F}_{M}(I)\right), \tag{96}\] where \({\cal F}_{M}(I)\) will be defined in (97) below. Here we note that, as explained in the Appendix of the paper [39], the so-called "Efetov-Wegner" term \(\delta\left(I\right)\) in Eq.(96) eventually gets cancelled and can be omitted.
The function \({\cal F}_{M}(I)\) is given explicitly by \[{\cal F}_{M}(I)=\int_{1}^{\infty}\int_{-1}^{1}\frac{d\lambda_{1}d\lambda_{2}}{\left(\lambda_{1}-\lambda_{2}\right)^{2}}e^{-\eta\left(\lambda_{1}-\lambda_{2}\right)}\prod_{c=1}^{M}\left(\frac{\lambda_{2}+g_{c}}{\lambda_{1}+g_{c}}\right) \tag{97}\] \[\times\left(I+\frac{|\mu_{2}|^{2}}{2}\sum_{c=1}^{M}\frac{|a_{c}|^{2}}{\lambda_{2}+g_{c}}\right)\delta\left(I-\frac{|\mu_{1}|^{2}}{2}\sum_{c=1}^{M}\frac{|a_{c}|^{2}}{\lambda_{1}+g_{c}}\right).\] After further manipulations using \(|\mu_{1}|^{2}=\lambda_{1}^{2}-1\), \(|\mu_{2}|^{2}=1-\lambda_{2}^{2}\) and noticing that the \(\delta\)-functional constraint implies \[I=\frac{\lambda_{1}-1}{2}\sum_{c=1}^{M}|a_{c}|^{2}\frac{\lambda_{1}+1}{\lambda_{1}+g_{c}}\] \[=\frac{\lambda_{1}-1}{2}\sum_{c=1}^{M}|a_{c}|^{2}\left(1-\frac{g_{c}-1}{\lambda_{1}+g_{c}}\right)\] and \[\left(I+\frac{|\mu_{2}|^{2}}{2}\sum_{c=1}^{M}\frac{|a_{c}|^{2}}{\lambda_{2}+g_{c}}\right)\left|{}_{I=\frac{|\mu_{1}|^{2}}{2}\sum_{c=1}^{M}\frac{|a_{c}|^{2}}{\lambda_{1}+g_{c}}}\right.\] \[=\frac{\lambda_{1}-\lambda_{2}}{2}\sum_{c=1}^{M}|a_{c}|^{2}\frac{\lambda_{1}\lambda_{2}+g_{c}(\lambda_{1}+\lambda_{2})+1}{(\lambda_{1}+g_{c})(\lambda_{2}+g_{c})},\] we can bring Eq.(97) to the form featuring in Eq.(9). ## Appendix B: Relation to Nock et al. [16] The paper [16] provided an explicit result for the probability density of the modulus \(|S_{ab}|:=r\) of the \(S\)-matrix entry between two different channels \(a\neq b\), where without loss of generality one may consider \(a=1\) and \(b=M\).
For systems with broken time-reversal invariance the probability density \(P_{r}(r)\) for the variable \(r\) (normalized in such a way that \(\int_{0}^{\infty}P_{r}(r)\,r\,dr=1\)) can be found in Eqs.(60)-(62) of [16] and represented in the form: \[P_{r}(r)=\frac{1}{r}\frac{\partial}{\partial r}r\frac{\partial}{\partial r}f(r) \tag{98}\] where \[f(r)=\frac{1}{2}\frac{(g_{1}+\lambda_{1})^{2}(g_{M}+\lambda_{1})^{2}}{(g_{1}+g_{M})\lambda_{1}^{2}+2(g_{1}g_{M}+1)\lambda_{1}+(g_{1}+g_{M})}\,{\cal U}(r) \tag{99}\] and \({\cal U}(r)\) is given by \[{\cal U}(r)=\int_{-1}^{1}\frac{d\lambda_{2}}{(\lambda_{1}-\lambda_{2})^{2}}\prod_{c=1}^{M}\frac{g_{c}+\lambda_{2}}{g_{c}+\lambda_{1}}\] \[\times\left(\frac{\lambda_{1}^{2}-1}{(g_{1}+\lambda_{1})(g_{M}+\lambda_{1})}+\frac{1-\lambda_{2}^{2}}{(g_{1}+\lambda_{2})(g_{M}+\lambda_{2})}\right), \tag{100}\] with \(\lambda_{1}\) for a given \(r\) being defined via \[\lambda_{1}=\frac{(g_{1}+g_{M})r^{2}+\sqrt{(g_{1}-g_{M})^{2}r^{4}+4r^{2}(g_{1}g_{M}-1)+4}}{2(1-r^{2})}. \tag{101}\] On the other hand, recall that according to Eq.(1) we have \[|S_{1M}|^{2}=4\left|\langle{\bf w}_{1}\right|\frac{1}{E-\hat{H}+i\hat{W}\hat{W}^{\dagger}}\left|{\bf w}_{M}\rangle\right|^{2}:=r^{2}, \tag{102}\] where \(\hat{W}\hat{W}^{\dagger}=\sum_{c=1}^{M}{\bf w}_{c}\otimes{\bf w}_{c}^{\dagger}\). Consider now the limit \(\gamma_{M}=|{\bf w}_{M}|^{2}\to 0\) while keeping \(|{\bf w}_{c}|^{2}=\gamma_{c}\) of the order of unity for all \(c\neq M\). Physically this corresponds to almost closing the channel \(c=M\), with the effective coupling \(g_{M}\approx\frac{1}{2|{\bf w}_{M}|^{2}}\gg g_{c}\), \(\forall c<M\).
It is then easy to see that in such a limit \(|S_{1M}|^{2}\to 0\), whereas the product \(|S_{1M}|^{2}g_{M}/2\) remains finite and is simply proportional to the intensity \(I\) at a single point inside the cavity given by (7), provided we reduce the number of open channels by one and consider the incoming wave amplitudes \(a_{c}\) to be nonzero only for \(c=1\). We can therefore extract the probability distribution of the point intensity in this case by performing the limit \(g_{M}\rightarrow\infty\) and \(r^{2}\to 0\) in (99) while keeping \(r^{2}g_{M}=2I\) and \(g_{c},\,\forall c<M\) finite. In such a limiting procedure we get: \[\lambda_{1}\to I+\sqrt{I^{2}+2g_{1}I+1},\quad\prod_{c=1}^{M}\frac{g_{c}+\lambda_{2}}{g_{c}+\lambda_{1}}\rightarrow\prod_{c=1}^{M-1}\frac{g_{c}+\lambda_{2}}{g_{c}+\lambda_{1}}\] and \[\frac{1}{2}\frac{(g_{1}+\lambda_{1})^{2}(g_{M}+\lambda_{1})^{2}}{(g_{1}+g_{M})\lambda_{1}^{2}+2(g_{1}g_{M}+1)\lambda_{1}+(g_{1}+g_{M})}\] \[\approx\frac{g_{M}}{2}\frac{(g_{1}+\lambda_{1})^{2}}{\lambda_{1}^{2}+2\lambda_{1}g_{1}+1}=\frac{g_{M}}{2}\frac{(g_{1}+\lambda_{1})}{2\sqrt{I^{2}+2g_{1}I+1}}.
\tag{103}\] Further we have \[\frac{\lambda_{1}^{2}-1}{(g_{1}+\lambda_{1})(g_{M}+\lambda_{1})}+\frac{1-\lambda_{2}^{2}}{(g_{1}+\lambda_{2})(g_{M}+\lambda_{2})}\] \[\approx\frac{1}{g_{M}}\left[\frac{\lambda_{1}^{2}-1}{(g_{1}+\lambda_{1})}+\frac{1-\lambda_{2}^{2}}{(g_{1}+\lambda_{2})}\right]\] \[=\frac{(\lambda_{1}-\lambda_{2})}{g_{M}}\,\frac{[g_{1}(\lambda_{1}+\lambda_{2})+\lambda_{1}\lambda_{2}+1]}{(g_{1}+\lambda_{1})(g_{1}+\lambda_{2})}=\frac{(\lambda_{1}-\lambda_{2})}{g_{M}}\frac{(\tilde{g}_{1}+\lambda_{2})}{g_{1}+\lambda_{2}} \tag{104}\] where we used the definitions Eq.(11) and Eq.(13): \[\tilde{g}_{1}=\frac{g_{1}\lambda_{1}+1}{g_{1}+\lambda_{1}}=-I+\sqrt{I^{2}+2g_{1}I+1}.\] Substituting all these factors back into Eqs.(99)-(100) finally yields: \[f(r)\rightarrow\mathcal{F}_{M-1}(I)=\frac{1}{4\sqrt{I^{2}+2g_{1}I+1}}\frac{1}{\prod_{c=1}^{M-1}(g_{c}+\lambda_{1})}\] \[\times\int_{-1}^{1}\frac{d\lambda_{2}}{\lambda_{1}-\lambda_{2}}(\tilde{g}_{1}+\lambda_{2})\,\prod_{c=2}^{M-1}(g_{c}+\lambda_{2}) \tag{105}\] which together with \[P(r)=\frac{1}{r}\frac{\partial}{\partial r}r\frac{\partial}{\partial r}f(r)\to P(I)=4\frac{\partial}{\partial I}I\frac{\partial}{\partial I}\mathcal{F}_{M-1}(I)\] indeed reproduces exactly the pair Eq.(9)-Eq.(15) with the obvious replacement \(M\to M-1\).
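The limiting procedure above is easy to check numerically: evaluating Eq.(101) at \(r^{2}=2I/g_{M}\) for increasingly large \(g_{M}\) indeed approaches \(\lambda_{1}=I+\sqrt{I^{2}+2g_{1}I+1}\). A minimal sketch with arbitrary test values:

```python
import math

def lambda1(r2, g1, gM):
    # Eq.(101), written in terms of r^2.
    disc = (g1 - gM) ** 2 * r2 ** 2 + 4.0 * r2 * (g1 * gM - 1.0) + 4.0
    return ((g1 + gM) * r2 + math.sqrt(disc)) / (2.0 * (1.0 - r2))

g1, I = 2.0, 1.5
limit = I + math.sqrt(I * I + 2.0 * g1 * I + 1.0)

# Keep r^2 * gM = 2*I fixed while opening up gM; the deviation from the
# limiting value shrinks like 1/gM.
errors = [abs(lambda1(2.0 * I / gM, g1, gM) - limit) for gM in (1e4, 1e6, 1e8)]
```

The monotone decrease of `errors` reflects the \(O(1/g_{M})\) corrections to the limit.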
2302.07912
Meeting the Needs of Low-Resource Languages: The Value of Automatic Alignments via Pretrained Models
Large multilingual models have inspired a new class of word alignment methods, which work well for the model's pretraining languages. However, the languages most in need of automatic alignment are low-resource and, thus, not typically included in the pretraining data. In this work, we ask: How do modern aligners perform on unseen languages, and are they better than traditional methods? We contribute gold-standard alignments for Bribri--Spanish, Guarani--Spanish, Quechua--Spanish, and Shipibo-Konibo--Spanish. With these, we evaluate state-of-the-art aligners with and without model adaptation to the target language. Finally, we also evaluate the resulting alignments extrinsically through two downstream tasks: named entity recognition and part-of-speech tagging. We find that although transformer-based methods generally outperform traditional models, the two classes of approach remain competitive with each other.
Abteen Ebrahimi, Arya D. McCarthy, Arturo Oncevay, Luis Chiruzzo, John E. Ortega, Gustavo A. Giménez-Lugo, Rolando Coto-Solano, Katharina Kann
2023-02-15T19:06:17Z
http://arxiv.org/abs/2302.07912v1
# Meeting the Needs of Low-Resource Languages: The Value of Automatic Alignments via Pretrained Models ###### Abstract Large multilingual models have inspired a new class of word alignment methods, which work well for the model's pretraining languages. However, the languages most in need of automatic alignment are low-resource and, thus, not typically included in the pretraining data. In this work, we ask: _How do modern aligners perform on unseen languages, and are they better than traditional methods?_ We contribute gold-standard alignments for Bribri-Spanish, Guarani-Spanish, Quechua-Spanish, and Shipibo-Konibo-Spanish. With these, we evaluate state-of-the-art aligners with and without model adaptation to the target language. Finally, we also evaluate the resulting alignments extrinsically through two downstream tasks: named entity recognition and part-of-speech tagging. We find that although transformer-based methods generally outperform traditional models, the two classes of approach remain competitive with each other. ## 1 Introduction Word alignment is a valuable tool for extending the coverage of natural language processing (NLP) applications to low-resource languages through, e.g., statistical machine translation (SMT; Koehn and Knowles, 2017; Duh et al., 2020) or annotation projection (Yarowsky et al., 2001; Smith and Smith, 2004; Nicolai et al., 2020; Eskander et al., 2020). The traditional approach for generating alignments has been with statistical methods such as Giza++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013), which provide strong alignment quality while remaining quick and lightweight to run. Recently, new methods have been proposed which extract alignments from massive _pretrained multilingual models_, and outperform these longstanding methods (Dou and Neubig, 2021).
However, results on other NLP tasks, such as part-of-speech (POS) tagging and named-entity recognition (NER), have shown that, while pretrained models generally work well out-of-the-box for high-resource languages, performance is far lower for low-resource ones, particularly those which are unseen during pretraining (Pires et al., 2019; Wu and Dredze, 2020; Muller et al., 2021; Lee et al., 2022). Models can be adapted (Gururangan et al., 2020; Chau et al., 2020) to improve performance, but this comes with a large computational cost. Given these two considerations, for _unseen_ low-resource languages it remains unclear (1) whether modern neural approaches based on adapted pretrained models generate higher-quality alignments than traditional approaches and (2) if so, whether the quality difference is severe enough to justify the additional computational cost. We investigate this by collecting gold-standard alignments for Bribri, Guarani, Quechua, and Shipibo-Konibo. These languages are low-resource and unrepresented in the pretraining data of popular models--a relevant real-world scenario. In addition to intrinsically evaluating alignment quality, we measure the downstream utility of each method for training POS-tagging and NER models by annotation projection. We find traditional and neural methods to be competitive, but pretrained models result in slightly lower alignment error rates and stronger downstream task performance, even for initially unseen languages. Through further analysis, we also find that adaptation may be a more reliable approach given minimally available resources. Figure 1: A word alignment between Quechua and Spanish (shaded), as well as mBERT+TLM's predicted alignment (marked by \(\times\)'s). FastAlign and Giza++ cannot take advantage of surface features of proper names and borrowings. We evaluate alignments intrinsically via AER and extrinsically with POS-tagging and NER models learned on annotations projected across alignments from Spanish.
Taken together, these results indicate that alignment from multilingual models can indeed be a valuable tool for low-resource languages, but traditional approaches continue to be a strong option and should still be considered for practical applications. ## 2 Related Work **Alignment** Word alignment is a long-studied task, with origins in the IBM models for statistical machine translation Brown et al. (1993), which are the basis of Giza++ Och and Ney (2003) and FastAlign Dyer et al. (2013). As these approaches can only generate one-to-many alignments, models are trained in both forward and reverse directions (reversing the role of source and target), and final alignments are created via symmetrization heuristics Och and Ney (2000); Koehn et al. (2005); other approaches explicitly symmetrize during training Matusov et al. (2004); Liang et al. (2006).1 While these models rely on only position and word identity information, subword information can be integrated without requiring costly inference Berg-Kirkpatrick et al. (2010), leading to better parameter estimation for rare words. Alignments can also be extracted from neural translation models Chen et al. (2020); Zenkel et al. (2020). Footnote 1: The poor estimation of rare words' translation parameters also motivates symmetrization; without this, rarely observed words become _garbage collector words_ Moore (2004). Word alignment also enables annotation projection Yarowsky and Ngai (2001); Yarowsky et al. (2001), which can offer strong performance, particularly for low-resource languages Buys and Botha (2016); Ortega and Pillaipakkamnatt (2018); Nicolai and Yarowsky (2019); Nicolai et al. (2020); Eskander et al. (2020). **Multilingual Transformer Models** Pretrained multilingual models Devlin et al. (2019); Conneau and Lample (2019); Conneau et al. (2020); Xue et al. (2021) have become the de facto standard approach for cross-lingual transfer.
In general, these models are an extension of their monolingual variants, created by including data from many languages in their pretraining. They rely on a subword vocabulary Kudo and Richardson (2018) which jointly spans all of the pretraining languages. Models are pretrained using a masked language modeling (MLM) objective and a translation language modeling (TLM; Conneau and Lample, 2019) objective that uses parallel sentences. Outside of continued pretraining Gururangan et al. (2020), models can be adapted using Adapters Pfeiffer et al. (2020) or through vocabulary adaptation Wang et al. (2020); Hong et al. (2021). Word alignment methods which depend on these models have also been proposed Jalili Sabet et al. (2020); Nagata et al. (2020); we focus on AWESoME align Dou and Neubig (2021) because it outperforms other unsupervised methods. ## 3 Experiment 1: Intrinsic Evaluation ### Experimental Setup **Languages** We focus on four Indigenous languages spoken in the Americas for our experiments. **Bribri (bzd)** is a tonal language in the Chibchan family spoken by approximately 7000 people in Costa Rica. **Guarani (gn)** is a polysynthetic language in the Tupi-Guarani family spoken by around 6 million people across South America. **Quechua (quy)** is a family of Indigenous languages--from which we study Quechua Chanka--spoken across the Peruvian Andes by over 6 million people, and **Shipibo-Konibo (shp)** is a language spoken by around 30,000 people in Peru, Bolivia, and Brazil Cardenas and Zeman (2018). The latter three languages are agglutinative. **Training Data** For training, we use the parallel data between Spanish and our languages described by Mager et al. (2021).2 We note that there is a distinct difference in the amount of unlabeled data available within the four languages: Guarani and Quechua have considerably more data available. These two languages also have monolingual text available in Wikipedia, which we extract using WikiExtractor Attardi (2015).
The exact number of parallel and monolingual sentences for all languages is shown in Table B.1. **Evaluation Data** To create gold-standard alignments for evaluation, we sample multi-way parallel examples from AmericasNLI Ebrahimi et al. (2022), allowing for multi-parallel alignments Xia and Yarowsky (2017) across all languages. Samples for the development and test sets are taken from their respective splits in the AmericasNLI dataset. Development examples were collected first, manually checked, and corrected. Examples with misalignments in punctuation, numbers, or named entities were not used. After a period of development with this data, the test set of 50 examples was collected and manually verified. Annotations were collected using JHU's open-source Turkle platform.3 We ask annotators to only mark _sure_ alignments. Additional discussion on data collection and the test set can be found in §6. Footnote 3: [https://github.com/hltcoe/turkle](https://github.com/hltcoe/turkle) **Metrics** We evaluate automatic alignments via alignment error rate (AER; Och and Ney, 2000). Because we only collect sure alignments, this is equivalent to the balanced F-measure Fraser and Marcu (2007). We give additional metrics in Table C.3. ### Models **Traditional Aligners** We use Giza++ Och and Ney (2003) and FastAlign Dyer et al. (2013) as our traditional aligners. Giza++ is based on IBM Models 1-5 Brown et al. (1993). FastAlign Dyer et al. (2013) is a re-parameterization of IBM Model 2. We use the implementation and hyperparameters of Zenkel et al. (2020), which relies on MGiza++ Gao and Vogel (2008) and the standard FastAlign package. Both approaches run on CPUs, and their training time ranges from 6 seconds to 3 minutes for FastAlign, and 43 seconds to 22 minutes for Giza++. We use the union of the forward and reverse alignments, as this symmetrization heuristic offers the best result for all languages on the development set. We show the performance of other heuristics in Table C.2.
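For reference, with sure-only annotations (so that the possible set equals the sure set) the AER of Och and Ney (2000) reduces to one minus the balanced F-measure, as noted above. A minimal sketch with hypothetical link sets (not data from this paper):

```python
def aer(hyp, sure, possible=None):
    # Alignment error rate (Och & Ney, 2000); links are (src_idx, tgt_idx) pairs.
    # With no separate 'possible' set, P = S and AER reduces to 1 - F1.
    possible = sure if possible is None else possible
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

def f_measure(hyp, gold):
    # Balanced F-measure over alignment links.
    tp = len(hyp & gold)
    precision, recall = tp / len(hyp), tp / len(gold)
    return 2.0 * precision * recall / (precision + recall)

gold = {(0, 0), (1, 2), (2, 1), (3, 3)}   # hypothetical sure links
pred = {(0, 0), (1, 2), (2, 2)}           # hypothetical system output
```

Here `aer(pred, gold)` evaluates to \(3/7\approx 0.43\), identical to `1 - f_measure(pred, gold)`.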
**Neural Aligners** AWESoME Dou and Neubig (2021) identifies alignment links by considering cosine similarities between hidden layer representations of tokens in a neural encoder. We consider two such encoders: mBERT Devlin et al. (2019) and XLM-R Liu et al. (2019), and we use the default AWESoME configuration to extract alignments. We give layer-by-layer alignment performance in Figure C.1. **Model Adaptation** We experiment with three adaptation schemes based on continued pretraining (+TLM, +MLM-T, and +MLM-ST) which rely on unlabeled data and further train the model using MLM Gururangan et al. (2020) before alignments are extracted. We focus on these objectives as they have been used by prior work for general model adaptation, and they work well in situations with limited resources Ebrahimi and Kann (2021). As we have access to bitext between Spanish and the target languages, for the +TLM scheme each example is the concatenation of a Spanish sentence with its translation. For +MLM-**T** we adapt using solely the target side of the available data, and for +MLM-**ST** we adapt on both the source and target; however, this data is treated as monolingual data and not explicitly aligned. +MLM-WT denotes target language adaptation which includes Wikipedia data. The duration of adaptation depends on the GPU and method used; it ranges from around 6 minutes for Bribri to 4 hours for Quechua. We provide additional training details in Appendix A. ### Results **Traditional vs. Neural Aligners** We present results in Table 1. The best traditional method is FastAlign, and the best neural approach is with mBERT+TLM. Comparing the two, we see that \begin{table} \begin{tabular}{l l c c c c} Model & Method & bzd & gn & quy & shp & avg.
\\ \hline AWESoME & BL & 70.03 & 63.13 & 67.02 & 60.41 & 65.15 \\ (mBERT) & +MLM-T & 68.95 & 49.68 & 46.59 & 58.17 & 55.85 \\ & +MLM-ST & 70.63 & 50.25 & 42.52 & 58.66 & 55.52 \\ & +TLM & 58.43 & **43.10** & **36.96** & **52.34** & **47.71** \\ AWESoME & BL & 80.15 & 73.11 & 75.24 & 69.21 & 74.43 \\ (XLM-R) & +MLM-T & 76.89 & 65.44 & 53.65 & 65.16 & 65.29 \\ & +MLM-ST & 77.53 & 64.55 & 52.90 & 66.56 & 65.39 \\ & +TLM & 74.90 & 58.84 & 43.25 & 63.48 & 60.12 \\ FastAlign & Union & **51.40** & 43.52 & 54.06 & 54.67 & 50.91 \\ Giza++ & Union & 55.61 & 49.92 & 66.01 & 60.84 & 58.10 \\ \hline mBERT & +MLM-WT & - & _40.00_ & 46.00 & - & 43.00 \\ XLM-R & +MLM-WT & - & 52.27 & 48.83 & - & 50.55 \\ \end{tabular} \end{table} Table 1: AER, in percentages, for each language and method. The best overall result for each language is bolded, while the best model within each method is underlined. We separate results which use Wikipedia, as they are not directly comparable. the lowest error rate is achieved with the neural approach for all languages except for Bribri, where FastAlign offers a 7.03% absolute improvement. Of the other three languages, the performance for two is close: the difference in performance for Guarani is only 0.42% and 2.33% for Shipibo-Konibo. For Quechua, +TLM improves over FastAlign by 17.10%.

**Comparing Adaptation Strategies.** With mBERT, +MLM-T improves performance over the non-adapted baseline by 9.30% on average, with +MLM-ST increasing this gain to 9.63% and +TLM offering the highest improvement of 17.44%, consistent with prior work on _seen_ languages Dou and Neubig (2021). Per language, the largest and smallest gains are for Quechua (30.06%) and for Shipibo-Konibo (8.07%); intuitively, gains from adaptation are proportional to the size of the adaptation data. For XLM-R, we again see relative gains from adaptation, with +TLM offering the highest performance increase.
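Similarity-based extraction of the kind AWESoME performs can be approximated with a simplified sketch; the bidirectional-softmax intersection heuristic and the threshold value here are illustrative, not the exact AWESoME defaults, and the function name is ours:

```python
import numpy as np

def extract_links(src_states, tgt_states, threshold=0.5):
    # src_states: (m, d) and tgt_states: (n, d) token representations
    s = src_states / np.linalg.norm(src_states, axis=1, keepdims=True)
    t = tgt_states / np.linalg.norm(tgt_states, axis=1, keepdims=True)
    sim = s @ t.T  # cosine similarity matrix

    def softmax(a, axis):
        e = np.exp(a - a.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    # keep a link only if it is probable in both directions
    forward = softmax(sim, axis=1)   # source -> target
    backward = softmax(sim, axis=0)  # target -> source
    keep = (forward > threshold) & (backward > threshold)
    return {(int(i), int(j)) for i, j in zip(*np.nonzero(keep))}
```

With identical, near-orthogonal token vectors on both sides, only the diagonal links survive the intersection.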
**Additional Monolingual Data.** Neural approaches can easily benefit from additional monolingual data. Adding Wikipedia data results in the highest performance for Guarani, outperforming the previous best approach by 3.1%. In contrast, while the additional data for Quechua does help relative to +MLM-T, it does not outperform +TLM. This difference in performance may be due to the relative sizes of the additional data; the Guarani Wikipedia has 1.3\(\times\) as many tokens as the target-side parallel data, while the Quechua Wikipedia only has 0.5\(\times\) as many.

## 4 Experiment 2: Extrinsic Evaluation

We further compare aligner performance extrinsically by evaluating downstream task performance when using a projected training set. We consider two tasks: NER and POS tagging.

### Experimental Setup

**Data.** Due to the limited availability and quality of evaluation datasets, we focus on Guarani for this experiment. We use the test set provided by Rahimi et al. (2019) for NER and Universal Dependencies Nivre et al. (2020) for POS. For experiments where we finetune directly on English or Spanish, we use the provided training data.

**Annotation Projection.** To create the projected training sets, we first annotate the (unlabeled) Spanish parallel data with Stanza Qi et al. (2020) and generate bidirectional alignments using each method. We then project the tags from Spanish to Guarani using type and token constraints as described by Buys and Botha (2016).

**Models.** For baseline performance, we finetune mBERT on the provided English and Spanish training sets for each task. Additionally, we also finetune adapted versions of mBERT on Spanish training data - English is omitted as performance is worse and adaptation data is in Spanish. Finally, we evaluate performance when finetuning mBERT on the training sets created through projection.

### Results

We present results for both tasks in Table 2.
**POS.** For POS tagging, the baseline zero-shot performance is extremely poor, and we see a minimum increase of 11.71% accuracy when using any projection method. Giza++ outperforms FastAlign, as well as projection with +MLM-T; however, the best performance is achieved with +MLM-ST, with +TLM offering the second best result. While the ordering of methods changes, the best performance is still achieved with the neural approaches, consistent with the results of Experiment 1.

**NER.** For NER, baseline performance is high: inspecting the data shows that many entities have English or Spanish names, and as multilingual models already have knowledge of these two languages, \begin{table} \begin{tabular}{l l r r} \hline \hline Model & Train Source & POS & NER \\ \hline mBERT & en & 10.36 & 46.64 \\ & es & 19.82 & 49.18 \\ \hline +TLM & es & 36.94 & 49.62 \\ +MLM-T & es & 34.69 & **55.25** \\ +MLM-ST & es & 33.78 & 52.34 \\ \hline mBERT & mBERT & 31.53 & 47.54 \\ & +MLM-T & 38.29 & 47.97 \\ & +MLM-ST & **42.34** & 49.80 \\ & +TLM & 40.99 & 49.80 \\ & FastAlign & 37.84 & 46.55 \\ & Giza++ & 39.19 & 48.33 \\ \hline \hline \end{tabular} \end{table} Table 2: POS tagging (accuracy) and NER results (F1) for Guarani. _Model_ denotes if baseline or adapted mBERT is used. _Train Source_ defines the training data used for finetuning; language codes indicate training on original data, while alignment methods denote how a projected training set was created. standard aligners with projection may not effectively leverage surface word-form clues. However, they remain a valuable indication of alignment quality. Among the projection-based approaches, we find that using Giza++ again outperforms +MLM-T and FastAlign but falls short of +MLM-ST and +TLM. Overall, considering what both downstream tasks indicate regarding alignment quality, neural models adapted using Spanish and target-language data--either sentence-aligned or unaligned--consistently outperform traditional methods.
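The projection step used to build these training sets can be illustrated with a simplified sketch; this ignores the type and token constraints of Buys and Botha (2016) and simply copies each tag along alignment links, breaking conflicts by keeping the first link (function name is ours):

```python
def project_tags(source_tags, links, target_length, default="O"):
    """Project token-level tags from source to target through alignment links.

    source_tags: list of tags for the source sentence.
    links: set of (source_index, target_index) alignment pairs.
    Unaligned target tokens keep the default tag.
    """
    projected = [default] * target_length
    for i, j in sorted(links):
        if projected[j] == default:  # first link wins on conflicts
            projected[j] = source_tags[i]
    return projected
```

For example, with source tags `["DET", "NOUN"]` and links `{(0, 1), (1, 0)}`, the projected target sequence is `["NOUN", "DET"]`.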
## 5 Analysis

As data for low-resource languages often varies considerably in both amount and length, we consider two additional analysis experiments which control for these factors. We focus solely on Quechua, as it has the most parallel data available. Results are presented in Figure 2, with numerical results in Tables C.4 and C.5.

**Subset Analysis.** For this analysis, we ask how the performance of neural alignment depends on the amount of data and with how much data it surpasses traditional approaches. We subsample the adaptation data, and use this to extract alignments using both FastAlign and AWESoME. Results for this experiment can be seen in Figure 1(a). For reference, we also plot the AER obtained when using FastAlign on all the available training data as an upper bound for the performance of the traditional approaches. At the smallest extreme, all methods are roughly equivalent. However, as the number of examples increases, adaptation using +TLM and +MLM-WT improves at a faster rate than other approaches: with only 6400 sentence pairs, these approaches overtake the best expected performance of FastAlign.

**Length Analysis.** Aligner performance may not only be affected by the total number of examples available, but also by the length of these examples. This is doubly relevant for low-resource languages, as resources may be limited to sources which do not contain long (or even complete) sentences. To see how the performance of each method may vary when faced with examples of different lengths, we sort the unlabeled data by the number of characters, and partition the examples into groups of 7508, the total number of examples available for Bribri. We choose this amount as it is representative of how much data may be available for other low-resource languages. As before, the expected upper bound FastAlign performance is denoted.
For the shortest group, all methods are similar; however, AWESoME alignments improve with longer sequences, with +TLM showing the quickest decrease in error rate. We attribute the improved AER when adapting using longer sequences to the increased number of tokens available for adaptation. For Quechua, the performance of AWESoME align is sensitive to both the number of examples and sequence length. In contrast, FastAlign only shows a small improvement as example length increases.

## 6 Conclusion

In this work, we have investigated the performance of modern word aligners versus classical approaches for languages _unseen_ to pretrained models. While classical methods remain competitive, the lowest AER on average is achieved by modern neural approaches. However, using these models comes with a larger computational cost. Therefore, the trade-off between training requirements and overall performance must be considered. If access to computing resources is limited or training time is a factor, classical approaches remain a viable option which should not be discounted.

Figure 2: Plots for data analysis. In Figure 1(b), a vertical line denotes the average example length for Bribri.

## Ethics and Limitations

### Ethics Statement

When collecting data in an Indigenous language, it becomes vital that the process does not exploit any member of the community or commodify the language Schwartz (2022). Further, it is important that members of the community benefit from the dataset. While the creation of a word alignment dataset will not directly impact community members, we believe that it can contribute to the development of tools, such as translation systems, that can be directly beneficial, and that increasing the visibility of these languages within the research community will further spur the creation of useful systems.
Our annotations were created by either co-authors of the paper or by native speakers of the languages, who were compensated at a rate chosen with the minimum hourly salary in their respective countries taken into account.

### Limitations

**Test Set Size.** One limitation of our work is the size of the evaluation set used for our main results. This arises from the general difficulty in collecting annotations and data for low-resource, and particularly Indigenous, languages. The size of the test set was chosen to balance the trade-off between the cost of annotation collection and experimental validity. Fortunately, for the task of word alignment the main metric used to summarize performance--alignment error rate--does not depend directly on the number of examples in the evaluation set, but on the total number of alignments, of which there is a sufficiently high number in our evaluation set. However, even when only considering the number of examples, our test set is still within the same order of magnitude as other widely used word alignment evaluation sets, such as the Romanian-English test set which consists of 248 examples Mihalcea and Pedersen (2003), and the English-Inuktitut and English-Hindi test sets which have 75 and 90 examples each, respectively Martin et al. (2005). We run a small experiment to gain insight into how much precision is lost when using a test set of size 50, versus 248, which we choose as this is the size of the widely used Romanian-English test set mentioned above. We take 100 independent samples without replacement from the Romanian-English test set, each of size 50, and evaluate the performance of FastAlign and AWESoME align. For FastAlign, we use the training data defined by Mihalcea and Pedersen (2003), and for AWESoME align, we use mBERT with no additional finetuning. The distributions of AER are shown in Figure A.1, with summary statistics in Table A.1. We can see that the standard deviation of both distributions is relatively low, around 2%.
At the extremes, we see differences of \(-4.70\%\) and \(+4.90\%\) for FastAlign, and of \(-4.28\%\) and \(+6.4\%\) for AWESoME align, between the min/max values of our distribution and the whole-set AER. Considering these points, we believe that the size of our evaluation set does not invalidate our experimental results and main conclusions; however, we note that additional care must be taken when comparing specific models whose performances are close together, particularly when this performance is low or close to random.

**Test Set Domain.** Other limitations of our work arise from the sources of data used. Annotations were done using sentences sampled from AmericasNLI, which itself is a translation of XNLI. As such, any errors from the original XNLI dataset, which may have propagated through translation, will persist in our dataset as well (annotators were given the option to modify target language sentences to correct any errors). Furthermore, due to translation, the sentences may not be directly representative of a natural utterance which would be spoken by members of the communities.

**Language Selection.** The languages we highlight in this work are true low-resource languages, and present challenges commonly faced by other low-resource languages. Namely, these languages have a relatively small amount of easily available and clean unlabeled data, are typically unseen by most released pretrained models, and are morphologically different from typically used source languages. However, one feature of these languages which may inflate aligner performance is the language script: all of our target languages share the same script with the two source languages which we use. This may lead to higher occurrences of shared words or entities, making alignment easier.
As such, our results may not generalize fully to other low-resource languages which have a different script from the source languages, or which may have a script which is unseen to the underlying pretrained model. ## Acknowledgements We would like to thank Roque Helmer Lunan-Montoya (Academia Mayor de la Lengua Quechua in Cuzco, Peru) and Richard Castro Mamani (Universidad Nacional de San Antonio Abad and Hinantin Software) for their help in annotating and verifying the Quechua-Spanish alignments. We would also like to thank Liz Karen Chavez Sanchez for annotating the Shipibo-Konibo-Spanish alignments. A.D.M. is supported by an Amazon Fellowship and a Frederick Jelinek Fellowship.
2306.04734
Machine-Learning Kronecker Coefficients
The Kronecker coefficients are the decomposition multiplicities of the tensor product of two irreducible representations of the symmetric group. Unlike the Littlewood--Richardson coefficients, which are the analogues for the general linear group, there is no known combinatorial description of the Kronecker coefficients, and it is an NP-hard problem to decide whether a given Kronecker coefficient is zero or not. In this paper, we show that standard machine-learning algorithms such as Nearest Neighbors, Convolutional Neural Networks and Gradient Boosting Decision Trees may be trained to predict whether a given Kronecker coefficient is zero or not. Our results show that a trained machine can efficiently perform this binary classification with high accuracy ($\approx 0.98$).
Kyu-Hwan Lee
2023-06-07T19:10:44Z
http://arxiv.org/abs/2306.04734v1
# Machine-learning Kronecker coefficients ###### Abstract. The Kronecker coefficients are the decomposition multiplicities of the tensor product of two irreducible representations of the symmetric group. Unlike the Littlewood-Richardson coefficients, which are the analogues for the general linear group, there is no known combinatorial description of the Kronecker coefficients, and it is an NP-hard problem to decide whether a given Kronecker coefficient is zero or not. In this paper, we show that standard machine-learning algorithms such as Nearest Neighbors, Convolutional Neural Networks and Gradient Boosting Decision Trees may be trained to predict whether a given Kronecker coefficient is zero or not. Our results show that a trained machine can efficiently perform this binary classification with high accuracy (\(\approx 0.98\)). ## 1. Introduction One of the major goals in representation theory is to decompose a representation into irreducible representations, and algebraic combinatorics plays a crucial role as it produces a practical way to describe such a decomposition. A prototype can be found in the decomposition of the tensor product of two irreducible representations of the general linear group \(\operatorname{GL}_{N}(\mathbb{C})\), where the Littlewood-Richardson rule completely describes the decomposition using skew semi-standard tableaux. It is therefore surprising that, to this day, we do not have similar success for the symmetric group. To be precise, let \(\mathfrak{S}_{n}\) be the symmetric group of degree \(n\) and consider representations of \(\mathfrak{S}_{n}\) over \(\mathbb{C}\). The irreducible representations \(S_{\lambda}\) of \(\mathfrak{S}_{n}\) are parametrized by partitions \(\lambda\vdash n\), and the dimension of \(S_{\lambda}\) is equal to the number of standard tableaux of shape \(\lambda\). Consider the tensor product of two irreducible representations \(S_{\lambda}\) and \(S_{\mu}\) for \(\lambda,\mu\vdash n\).
Then their tensor product is decomposed into a sum of irreducible representations: \[S_{\lambda}\otimes S_{\mu}=\bigoplus_{\nu\vdash n}g^{\nu}_{\lambda,\mu}S_{\nu }\quad(g^{\nu}_{\lambda,\mu}\in\mathbb{Z}_{\geq 0}).\] The decomposition multiplicities \(g^{\nu}_{\lambda,\mu}\) are called the _Kronecker coefficients_. In stark contrast with the Littlewood-Richardson coefficients for \(\operatorname{GL}_{N}(\mathbb{C})\), no combinatorial description has been known for \(g^{\nu}_{\lambda,\mu}\) since Murnaghan [Mur] initially posed the question in 1938, and it is still considered as one of the main problems in the combinatorial representation theory. For special cases, various combinatorial formulas are known due to Remmel [Rem], Ballantine-Orellana [BO], Remmel-Whitehead [RW], Blasiak-Mulmuley-Sohoni [BMS] and Blasiak [Bla]. Recently, the coefficients \(g^{\nu}_{\lambda,\mu}\) have also been studied from the viewpoint of computational complexity. In particular, it is shown by Ikenmeyer, Mulmuley and Walter [IMW] that deciding whether a given Kronecker coefficient is non-zero is NP-hard, and other interesting results are obtained by Pak and Panova [1, 2]. In this paper we apply machine-learning to the Kronecker coefficients \(g^{\nu}_{\lambda,\mu}\) to show that a machine can be trained to learn these coefficients. Though this seems to be the first time for machine-learning to be applied to the coefficients \(g^{\nu}_{\lambda,\mu}\), there has been mounting evidence that a machine can be trained to learn mathematical structures. Indeed, starting with superstring theory [1, 2, 3, 4, 5], machine-learning has been applied, for example, to group theory [1], graph theory [1, 2], metric geometry [1, 10], knot invariants [13], and to number theory [12, 13, 14]. Moreover, a new paradigm of AI-aided research in mathematics is emerging [4]. 
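Although no combinatorial rule is known, for small \(n\) the coefficients can be computed directly from the character formula \(g^{\nu}_{\lambda,\mu}=\frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_{n}}\chi_{\lambda}(\sigma)\chi_{\mu}(\sigma)\chi_{\nu}(\sigma)\), with characters evaluated by the Murnaghan-Nakayama rule. A naive, exponential-time sketch in Python (partitions given as weakly decreasing tuples; function names are ours):

```python
from functools import lru_cache
from math import factorial

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    if max_part is None:
        max_part = n
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

@lru_cache(maxsize=None)
def chi(betas, rho):
    """Murnaghan-Nakayama recursion on beta-numbers (first-column hooks)."""
    if not rho:
        return 1
    k, rest = rho[0], rho[1:]
    total = 0
    for b in betas:
        # removing a k-rim-hook <=> replacing beta-number b by b - k
        if b - k >= 0 and (b - k) not in betas:
            height = sum(1 for c in betas if b - k < c < b)
            total += (-1) ** height * chi(betas - {b} | {b - k}, rest)
    return total

def character(lam, rho):
    """chi_lam evaluated on the conjugacy class of cycle type rho."""
    m = len(lam)
    betas = frozenset(lam[i] + (m - 1 - i) for i in range(m))
    return chi(betas, tuple(rho))

def kronecker(lam, mu, nu):
    n = sum(lam)
    total = 0
    for rho in partitions(n):
        z = 1  # centralizer order z_rho = prod k^{m_k} m_k!
        for k in set(rho):
            mult = rho.count(k)
            z *= k ** mult * factorial(mult)
        total += (factorial(n) // z) * character(lam, rho) \
                 * character(mu, rho) * character(nu, rho)
    return total // factorial(n)
```

For instance, for \(n=2\) the sign representation tensored with itself is the trivial one, so `kronecker((1, 1), (1, 1), (2,))` is 1 while `kronecker((1, 1), (1, 1), (1, 1))` is 0; this brute force is only practical for small \(n\), which is precisely why the dataset below is generated with SageMath instead.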
As the main machine-learning methods for this paper, we utilize Nearest Neighbors, Convolutional Neural Networks and Gradient Boosting Decision Trees (specifically, LightGBM [1]) as they perform better than other standard machine-learning algorithms. For a broad introduction to machine-learning, see [1, 15]. Our datasets of the Kronecker coefficients \(g^{\nu}_{\lambda,\mu}\) are generated by SageMath[16]. The results of our experiments clearly show that a machine can be trained to distinguish between nonzero and zero Kronecker coefficients \(g^{\nu}_{\lambda,\mu}\) with high accuracy (\(\approx 98\%\)). We focus on \(\mathfrak{S}_{n}\) for \(12\leq n\leq 14\) since general features are clear enough from these cases, and the sizes of datasets are suitable for our purposes. (When \(n=14\), there are \(2,258,526\) data points in our dataset.) We hope that the results in this paper may invite researchers to further study Kronecker coefficients by taking machine-learning or AI-aided approaches. Specifically, attribute analysis would be the next step, which may produce some conjectures to describe Kronecker coefficients. This paper is organized as follows. In Section 2, the generation of training data and the experimental set-up are explained. In Section 3, we document the experimental outcomes and make some observations. In the final section, we offer some concluding remarks and directions for further research. ### Acknowledgments We are grateful to Greta Panova for helpful comments. ## 2. Methodology In this section we explain our experimental set-up. In particular, we present details about how to construct our datasets and overview the machine-learning strategies adopted. ### Generating Datasets We keep the notations in the introduction. Additionally, for a partition \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots)\) of \(n\), define \(d_{\lambda}:=n-\lambda_{1}\), called the _depth_ of \(\lambda\). 
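The depth just defined, together with the depth-based necessary condition of Theorem 2.1 below, can be sketched as follows (partitions as tuples of positive parts; function names are ours):

```python
def depth(lam):
    # d_lam = n - lam_1 for a partition lam of n
    return sum(lam) - lam[0]

def depth_condition(lam, mu, nu):
    # Necessary (not sufficient) condition for g^nu_{lam,mu} != 0:
    # |d_lam - d_mu| <= d_nu <= d_lam + d_mu
    d1, d2, d3 = depth(lam), depth(mu), depth(nu)
    return abs(d1 - d2) <= d3 <= d1 + d2
```

For example, \(\lambda=\mu=(3)\) and \(\nu=(1,1,1)\) fail the condition (\(0\leq 2\leq 0\) is false), so that triple is excluded from the dataset.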
The following theorem suggests a mild restriction for \((\lambda,\mu,\nu)\) to be considered. **Theorem 2.1**.: _[_1_, Theorem 2.9.22]_ _If \(g^{\nu}_{\lambda,\mu}\neq 0\) then_ \[|d_{\lambda}-d_{\mu}|\leq d_{\nu}\leq d_{\lambda}+d_{\mu}. \tag{2.1}\] The inequalities in (2.1) provide necessary conditions for \(g^{\nu}_{\lambda,\mu}\neq 0\) and the converse is not true. Actually, when \(n=12\), there are 456,533 triples \((\lambda,\mu,\nu)\) in total and 406,919 of them satisfy (2.1). Among them, 280,009 triples give \(g^{\nu}_{\lambda,\mu}\neq 0\) and the remaining 126,910 triples \(g^{\nu}_{\lambda,\mu}=0\). We will consider classification problems for datasets satisfying (2.1). There are symmetries among \(g^{\nu}_{\lambda,\mu}\) as stated in the lemma below, which will be explicitly taken into account when we make some datasets. **Lemma 2.2**.: _[_1_, p.61]_ _Let \(\lambda,\mu,\nu\vdash n\). Then the Kronecker coefficients \(g^{\nu}_{\lambda,\mu}\) are invariant under the permutations of \(\lambda,\mu,\nu\). That is, we have_ \[g^{\nu}_{\lambda,\mu}=g^{\nu}_{\mu,\lambda}=g^{\mu}_{\lambda,\nu}=g^{\mu}_{ \nu,\lambda}=g^{\lambda}_{\mu,\nu}.\] Now, for \(n\in\mathbb{Z}_{>0}\), let \(\mathcal{P}(n)\) be the set of partitions of \(n\). We identify each element \(\lambda\) of \(\mathcal{P}(n)\) with a sequence of length \(n\) by adding \(0\)-entries as many as needed. For example, when \(n=5\), we have \[\mathcal{P}(5)=\{(5,0,0,0,0),(4,1,0,0,0),(3,2,0,0,0),(3,1,1,0,0),\] \[(2,2,1,0,0),(2,1,1,1,0),(1,1,1,1,1)\}.\] Let \(\mathcal{Q}(n)\) be the set of triples \((\lambda,\mu,\nu)\in\mathcal{P}(n)^{3}\) that satisfies (2.1). For each triple \((\lambda,\mu,\nu)\in\mathcal{Q}(n)\), we form the following data points. 
(1) a \(1d\)-array of size \(1\times 3n\) given by concatenation: \[v_{1}(\lambda,\mu,\nu)=[\lambda,\mu,\nu].\] (2) a \(2d\)-array of size \(n\times 3\) given by \[v_{2}(\lambda,\mu,\nu)=[[\lambda],[\mu],[\nu]],\] where \([\lambda],[\mu],[\nu]\) are column vectors. (3) a \(3d\)-array of size \(6\times n\times 3\) consisting of the permutations of the triple: \[v_{3}(\lambda,\mu,\nu)=[[[\lambda],[\mu],[\nu]],[[\lambda],[\nu],[\mu]],[[\mu],[\lambda],[\nu]],[[\mu],[\nu],[\lambda]],[[\nu],[\lambda],[\mu]],[[\nu],[\mu], [\lambda]]]].\] The array \(v_{3}(\lambda,\mu,\nu)\) is defined to reflect Lemma 2.2 into the dataset. Define a class-valued function \(t(\lambda,\mu,\nu)\) on \(\mathcal{Q}(n)\) by \[t(\lambda,\mu,\nu)=\left\{\begin{array}{ll}0&\mbox{ if }g^{\nu}_{\lambda,\mu}=0, \\ 1&\mbox{ if }g^{\nu}_{\lambda,\mu}\neq 0.\end{array}\right.\] Finally, we have the following labeled datasets: \[\mathcal{D}^{(a)}_{n}=\{v_{a}(\lambda,\mu,\nu)\to t(\lambda,\mu,\nu)\ :\ ( \lambda,\mu,\nu)\in\mathcal{Q}(n)\}\quad\mbox{ for }a=1,2,3. \tag{2.2}\] ### Experimental strategy We perform our experiments following the steps explained below. 1. Choose \(n\). In this paper, we mainly consider \(12\leq n\leq 14\). When \(n=14\), we have \(\#(\mathcal{Q}(14))=2,258,526\), which makes the size of dataset reasonably big. 2. Determine an ML classifier. We mostly use Nearest Neighbors (NearN), Convolutional Neural Networks (CNNs) and LightGBM (LGBM). 3. Generate datasets of the form \(\mathcal{D}^{(a)}_{n}\) as in (2.2) using SageMath. According to the choice of an ML classifier, we use \(a=1\) for NearN and LGBM and \(a=2,3\) for CNNs. 4. Form a balanced dataset \(\mathcal{D}\) out of \(\mathcal{D}_{n}^{(a)}\) by sampling the same number of data points from each class \(t(\lambda,\mu,\nu)=i\) for \(i=0,1\). Choose a subset \(\mathcal{T}\subset\mathcal{D}\) and denote its complement by \(\mathcal{V}=\mathcal{D}-\mathcal{T}\). 
We will refer to \(\mathcal{T}\) as the training dataset, and \(\mathcal{V}\) as the validation dataset. We will typically choose \(70\%\) of data points for \(\mathcal{T}\) and \(30\%\) for \(\mathcal{V}\). 5. Train the classifier in step (2) on the set \(\mathcal{T}\). We implement the algorithms using Mathematica[Wolf], TensorFlow[TF] and LightGBM[LGBM]. 6. For all triples \((\lambda,\mu,\nu)\) in \(\mathcal{V}\), ask the classifier to determine \(t(\lambda,\mu,\nu)\), that is, whether \(g_{\lambda,\mu}^{\nu}=0\) or not. We record the precision and confusion matrix, which together constitute a good measure of performance of the classifier. 7. Repeat steps (4) to (6) for different choices of \(\mathcal{T}\). The precision values and confusion matrices recorded below are representative of several repetitions. 8. Repeat steps (1) to (7) for different choices of \(n,a\) and an ML classifier. ## 3. Results In this section we describe our experimental results for the binary classification: \(g_{\lambda,\mu}^{\nu}=0\) or \(\neq 0\). For \(n=12,13,14\), we use datasets \(\mathcal{D}_{n}^{(1)}\) to train NearN and LGBM classifiers. On the other hand, \(\mathcal{D}_{n}^{(2)}\) and \(\mathcal{D}_{n}^{(3)}\) are used for CNN classifiers to utilize the benefit of convolution layers. The results of CNN using \(\mathcal{D}_{n}^{(2)}\) will be denoted by CNN\({}_{2}\) and those using \(\mathcal{D}_{n}^{(3)}\) by CNN\({}_{3}\). The results are recorded in Tables 1 and 2. ### Specifics In the architecture of CNNs, we put one \(2d\)-convolution layer with \(32\) filters and one dense layer after flattening the outcome of the convolution layer. 
Though there are no significant differences in precisions for different kernel sizes of the convolution layers, the following kernel sizes perform slightly better than others: for CNN\({}_{2}\), kernel size \((n-4)\times 3\) with input size \(n\times 3\), and for CNN\({}_{3}\), kernel size \(2\times(n-4)\) with input size \(6\times n\times 3\), where \(n=12,13,14\). The total numbers of parameters are shown below: \[\text{CNN}_{2}\text{: }(96\times n-30),\qquad\text{CNN}_{3}\text{: }(192\times n+866).\] That is, \(1122,1218,1314\) for CNN\({}_{2}\) and \(3170,3362,3554\) for CNN\({}_{3}\) when \(n=12,13,14\), respectively. The outcomes for CNNs in Table 1 are from these kernel sizes. The hyper-parameters of LightGBM used in our experiments are below: 'metric': 'auc', 'boosting': 'gbdt', 'num_leaves': 63, 'feature_fraction': 0.5, 'bagging_fraction': 0.5, 'bagging_freq': 20, 'learning_rate': 0.01 ### Observations Let us make some observations from the results recorded in Tables 1 and 2. 1. The precisions of different ML classifiers can be compared as \[\text{NearN $<$ CNN}_{2}\text{ $<$ CNN}_{3}\text{ $<$ LGBM}.\] When \(n=14\), LGBM achieved precision \(0.9845\). 2. The classifiers generally perform better as \(n\) increases. However, the difference between \(n=13\) and \(n=14\) is much less, compared to that between \(n=12\) and \(n=13\). It seems that the precision is being stabilized as \(n\) grows. 3. In the confusion matrix one can see that class 1 (\(g^{\nu}_{\lambda,\mu}\neq 0\)) is more often misclassified than class 0 (\(g^{\nu}_{\lambda,\mu}=0\)). However, though rarely, the opposite may occur depending on samples. For example, see \(n=13\) of CNN\({}_{3}\) in Table 2. ## 4. Conclusions and Outlook The experimental results in this paper show that an ML classifier can be trained to distinguish nonzero Kronecker coefficients from zero ones with high accuracy. 
This provides somewhat convincing evidence that ML can be used to recognize structures in Kronecker coefficients, adding another example to new opportunities of developing ML techniques for studying mathematical structures. This paper focuses on the binary classification of \(g^{\nu}_{\lambda,\mu}=0\) or \(g^{\nu}_{\lambda,\mu}>0\). One can perform several variations of this experiment. For example, multi-class classification of \(g^{\nu}_{\lambda,\mu}=0,1,\dots,K\) or \(g^{\nu}_{\lambda,\mu}>K\) can be considered for some \(K\geq 1\). A different direction is to consider _induction_ and _reduction_ with respect to the inclusions \(\mathfrak{S}_{n}\subset\mathfrak{S}_{N}\) for \(n<N\). That is, we can train a machine with \(\mathfrak{S}_{N}\) and apply it to \(\mathfrak{S}_{n}\), and vice versa. Yet another direction is to investigate the datasets using dimensionality reduction techniques such as principal component analysis (PCA) or t-distributed stochastic neighborhood embedding (t-SNE) [MH]. With all these experimental results at hand, the next step would be to perform attribute analysis to understand what ML classifiers actually recognize in the datasets. This may lead to some conjectures on how to combinatorially determine whether a Kronecker coefficient is zero or not, or how to combinatorially describe Kronecker coefficients. Hopefully, it may be that ML can even provide some hints about how to prove the conjectures.
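As a starting point for the dimensionality-reduction direction just mentioned, a numpy-only PCA projection of the feature vectors \(v_{1}(\lambda,\mu,\nu)\) could look like the following sketch (t-SNE would require an additional library such as scikit-learn; the function name is ours):

```python
import numpy as np

def pca_project(X, k=2):
    """Project rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)                          # center the features
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # right singular vectors
    return Xc @ vt[:k].T                             # component scores
```

The resulting 2D scores, colored by the class \(t(\lambda,\mu,\nu)\), would show whether zero and nonzero coefficients separate along the leading directions of variance.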
2301.01833
Classical multivariate Hermite coordinate interpolation on n-dimensional grids
In this work, we study the Hermite interpolation on $n$-dimensional non-equally spaced, rectilinear grids over a field $\Bbbk $ of characteristic zero, given the values of the function at each point of the grid and the partial derivatives up to a maximum degree. First, we prove the uniqueness of the interpolating polynomial, and we further obtain a compact closed form that uses a single summation, irrespective of the dimensionality, which is algebraically simpler than the only alternative closed form for the $n$-dimensional classical Hermite interpolation [1]. We provide the remainder of the interpolation in integral form; we derive the ideal of the interpolation and express the interpolation remainder using only polynomial divisions, in the case of interpolating a polynomial function. Moreover, we prove the continuity of Hermite polynomials defined on adjacent $n$-dimensional grids, thus establishing spline behavior. Finally, we perform illustrative numerical examples to showcase the applicability and high accuracy of the proposed interpolant, in the simple case of few points, as well as hundreds of points on 3D-grids using a spline-like interpolation, which compares favorably to state-of-the-art spline interpolation methods.
Aristides I. Kechriniotis, Konstantinos K. Delibasis, Iro P. Oikonomou, Georgios N. Tsigaridas
2023-01-04T21:57:16Z
http://arxiv.org/abs/2301.01833v5
# Classical multivariate Hermite coordinate interpolation on n-dimensional grids ###### Abstract In this work, we study the Hermite interpolation on \(n\)-dimensional non-equally spaced, rectilinear grids over a field \(\mathbb{k}\) of characteristic zero, given the values of the function at each point of the grid and the partial derivatives up to a maximum degree. First, we prove the uniqueness of the interpolating polynomial, and we further obtain a compact closed form that uses a single summation, irrespective of the dimensionality. We provide the remainder of the interpolation in integral form; moreover, we derive the ideal of the interpolation and express the interpolation remainder using only polynomial divisions, in the case of interpolating a polynomial function. The arithmetic complexity of the derived closed formula compares favourably with the only alternative closed form for the \(n\)-dimensional classical Hermite interpolation [1]. Finally, we perform illustrative numerical examples to showcase the applicability and high accuracy of the proposed interpolant, compared to other interpolation methods. keywords: polynomial interpolation, multivariate Hermite, classical Hermite interpolation, n-dimensional grid PACS: 03.65.Pm, 03.50.De, 41.20.-q ## 1 Introduction and Notations This work focuses on multivariate classical Hermite interpolation with support points arranged on an n-dimensional non-equally spaced rectilinear grid (\(nD\) grid), given the value of a function, as well as its derivatives up to an arbitrary maximum order, defined independently for each point and each dimension. An old survey of Hermite interpolation methods [2] discusses the issues of uniform, regular and singular interpolation in \(n\) dimensions; however, no closed form or interpolation error is provided for the interpolant polynomial.
In addition, a number of works deal with bivariate Hermite-like interpolation, where the orders of partial derivatives along the two dimensions are not independent [3; 4], which is different from the classical Hermite interpolation. Let \(\mathbb{N}_{0}\) be the set of non-negative integers, and \(\mathbb{N}\) the set of positive integers. We first state the generalized univariate (1D) Hermite interpolation formula proposed by Spitzbart [5]: **Theorem 1.1**.: _Let \(A\) be a finite subset of \(\mathbb{R}\), and \(\nu:A\rightarrow\mathbb{N}\) the multiplicity function. Further, let \(V(A,\nu)\) be the \(\mathbb{R}\)-vector space \(\left\{p\in\mathbb{R}\left[x\right]:\text{deg}\ p<\sum_{a\in A}\nu\left(a\right) \right\}.\) Given the real numbers \(t_{a}^{k},a\in A,\ k\in\left\{0,1,...,\nu\left(a\right)-1\right\}\), there is a unique \(p\in V\left(A,\nu\right)\) such that \(p^{\left(k\right)}\left(a\right)=t_{a}^{k},\ a\in A\,,\ k\in\left\{0,1,..., \nu\left(a\right)-1\right\}\), given by_ \[p=\sum_{a\in A}\sum_{k=0}^{\nu\left(a\right)-1}t_{a}^{k}H_{a}^{k}, \tag{1.1}\] _where \(H_{a}^{k}\left(x\right)=H_{a}\left(x\right)\dfrac{\left(x-a\right)^{k}}{k!} \sum_{t=0}^{\nu\left(a\right)-k-1}\dfrac{\left(x-a\right)^{t}}{t!}\dfrac{d^{t }}{dx^{t}}\left(\dfrac{1}{H_{a}\left(x\right)}\right)\left(a\right)\), with \(H_{a}\left(x\right):=\prod_{\begin{subarray}{c}c\in A\\ c\neq a\end{subarray}}\left(\dfrac{x-c}{a-c}\right)^{\nu\left(c\right)}.\)_ In the above notation, \(a\) denotes any of the support points \(A\) (where the values of the unknown function, as well as the values of its derivatives, are given), and the multiplicity \(\nu(a)\) equals the maximum order of prescribed derivative at \(a\) plus 1. The above Theorem can be extended when \(A\) is a subset of any field \(\Bbbk\) of characteristic zero. 
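Theorem 1.1 can be exercised directly: the sketch below (helper names are our own, assuming `sympy` is available) builds the basis functions \(H_{a}^{k}\) symbolically and checks that the resulting interpolant reproduces a polynomial already lying in \(V(A,\nu)\), as uniqueness demands.

```python
import sympy as sp

x = sp.symbols('x')

def spitzbart_basis(A, nu, a, k):
    """Basis function H_a^k of Theorem 1.1 (Spitzbart)."""
    # H_a(x) = prod_{c in A, c != a} ((x - c)/(a - c))^{nu(c)}
    Ha = sp.prod([((x - c) / (a - c))**nu[c] for c in A if c != a])
    # truncated Taylor expansion of 1/H_a around a, up to order nu(a)-k-1
    s = sum((x - a)**t / sp.factorial(t) * sp.diff(1 / Ha, x, t).subs(x, a)
            for t in range(nu[a] - k))
    return sp.expand(Ha * (x - a)**k / sp.factorial(k) * s)

def hermite_interpolant(A, nu, t):
    """p = sum_a sum_k t[a][k] * H_a^k, Eq. (1.1)."""
    return sp.expand(sum(t[a][k] * spitzbart_basis(A, nu, a, k)
                         for a in A for k in range(nu[a])))

# Interpolate f(x) = x^4 with derivative orders < 2 at x=0 and < 3 at x=1.
f = x**4
A, nu = [0, 1], {0: 2, 1: 3}
t = {a: [sp.diff(f, x, k).subs(x, a) for k in range(nu[a])] for a in A}
p = hermite_interpolant(A, nu, t)
assert sp.expand(p - f) == 0
```

Since \(x^{4}\in V(A,\nu)\) here (five conditions, \(\deg p<5\)), uniqueness forces the interpolant to coincide with \(x^{4}\) exactly.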
In our previous work [6], we derived a new closed-form expression of the univariate Hermite interpolating polynomial for the general case of arbitrarily spaced data, which was algebraically significantly simpler than Theorem 1.1, since it only requires simple matrix multiplications, rather than high-order derivatives of the rational polynomial functions \(H_{a}^{k}(x)\) of Eq. (1.1). The 1D Hermite interpolant of Theorem 1.1 can be easily extended to multivariate interpolation. Namely, in [7] the two-dimensional (2D) interpolation is considered. In [8] we extended our previous work [6] into two dimensions (2D), deriving a closed-form expression that was again much more compact than those of [9; 7] and applicable in the case of support points arranged on a non-equidistant grid. For both the 1D and 2D cases, we provided means for computationally efficient implementations of the proposed Hermite interpolating polynomials that achieved computational complexity comparable to other popular and much simpler interpolation techniques, such as cubic splines, whereas the measured error when applied to clinical medical images was superior. For the generalization of Hermite interpolation into \(n\) dimensions, we require the following notations. Let \(A\) be a set, \(\left|A\right|\) the cardinality of \(A\), and \(A^{n}:=A\underset{n\text{-times}}{\times}A\). Given the sets \(A_{1},...,A_{n}\), then \(\mathbf{A}:=A_{1}\times...\times A_{n}\). Further, the element \(\left(a_{1},...,a_{n}\right)\in\mathbf{A}\) will be denoted by \(\mathbf{a}\). Let \(\mathbf{0}=\left(0,0,...,0\right),\ \mathbf{1}=\left(1,1,...,1\right)\) be the zero vector and the ones vector, respectively. Thus, the points \(\mathbf{a}=\left(a_{1},...,a_{n}\right)\) are arranged on a non-regular \(n\)-dimensional grid \(\mathbf{A}\). Let \(\Bbbk\) be a field of characteristic zero. For \(\mathbf{a}\in\Bbbk^{n}\), and \(\mathbf{m}\in\mathbb{N}_{0}^{n}\) we denote \(\mathbf{a}^{\mathbf{m}}:=\prod_{i=1}^{n}a_{i}^{m_{i}}\). 
Let \(\mathbf{k}=\left(k_{1},...,k_{n}\right)\) be an \(n\)-dimensional vector of non-negative integers, holding the orders of partial derivatives of the interpolating polynomial with respect to each variable. In \(\mathbb{N}_{0}^{n}\) we define the relation "\(\leq\)" as follows: \(\mathbf{k}\leq\mathbf{m}\) if and only if \(k_{i}\leq m_{i}\), for every \(i=1,...,n.\) Clearly \(\left(\mathbb{N}_{0}^{n},\leq\right)\) is a poset (\(\mathbb{N}_{0}^{n}\) is partially ordered). If \(\mathbf{k}\leq\mathbf{m}\), then \(\left[\mathbf{k},\mathbf{m}\right]:=\left\{\mathbf{l}\in\mathbb{N}_{0}^{n}: \mathbf{k}\leq\mathbf{l}\leq\mathbf{m}\right\}=\left[k_{1},m_{1}\right]\times \cdots\times\left[k_{n},m_{n}\right]\), and it holds that \(\left|\left[\mathbf{k},\mathbf{m}\right]\right|=\prod_{i=1}^{n}\left(m_{i}-k_{i}+1\right)\). Let the finite subsets \(A_{i}\), \(i=1,...,n\) of the field \(\Bbbk\) be given, together with the multiplicity functions \(\nu_{i}:A_{i}\rightarrow\mathbb{N},\ i=1,...,n.\) Then for \(i\in\left\{1,...,n\right\}\), and \(a\in A_{i}\) we define \[H_{\left(i,a\right)}\left(x_{i}\right):=\prod_{\begin{subarray}{c}c\in A_{i}\\ c\neq a\end{subarray}}\left(\dfrac{x_{i}-c}{a-c}\right)^{\nu_{i}\left(c\right)} \in\Bbbk\left[x_{i}\right]. \tag{1.2}\] Let \(\nu:\mathbf{A}\rightarrow\mathbb{N}^{n}\) be the generalized multiplicity function given by \(\nu\left(\mathbf{a}\right):=\left(\nu_{1}\left(a_{1}\right),...,\nu_{n} \left(a_{n}\right)\right)\). For \(\mathbf{a}\in\mathbf{A}\) and \(\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\) we define \[H_{\left(\mathbf{a},\mathbf{k}\right)}\left(x_{1},...,x_{n}\right):=\prod_{i=1 }^{n}\dfrac{\left(x_{i}-a_{i}\right)^{k_{i}}H_{\left(i,a_{i}\right)}\left(x_{i} \right)}{k_{i}!}\in\Bbbk\left[x_{1},...,x_{n}\right]. 
\tag{1.3}\] Let \(V\left(\mathbf{A},\nu\right)\) denote the \(\Bbbk\)-vector space \(\left\{f\in\Bbbk\left[x_{1},...,x_{n}\right]:\text{deg}_{i}\,f<\sum_{a\in A_{i} }\nu_{i}\,\left(a\right),i=1,...,n\right\},\) a basis of which is the following \[B\left(\mathbf{A},\nu\right):=\left\{\mathbf{x}^{\mathbf{k}}:\,\mathbf{k}\in\left[\mathbf{0},\left(\sum_{a\in A_{1}}\nu_{1}\left(a\right)-1,\ldots,\sum_{a\in A_{n}}\nu_{n}\left(a\right)-1\right)\right]\right\}.\] Further, let us define \(H_{a_{j}}\left(x_{j}\right):=\prod\limits_{\begin{subarray}{c}c\in A_{j}\\ c\neq a_{j}\end{subarray}}\left(\dfrac{x_{j}-c}{a_{j}-c}\right)^{\nu_{j}\left(c\right)}\), for \(a_{j}\in A_{j}\), as well as \[H_{a_{j}}^{k_{j}}\left(x_{j}\right):=H_{a_{j}}\left(x_{j}\right)\dfrac{\left( x_{j}-a_{j}\right)^{k_{j}}}{k_{j}!}\sum\limits_{t=0}^{\nu_{j}\left(a_{j}\right)-k_{j}-1 }\dfrac{\left(x_{j}-a_{j}\right)^{t}}{t!}\dfrac{d^{t}}{dx_{j}^{t}}\left(\dfrac {1}{H_{a_{j}}\left(x_{j}\right)}\right)\left(a_{j}\right)\in\Bbbk[x_{j}],\] and subsequently, \(H_{\mathbf{a}}^{\mathbf{k}}\left(x_{1},...,x_{n}\right):=\prod\limits_{j=1}^{n }H_{a_{j}}^{k_{j}}\left(x_{j}\right)\in\Bbbk\left[x_{1},...,x_{n}\right].\) Let us also define \(P(\mathbf{A},\nu)\) as follows \[P\left(\mathbf{A},\nu\right):=\left\{H_{\left(\mathbf{a},\mathbf{k}\right)}: \mathbf{a}\in\mathbf{A},\,\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\right\}.\] Finally, we define below the partial derivative operator acting on \(f\): \[\partial^{\mathbf{k}}:=\prod\limits_{i=1}^{n}\partial_{i}^{k_{i}},\ \partial_{\mathbf{a}}^{\mathbf{k}}f\left(\mathbf{x}\right):=\partial^{ \mathbf{k}}f\left(\mathbf{x}\right)\left|_{\mathbf{x}=\mathbf{a}}\right.,\text { where }\partial_{i}^{k}:=\dfrac{\partial^{k}}{\partial x_{i}^{k}}.\] To our knowledge, the only \(n\)-dimensional multivariate generalization of the 1D classical Hermite interpolation of Spitzbart [5] was proposed very recently in [1], and it is presented in the following Theorem: **Theorem 1.2**.: _Let 
\(t_{\mathbf{a}}^{\mathbf{k}}\in\mathbb{R}\,,\ \mathbf{a}\in\mathbf{A}:=A_{1}\times...\times A_{n},\ \mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right].\) Then there exists a unique \(p\in V\left(\mathbf{A},\nu\right)\) such that \(\partial_{\mathbf{a}}^{\mathbf{k}}p=t_{\mathbf{a}}^{\mathbf{k}},\ \mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\), given by_ \[p=\sum\limits_{\mathbf{a}\in\mathbf{A}}\sum\limits_{\mathbf{k}\in\left[ \mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]}t_{\mathbf{a}}^{ \mathbf{k}}H_{\mathbf{a}}^{\mathbf{k}}=\sum\limits_{a_{1}\in A_{1}}\cdots\sum \limits_{a_{n}\in A_{n}}\sum\limits_{k_{1}=0}^{\nu_{1}\left(a_{1}\right)-1} \cdots\sum\limits_{k_{n}=0}^{\nu_{n}\left(a_{n}\right)-1}\prod\limits_{j=1}^{n }H_{a_{j}}^{k_{j}}\left(x_{j}\right)t_{\mathbf{a}}^{\mathbf{k}}. \tag{1.4}\] However, the authors of that work did not provide the remainder of the interpolating polynomial. Furthermore, the algebraic complexity of the Hermite polynomial expressed in Theorem 1.2 increases rapidly with the number of dimensions \(n\), which can be confirmed by comparing equations (1.1) and (1.4). The present work deals with the Hermite interpolating polynomial on a regular, non-equidistant grid of support points in arbitrarily high dimensions (\(nD\) grids) and provides an elegant, compact closed-form expression with only one summation over the support points and without requiring derivatives of rational polynomial functions. We also provide the interpolation remainder, which, to our knowledge, has not been given so far. The article is organized as follows. Having defined some necessary notations, we provide in the next section the lemmas and remarks required for Theorem 2.2 about the uniqueness of the interpolating polynomial and Theorem 2.5 for the proposed closed formula of the interpolant. The remainder of the interpolating polynomial is also proven in section 3. 
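Formula (1.4) is a tensor product of univariate interpolants, which makes a small 2D check straightforward. A sketch under the same assumptions as before (`sympy`; helper names are ours): values and \(x\)-derivatives are prescribed at a \(2\times 2\) grid, and uniqueness is verified against a polynomial in \(V(\mathbf{A},\nu)\).

```python
import sympy as sp

x, y = sp.symbols('x y')

def basis_1d(var, A, nu, a, k):
    """1D Hermite basis function H_a^k in the variable `var`."""
    Ha = sp.prod([((var - c) / (a - c))**nu[c] for c in A if c != a])
    s = sum((var - a)**t / sp.factorial(t) * sp.diff(1 / Ha, var, t).subs(var, a)
            for t in range(nu[a] - k))
    return sp.expand(Ha * (var - a)**k / sp.factorial(k) * s)

# Grid A = {0,1} x {0,1}; values and d/dx at each point (nu1 = 2), values only in y (nu2 = 1).
A1, nu1 = [0, 1], {0: 2, 1: 2}
A2, nu2 = [0, 1], {0: 1, 1: 1}
f = x**3 * y          # lies in V(A, nu): deg_x f = 3 < 4, deg_y f = 1 < 2

# Eq. (1.4): p = sum over grid points and derivative orders of t * (H^k1 in x)(H^k2 in y)
p = sp.expand(sum(
    sp.diff(f, x, k1, y, k2).subs({x: a1, y: a2})
    * basis_1d(x, A1, nu1, a1, k1) * basis_1d(y, A2, nu2, a2, k2)
    for a1 in A1 for a2 in A2
    for k1 in range(nu1[a1]) for k2 in range(nu2[a2])))

assert sp.expand(p - f) == 0   # uniqueness (Theorem 1.2) forces p = f
```

The eight interpolation conditions match \(\dim V(\mathbf{A},\nu)=4\cdot 2\), so the interpolant is pinned down exactly.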
In section 4 the ideal of the interpolation is studied, deriving a reduced Gröbner basis of the ideal. This basis is utilized to express the remainder of the interpolation of polynomial functions using cascaded polynomial divisions. The notations introduced in this paper also facilitate a very short proof of Theorem 1.2, which is provided in section 5 for completeness. Finally, simple algebraic examples are given, showing that the bilinear and trilinear interpolation are special cases of the proposed n-dimensional Hermite interpolation. In addition, a few numerical implementations of the proposed interpolation of known functions are provided, demonstrating its superior accuracy against other popular interpolation techniques. ## 2 The proposed Hermite interpolation on \(n\)-\(D\) rectilinear grids **Remark 1**.: It is easy to verify that \[\left|\left\{\left(\mathbf{a},\mathbf{k}\right):\ \mathbf{a}\in\mathbf{A},\,\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\right\}\right|=\sum\limits_{\mathbf{a}\in\mathbf{A}}\sum\limits_{\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]}1=\sum\limits_{\mathbf{a}\in\mathbf{A}}\prod\limits_{i=1}^{n}\nu_{i}\left(a_{i}\right)=\prod\limits_{i=1}^{n}\sum\limits_{a\in A_{i}}\nu_{i}\left(a\right).\] **Remark 2**.: The cardinality of the set \(P(\mathbf{A},\nu)\) is given by \(\left|P\left(\mathbf{A},\nu\right)\right|=\left|\left\{H_{\left(\mathbf{a},\mathbf{k}\right)}:\ \mathbf{a}\in\mathbf{A},\,\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\right\}\right|,\) which can be simplified: \[\left|P\left(\mathbf{A},\nu\right)\right|=\sum_{\mathbf{a}\in\mathbf{A}}\prod_ {i=1}^{n}\nu_{i}\left(a_{i}\right).\] The above yields \(\left|P\left(\mathbf{A},\nu\right)\right|=\prod_{i=1}^{n}\sum_{a\in A_{i}} \nu_{i}\left(a\right)=\dim_{\Bbbk}V\left(\mathbf{A},\nu\right)\). 
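The counting identity behind Remarks 1 and 2, \(\sum_{\mathbf{a}\in\mathbf{A}}\prod_{i=1}^{n}\nu_{i}(a_{i})=\prod_{i=1}^{n}\sum_{a\in A_{i}}\nu_{i}(a)\), is the distributive law applied over the product grid; a quick numerical check (ours, not from the paper):

```python
from itertools import product
from math import prod

# multiplicity functions nu_i on three small node sets A_i (keys = nodes, values = nu_i)
nus = [{0: 2, 1: 3}, {0: 1, 2: 2}, {-1: 2, 1: 1, 3: 4}]

# left side: sum over all grid points a of prod_i nu_i(a_i)
lhs = sum(prod(nu[a] for nu, a in zip(nus, pt))
          for pt in product(*[list(nu) for nu in nus]))
# right side: product over axes of the per-axis multiplicity sums
rhs = prod(sum(nu.values()) for nu in nus)

assert lhs == rhs == 5 * 3 * 7   # = 105
```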
The set of partial derivative operators has equal cardinality: \[\left|\left\{\partial_{\mathbf{a}}^{\mathbf{m}}:\ \mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\right\}\right|=\prod_{i=1}^{n}\sum_{a\in A_{i}}\nu_{i}\left(a\right).\] **Remark 3**.: Since \(\deg_{i}H_{\left(\mathbf{a},\mathbf{k}\right)}=k_{i}+\sum_{a\in A_{i}\setminus\left\{a_{i}\right\}}\nu_{i}\left(a\right)<\sum_{a\in A_{i}}\nu_{i}\left(a\right)\), the following holds \[P\left(\mathbf{A},\nu\right)\subset V\left(\mathbf{A},\nu\right).\] **Remark 4**.: For \(\mathbf{a},\mathbf{b}\in\mathbf{A}\), and \(\mathbf{k},\mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\), by using the Leibniz derivative rule we easily get: \[\partial_{\mathbf{a}}^{\mathbf{k}}H_{\left(\mathbf{b},\mathbf{m}\right)}= \left\{\begin{array}{llll}0,&\text{if}&\mathbf{a}\neq\mathbf{b}\\ 0,&\text{if}&\mathbf{a}=\mathbf{b}\quad\text{and}\quad\mathbf{k}<\mathbf{m}\\ 0,&\text{if}&\mathbf{a}=\mathbf{b}\quad\text{and}\quad\mathbf{k},\mathbf{m} \quad\text{are incomparable}\\ 1,&\text{if}&\mathbf{a}=\mathbf{b}\quad\text{and}\quad\mathbf{m}=\mathbf{k} \end{array}\right.\.\] **Lemma 2.1**.: _Let \(\left(G,\leq\right)\) be a poset, and \(A\) a finite subset of \(G\). Then the system of linear equations_ \[\sum_{\beta\leq a}c_{a,\beta}x_{\beta}=d_{a},\quad a\in A, \tag{2.1}\] _has a unique solution, where \(c_{a,\beta}\in\Bbbk\) with \(c_{a,a}\neq 0\) for all \(a\in A\)._ Proof.: Let \(\left|A\right|=m\). Then we number the elements of \(A\) as follows: we choose an arbitrary minimal element of \(A\), which will be denoted by \(a_{1}\). Next we choose a minimal element of \(A-\left\{a_{1}\right\}\), denoted by \(a_{2}\), and so on. 
That is \(A=\left\{a_{1},\ldots,a_{m}\right\}.\) Therefore (2.1) can be rewritten as \[\sum_{i=1}^{j}\varepsilon_{i,j}c_{a_{j},a_{i}}x_{a_{i}}=d_{a_{j}},\quad j=1,\ldots,m,\] where \[\varepsilon_{i,j}:=\begin{cases}0,&\text{if}\ \ a_{j}<a_{i}\\ 0,&\text{if}\ \ a_{i},a_{j}\text{ are not comparable}\\ 1,&\text{if}\ \ a_{i}\leq a_{j}.\end{cases}\] Clearly, the matrix of the coefficients of the variables \(x_{a_{i}}\) is lower triangular, and because all its diagonal elements \(\varepsilon_{j,j}c_{a_{j},a_{j}}\) are non-zero, we conclude that (2.1) has exactly one solution. We now present the Theorem for the uniqueness of the multivariate Hermite interpolating polynomial. **Theorem 2.2**.: _Given the elements \(t_{\mathbf{a}}^{\mathbf{k}}\in\Bbbk\,,\ \mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\), there exists a unique \(f\in V\left(\mathbf{A},\nu\right)\) such that \(\partial_{\mathbf{a}}^{\mathbf{m}}f=t_{\mathbf{a}}^{\mathbf{m}},\ \mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\)._ Proof.: It is sufficient to show that there are unique elements \(x_{\mathbf{a}}^{\mathbf{m}}\in\Bbbk,\ \mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\), such that the polynomial \[f=\sum_{\mathbf{b}\in\mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu\left( \mathbf{b}\right)-\mathbf{1}\right]}x_{\mathbf{b}}^{\mathbf{k}}H_{\left(\mathbf{ b},\mathbf{k}\right)} \tag{2.2}\] satisfies the conditions \[\partial_{\mathbf{a}}^{\mathbf{m}}f=t_{\mathbf{a}}^{\mathbf{m}}.\] By applying the derivative operators \(\partial_{\mathbf{a}}^{\mathbf{m}}\), \(\mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\) to (2.2), we get the following system consisting of \(\sum_{\mathbf{a}\in\mathbf{A}}\prod_{i=1}^{n}\nu_{i}\left(a_{i}\right)\) linear equations with an equal number of variables 
\(x_{\mathbf{a}}^{\mathbf{k}}\), \(\mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\): \[\partial_{\mathbf{a}}^{\mathbf{m}}f=\sum_{\mathbf{b}\in\mathbf{A}}\sum_{ \mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{b}\right)-\mathbf{1}\right]}x_ {\mathbf{b}}^{\mathbf{k}}\partial_{\mathbf{a}}^{\mathbf{m}}H_{\left(\mathbf{b},\mathbf{k}\right)},\] or \[\sum_{\mathbf{b}\in\mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu\left( \mathbf{b}\right)-\mathbf{1}\right]}x_{\mathbf{b}}^{\mathbf{k}}\partial_{ \mathbf{a}}^{\mathbf{m}}H_{\left(\mathbf{b},\mathbf{k}\right)}=t_{\mathbf{a}}^{ \mathbf{m}},\] which by Remark 4 can be partitioned into the following linear equation systems \[\sum_{\mathbf{k}\in\left[\mathbf{0},\mathbf{m}\right]}x_{\mathbf{a}}^{ \mathbf{k}}\partial_{\mathbf{a}}^{\mathbf{m}}H_{\left(\mathbf{a},\mathbf{k}\right)}=t_ {\mathbf{a}}^{\mathbf{m}},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]. \tag{2.3}\] Clearly each one of the linear systems (2.3) has \(\left|\nu\left(\mathbf{a}\right)\right|=\prod_{i=1}^{n}\nu_{i}\left(a_{i}\right)\) variables \(x_{\mathbf{a}}^{\mathbf{k}}\), and consists of \(\left|\nu\left(\mathbf{a}\right)\right|\) equations. Since \(\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\) is a finite subset of the partially ordered \(\mathbb{Z}^{n}\), and because, by Remark 4, for each \(\mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\) it holds that \(\partial_{\mathbf{a}}^{\mathbf{m}}H_{\left(\mathbf{a},\mathbf{m}\right)}=1\), it follows by Lemma 2.1 that each of the systems (2.3) has exactly one solution. As a result of Theorem 2.2 we immediately obtain the following Corollaries, which will be used in Section 4. 
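Lemma 2.1 amounts to forward substitution after sorting \(A\) by any linear extension of the partial order, which is how the systems (2.3) are solved. A small numerical illustration (ours, assuming `numpy`), with multi-indices under the componentwise order:

```python
import numpy as np
from itertools import product

# Multi-indices [0,1] x [0,1] under the componentwise partial order
idx = list(product(range(2), range(2)))
# a linear extension: sort by total degree (any tie-break works)
idx.sort(key=lambda k: (sum(k), k))

leq = lambda b, a: all(bi <= ai for bi, ai in zip(b, a))

rng = np.random.default_rng(0)
m = len(idx)
# c_{a,b} nonzero only when b <= a, with nonzero diagonal c_{a,a}
C = np.array([[rng.uniform(1, 2) if leq(b, a) else 0.0 for b in idx] for a in idx])
np.fill_diagonal(C, 1.0)

# after sorting by a linear extension, the coefficient matrix is lower triangular ...
assert np.allclose(C, np.tril(C))
# ... so C x = d has exactly one solution, obtainable by forward substitution
d = rng.standard_normal(m)
x_sol = np.linalg.solve(C, d)
assert np.allclose(C @ x_sol, d)
```

Any pair \(b\leq a\), \(b\neq a\) has a strictly smaller total degree, so it necessarily precedes \(a\) in the chosen ordering; this is why the matrix is triangular regardless of the tie-break.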
**Corollary 2.3**.: _The set \(P\left(\mathbf{A},\nu\right)\) is a basis of the \(\Bbbk\)-vector space \(V\left(\mathbf{A},\nu\right).\)_ Proof.: By Theorem 2.2, every element of \(V\left(\mathbf{A},\nu\right)\) can be written in the form (2.2), i.e. \(P\left(\mathbf{A},\nu\right)\) spans \(V\left(\mathbf{A},\nu\right)\), and by Remark 2 it holds that \(\left|P\left(\mathbf{A},\nu\right)\right|=\dim_{\Bbbk}V\left(\mathbf{A},\nu\right)\). **Corollary 2.4**.: _If \(f\) is an element of \(V\left(\mathbf{A},\nu\right)\), such that \(\partial_{\mathbf{a}}^{\mathbf{m}}f=0\) for each \(\mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\), then \(f=0.\)_ We will derive an expression for the interpolating polynomial \(f\) in Theorem 2.2. First we will use the degree reverse lexicographic order, which will be denoted by \(\prec\). More specifically, \(\left(k_{1},\ldots,k_{n}\right)\prec\left(l_{1},\ldots,l_{n}\right)\) if either of the following holds: * \(k_{1}+\cdots+k_{n}<l_{1}+\cdots+l_{n}\), or * \(k_{1}+\cdots+k_{n}=l_{1}+\cdots+l_{n}\) and \(k_{i}>l_{i}\) for the largest \(i\) for which \(k_{i}\neq l_{i}\). For example, the reverse lexicographic order of the elements in \(\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\) is \[1_{\mathbf{a}}: = \left(0,\ldots,0\right)\prec 2_{\mathbf{a}}:=\left(0,\ldots,0,1\right) \prec 3_{\mathbf{a}}:=\left(0,\ldots,0,1,0\right)\prec\cdots\prec\left(n+1\right)_{\mathbf{a }}:=\left(1,0,\ldots,0\right)\] \[\prec \left(n+2\right)_{\mathbf{a}}:=\left(0,\ldots,0,2\right)\prec\cdots\prec\left( \left|\nu\left(\mathbf{a}\right)\right|\right)_{\mathbf{a}}:=\left(\nu_{1} \left(a_{1}\right)-1,...,\nu_{n}\left(a_{n}\right)-1\right).\] Note that from \(\mathbf{m}\leq\mathbf{n}\) follows \(\mathbf{m}\preceq\mathbf{n}\), and from \(\mathbf{m}<\mathbf{n}\) follows \(\mathbf{m}\prec\mathbf{n}\). That means \(\preceq\) is a linear extension of \(\leq\). The following Theorem provides the closed form of the interpolating polynomial. 
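The enumeration just described is mechanical; a short sketch (ours) of the degree reverse lexicographic comparison used as a sort key, together with a check of the linear-extension property:

```python
from functools import cmp_to_key
from itertools import product

def grevlex_cmp(k, l):
    """Return -1 if k precedes l in degree reverse lexicographic order."""
    if sum(k) != sum(l):
        return -1 if sum(k) < sum(l) else 1
    for i in reversed(range(len(k))):       # largest index where they differ
        if k[i] != l[i]:
            return -1 if k[i] > l[i] else 1
    return 0

# enumerate [0, nu(a)-1] for nu(a) = (2, 2), i.e. the box {0,1} x {0,1}
box = sorted(product(range(2), range(2)), key=cmp_to_key(grevlex_cmp))
assert box == [(0, 0), (0, 1), (1, 0), (1, 1)]

# grevlex is a linear extension of the componentwise order <=
leq = lambda k, l: all(ki <= li for ki, li in zip(k, l))
assert all(not leq(l, k) for i, k in enumerate(box) for l in box[i + 1:])
```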
**Theorem 2.5**.: _The interpolating polynomial \(f\) in (2.2) is given by_ \[f=\sum_{\mathbf{a}\in\mathbf{A}}H_{\mathbf{a}}\Lambda_{\mathbf{a}}^{-1}T_{\mathbf{a}}=\sum_{\mathbf{a}\in\mathbf{A}}\sum_{i=0}^{\left|\nu\left(\mathbf{a}\right)\right|-1}H_{\mathbf{a}}\left(I_{\left|\nu\left(\mathbf{a}\right)\right|}-\Lambda_{\mathbf{a}}\right)^{i}T_{\mathbf{a}}, \tag{2.4}\] _where_ \[\Lambda_{\mathbf{a}}=\begin{bmatrix}1&&&\\ \varepsilon_{1,2}\,\partial_{\mathbf{a}}^{2_{\mathbf{a}}}H_{\left(\mathbf{a},1_{\mathbf{a}}\right)}&1&&\\ \vdots&\vdots&\ddots&\\ \varepsilon_{1,\left|\nu\left(\mathbf{a}\right)\right|}\,\partial_{\mathbf{a}}^{\left(\left|\nu\left(\mathbf{a}\right)\right|\right)_{\mathbf{a}}}H_{\left(\mathbf{a},1_{\mathbf{a}}\right)}&\varepsilon_{2,\left|\nu\left(\mathbf{a}\right)\right|}\,\partial_{\mathbf{a}}^{\left(\left|\nu\left(\mathbf{a}\right)\right|\right)_{\mathbf{a}}}H_{\left(\mathbf{a},2_{\mathbf{a}}\right)}&\cdots&1\end{bmatrix}, \tag{2.5}\] _and_ \[x_{\mathbf{a}}=\begin{bmatrix}x_{\mathbf{a}}^{1_{\mathbf{a}}}\\ \vdots\\ x_{\mathbf{a}}^{\left(\left|\nu\left(\mathbf{a}\right)\right|\right)_{\mathbf{a}}}\end{bmatrix},\quad T_{\mathbf{a}}=\begin{bmatrix}t_{\mathbf{a}}^{1_{\mathbf{a}}}\\ \vdots\\ t_{\mathbf{a}}^{\left(\left|\nu\left(\mathbf{a}\right)\right|\right)_{\mathbf{a}}}\end{bmatrix},\quad H_{\mathbf{a}}=\begin{bmatrix}H_{\left(\mathbf{a},1_{\mathbf{a}}\right)}&\cdots&H_{\left(\mathbf{a},\left(\left|\nu\left(\mathbf{a}\right)\right|\right)_{\mathbf{a}}\right)}\end{bmatrix},\quad\left|\nu\left(\mathbf{a}\right)\right|=\prod_{i=1}^{n}\nu_{i}\left(a_{i}\right).\] Proof.: The interpolating polynomial \(f\) in (2.2) can be equivalently written as \[f=\sum_{\mathbf{a}\in\mathbf{A}}\sum_{i=1}^{\left|\nu\left(\mathbf{a}\right)\right|}x_{\mathbf{a}}^{i_{\mathbf{a}}}H_{\left(\mathbf{a},i_{\mathbf{a}}\right)}\] or \[f=\sum_{\mathbf{a}\in\mathbf{A}}H_{\mathbf{a}}x_{\mathbf{a}}, \tag{2.6}\] where \(x_{\mathbf{a}}=\left[x_{\mathbf{a}}^{i_{\mathbf{a}}}\right]\) is a column matrix of size \(\left|\nu\left(\mathbf{a}\right)\right|\times 1\), and \(H_{\mathbf{a}}=\left[H_{\left(\mathbf{a},i_{\mathbf{a}}\right)}\right]\) is a row matrix of size \(1\times\left|\nu\left(\mathbf{a}\right)\right|.\) Then the system of linear equations (2.3) becomes \[\sum_{i=1}^{j}\varepsilon_{i,j}\,\partial_{\mathbf{a}}^{j_{\mathbf{a}}}H_{\left(\mathbf{a},i_{\mathbf{a}}\right)}x_{\mathbf{a}}^{i_{\mathbf{a}}}=t_{\mathbf{a}}^{j_{\mathbf{a}}},\ j=1,...,\left|\nu\left(\mathbf{a}\right)\right|,\] where \[\varepsilon_{i,j}:=\begin{cases}0,&\text{ if }\ j_{\mathbf{a}}<i_{\mathbf{a}}\\ 0,&\text{ if }\ i_{\mathbf{a}},\ j_{\mathbf{a}}\text{ are not comparable}\\ 1,&\text{ if }\ i_{\mathbf{a}}\leq j_{\mathbf{a}}.\end{cases}\] Now we write this system of linear equations in matrix form: \[\Lambda_{\mathbf{a}}x_{\mathbf{a}}=T_{\mathbf{a}}, \tag{2.7}\] where \(\Lambda_{\mathbf{a}}=\left[\varepsilon_{i,j}\,\partial_{\mathbf{a}}^{j_{\mathbf{a}}}H_{\left(\mathbf{a},i_{\mathbf{a}}\right)}\right]_{j,i}\) is a lower unitriangular matrix of size 
\(\left|\nu\left(\mathbf{a}\right)\right|\times\left|\nu\left(\mathbf{a}\right)\right|\), and \(T_{\mathbf{a}}=\left[t_{\mathbf{a}}^{i_{\mathbf{a}}}\right]\) is a column matrix of size \(\left|\nu\left(\mathbf{a}\right)\right|\times 1\). From (2.7) we have \(x_{\mathbf{a}}=\Lambda_{\mathbf{a}}^{-1}T_{\mathbf{a}}\), or, since \(I_{\left|\nu\left(\mathbf{a}\right)\right|}-\Lambda_{\mathbf{a}}\) is strictly lower triangular and therefore nilpotent, \[x_{\mathbf{a}}=\sum_{i=0}^{\left|\nu\left(\mathbf{a}\right)\right|-1}\left(I_{ \left|\nu\left(\mathbf{a}\right)\right|}-\Lambda_{\mathbf{a}}\right)^{i}T_{ \mathbf{a}}, \tag{2.8}\] where \(I_{\left|\nu\left(\mathbf{a}\right)\right|}\) is the identity matrix of size \(\left|\nu\left(\mathbf{a}\right)\right|\). Finally, substituting (2.8) in (2.6) yields the required formula. ## 3 Remainder of the interpolation for \(\Bbbk=\mathbb{R}\) or \(\mathbb{C}\) In this section, we derive an expression for the error of the interpolation formula given in Theorem 2.5, which is identical to the error of Theorem 1.2, for the class of real functions \(f\left(x_{1},\ldots,x_{n}\right)\) which can be continued analytically as a single-valued, regular function \(f\left(z_{1},\ldots,z_{n}\right)\) of \(n\) complex variables in a certain cross-product region \(D_{z_{1}}\times\cdots\times D_{z_{n}}\). In the case of a single variable \(x\), let \(C\) be a closed contour in the region \(D_{z}\) of analytic continuation of a real function \(f(x)\) containing the points \(a\in A\) in its interior. 
Let us denote \[H\left(x\right)=\prod_{a\in A}\left(x-a\right)^{\nu\left(a\right)}.\] By applying the residue theorem to the contour integral \[\frac{1}{2\pi i}\int_{C}\frac{f\left(z\right)}{\left(z-x\right)H\left(z\right) }dz,\] we obtain (see [7]) \[f(x)=p\left(x\right)+\frac{H\left(x\right)}{2\pi i}\int_{C}\frac{f\left(z \right)}{\left(z-x\right)H\left(z\right)}dz, \tag{3.1}\] where \[p\left(x\right)=\sum_{a\in A}\sum_{k=0}^{\nu\left(a\right)-1}f^{\left(k\right) }\left(a\right)H_{a}^{k}.\] From (3.1), and since \(\frac{1}{2\pi i}\int_{C}\frac{f\left(z\right)}{z-x}dz=f\left(x\right)\), we have \[\frac{1}{2\pi i}\int_{C}\frac{f\left(z\right)}{z-x}dz-\frac{H\left(x\right)}{2\pi i}\int_{C}\frac{f\left(z\right)}{\left(z-x\right)H\left(z\right)}dz=p\left(x\right),\] or equivalently \[\frac{1}{2\pi i}\int_{C}\frac{H\left(z\right)-H\left(x\right)}{H\left(z\right) \left(z-x\right)}f\left(z\right)dz=p\left(x\right). \tag{3.2}\] In the following, we assume that \(C_{j},\;j=1,\ldots,n\) are simple closed contours in the regions \(D_{z_{j}}\) of analyticity of \(f\left(x_{1},\ldots,x_{j-1},z_{j},x_{j+1},\ldots,x_{n}\right)\), containing \(A_{j}\) in their interior, where \(x_{1},\ldots,x_{j-1},x_{j+1},\ldots,x_{n}\) are fixed. Further, we assume that \(f\left(z_{1},\ldots,z_{n}\right)\) is simultaneously analytic in \(D_{z_{1}}\times\cdots\times D_{z_{n}}\). Finally, we introduce the following notation: \[H_{r}(w_{r})=\prod_{a\in A_{r}}(w_{r}-a)^{\nu_{r}\left(a\right)},\;r=1,\ldots,n.\] **Lemma 3.1**.: _The following identity holds:_ \[\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{\prod_{r= 1}^{n}\left(H_{r}\left(z_{r}\right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r= 1}^{n}\left(z_{r}-x_{r}\right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n }\right)dz_{1}\ldots dz_{n}=\sum_{\mathbf{a}\in\mathbf{A}}\sum_{\mathbf{k}\in \left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]}H_{ \mathbf{a}}^{\mathbf{k}}\partial_{\mathbf{a}}^{\mathbf{k}}f. 
\tag{3.3}\] Proof.: We will prove (3.3) by induction with respect to \(n\): for \(n=1\), (3.3) reduces to (3.2). Let (3.3) hold for \(n-1\). Then \[\begin{split}&\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{\prod_{r=1}^{n}\left(H_{r}\left(z_{r}\right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r=1}^{n}\left(z_{r}-x_{r}\right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz_{n}\\ &=\frac{1}{2\pi i}\int_{C_{n}}\frac{H_{n}\left(z_{n}\right)-H_{n}\left(x_{n}\right)}{\left(z_{n}-x_{n}\right)H_{n}\left(z_{n}\right)}\left(\frac{1}{\left(2\pi i\right)^{n-1}}\int_{C_{1}}\cdots\int_{C_{n-1}}\frac{\prod_{r=1}^{n-1}\left(H_{r}\left(z_{r}\right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r=1}^{n-1}\left(z_{r}-x_{r}\right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz_{n-1}\right)dz_{n}\\ &=\frac{1}{2\pi i}\int_{C_{n}}\frac{H_{n}\left(z_{n}\right)-H_{n}\left(x_{n}\right)}{\left(z_{n}-x_{n}\right)H_{n}\left(z_{n}\right)}\sum_{a_{1}\in A_{1}}\cdots\sum_{a_{n-1}\in A_{n-1}}\sum_{k_{1}=0}^{\nu_{1}\left(a_{1}\right)-1}\cdots\sum_{k_{n-1}=0}^{\nu_{n-1}\left(a_{n-1}\right)-1}\prod_{j=1}^{n-1}H_{a_{j}}^{k_{j}}\left(x_{j}\right)\frac{\partial^{k_{1}+\cdots+k_{n-1}}}{\partial x_{1}^{k_{1}}\cdots\partial x_{n-1}^{k_{n-1}}}f\left(a_{1},\ldots,a_{n-1},z_{n}\right)dz_{n}\\ &=\sum_{a_{1}\in A_{1}}\cdots\sum_{a_{n-1}\in A_{n-1}}\sum_{k_{1}=0}^{\nu_{1}\left(a_{1}\right)-1}\cdots\sum_{k_{n-1}=0}^{\nu_{n-1}\left(a_{n-1}\right)-1}\prod_{j=1}^{n-1}H_{a_{j}}^{k_{j}}\left(x_{j}\right)\left(\frac{1}{2\pi i}\int_{C_{n}}\frac{H_{n}\left(z_{n}\right)-H_{n}\left(x_{n}\right)}{\left(z_{n}-x_{n}\right)H_{n}\left(z_{n}\right)}\frac{\partial^{k_{1}+\cdots+k_{n-1}}}{\partial x_{1}^{k_{1}}\cdots\partial x_{n-1}^{k_{n-1}}}f\left(a_{1},\ldots,a_{n-1},z_{n}\right)dz_{n}\right).\end{split}\] Thus, using (3.2) we obtain: \[\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}} 
\frac{\prod_{r=1}^{n}\left(H_{r}\left(z_{r}\right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r=1}^{n}\left(z_{r}-x_{r}\right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz_{n}\] \[=\sum_{a_{1}\in A_{1}}\cdots\sum_{a_{n}\in A_{n}}\sum_{k_{1}=0}^{\nu_{1}\left(a_{1}\right)-1}\cdots\sum_{k_{n}=0}^{\nu_{n}\left(a_{n}\right)-1}\prod_{j=1}^{n}H_{a_{j}}^{k_{j}}\left(x_{j}\right)\frac{\partial^{k_{1}+\cdots+k_{n}}}{\partial x_{1}^{k_{1}}\ldots\partial x_{n}^{k_{n}}}f\left(a_{1},\ldots,a_{n}\right)\] \[=\sum_{\mathbf{a}\in\mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]}H_{\mathbf{a}}^{\mathbf{k}}\partial_{\mathbf{a}}^{\mathbf{k}}f.\] The interpolation error is derived in the following Theorem. **Theorem 3.2**.: _The error of the interpolation formula of Theorem 2.5,_ \[R:=f-\sum_{\mathbf{a}\in\mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]}t_{\mathbf{a}}^{\mathbf{k}}H_{\mathbf{a}}^{\mathbf{k}},\] _where \(t_{\mathbf{a}}^{\mathbf{k}}=\partial_{\mathbf{a}}^{\mathbf{k}}f\), is given by_ \[R=\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{\prod_{r=1}^{n}H_{r}\left(z_{r}\right)-\prod_{r=1}^{n}\left(H_{r}\left(z_{r}\right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r=1}^{n}\left(z_{r}-x_{r}\right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz_{n}. 
\tag{3.4}\] Proof.: From (3.3) we have \[\begin{split}&\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{\prod_{r=1}^{n}H_{r}\left(z_{r}\right)-\prod_{r=1}^{n}\left(H_{r}\left(z_{r}\right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r=1}^{n}\left(z_{r}-x_{r}\right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz_{n}\\ &=\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{1}{\prod_{r=1}^{n}\left(z_{r}-x_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz_{n}-\sum_{\mathbf{a}\in\mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]}H_{\mathbf{a}}^{\mathbf{k}}\partial_{\mathbf{a}}^{\mathbf{k}}f.\end{split} \tag{3.5}\] By applying the residue theorem \(n\) times, we obtain \[\frac{1}{\left(2\pi i\right)^{n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{1}{\prod _{r=1}^{n}\left(z_{r}-x_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1} \ldots dz_{n}=f\left(x_{1},\ldots,x_{n}\right). \tag{3.6}\] From relations (3.5) and (3.6) we get (3.4). **Remark 5**.: Applying Theorem 3.2 for \(n=2\) we obtain the error formula for the bivariate Hermite polynomial, as given in [7, Section 5]. **Remark 6**.: Lemma 3.1 provides an alternative way of generating the Hermite polynomial that interpolates any function \(f\), provided that the integral can be calculated. **Remark 7**.: Lemma 3.1 and Theorem 3.2 hold also for complex functions of complex variables and for finite subsets \(A_{1},\ldots,A_{n}\) of \(\mathbb{C}\). **Remark 8**.: Let us denote by \(H\left(D_{z_{1}}\times\cdots\times D_{z_{n}}\right)\) the ring of all holomorphic functions on \(D_{z_{1}}\times\cdots\times D_{z_{n}}\) and by \(\left(H_{1},\ldots,H_{n}\right)\) the ideal generated by the polynomials \(H_{1},\ldots,H_{n}\). 
Consider the linear operator \(T_{\left(\mathbf{A},\nu\right)}:H\left(D_{z_{1}}\times\cdots\times D_{z_{n}}\right) \to V\left(\mathbf{A},\nu\right)\) given by \[T_{\left(\mathbf{A},\nu\right)}\left(f\right)=\frac{1}{\left(2\pi i\right)^{ n}}\int_{C_{1}}\cdots\int_{C_{n}}\frac{\prod_{r=1}^{n}\left(H_{r}\left(z_{r} \right)-H_{r}\left(x_{r}\right)\right)}{\prod_{r=1}^{n}\left(z_{r}-x_{r} \right)H_{r}\left(z_{r}\right)}f\left(z_{1},\ldots,z_{n}\right)dz_{1}\ldots dz _{n}.\] Clearly, by Lemma 3.1, \(T_{\left(\mathbf{A},\nu\right)}\) is an epimorphism and \[\ker T_{\left(\mathbf{A},\nu\right)}=\left\{g\in H\left(D_{z_{1}}\times\cdots \times D_{z_{n}}\right):\;\partial_{\mathbf{a}}^{\mathbf{m}}g=0,\;\mathbf{a} \in\mathbf{A},\;\mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)- \mathbf{1}\right]\right\}.\] By using the Leibniz derivative rule for multivariable functions, we have that for any \(f\in H\left(D_{z_{1}}\times\cdots\times D_{z_{n}}\right)\) and any \(g\in\ker T_{\left(\mathbf{A},\nu\right)}\) it holds that \(\partial_{\mathbf{a}}^{\mathbf{m}}\left(fg\right)=0\). Therefore \(\ker T_{\left(\mathbf{A},\nu\right)}\) is an ideal of \(H\left(D_{z_{1}}\times\cdots\times D_{z_{n}}\right)\). In other words, \(T_{\left(\mathbf{A},\nu\right)}\) is an ideal projector. Further, by the residue theorem we get \(T_{\left(\mathbf{A},\nu\right)}\left(H_{r}\right)=0\), \(r=1,\ldots,n\). Therefore \(\left(H_{1},\ldots,H_{n}\right)\subseteq\ker T_{\left(\mathbf{A},\nu\right)}\). In the next section the connection between the ideals \(\left(H_{1},\ldots,H_{n}\right)\) and \(\ker T_{\left(\mathbf{A},\nu\right)}\) will also be determined for any infinite field. ## 4 On the ideal of the interpolation Let \(A_{i},\;i=1,\ldots,n\), be finite subsets of an infinite field \(\Bbbk\), and \(\nu_{i}\left(a\right)\) be the multiplicity of \(a\in A_{i}\). 
Consider the polynomials \[H_{i}\left(x_{i}\right)=\prod_{a\in A_{i}}\left(x_{i}-a\right)^{\nu_{i}\left(a \right)}\in\Bbbk\left[x_{i}\right],\;i=1,\ldots,n.\] For \(g\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\), denote by \(\overline{g}\) the residue class of \(g\) in the quotient ring \(\Bbbk\left[x_{1},\ldots,x_{n}\right]/\left(H_{1},\ldots,H_{n}\right)\). It is easy to verify that the linear operator \(T_{\left(\mathbf{A},\nu;\Bbbk\right)}:\Bbbk\left[x_{1},\ldots,x_{n}\right] \to V\left(\mathbf{A},\nu\right)\), given by \[T_{\left(\mathbf{A},\nu;\Bbbk\right)}\left(f\right)=\sum_{\mathbf{a}\in \mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)- \mathbf{1}\right]}H_{\mathbf{a}}^{\mathbf{k}}\partial_{\mathbf{a}}^{\mathbf{k}}f,\] is a projector. Further, similarly to Remark 8, it can be shown that \(\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\) is an ideal of \(\Bbbk\left[x_{1},\ldots,x_{n}\right]\). So \(T_{\left(\mathbf{A},\nu;\Bbbk\right)}\) is an ideal projector. **Lemma 4.1**.: _The \(\Bbbk\)-algebra \(\Bbbk\left[x_{1},\ldots,x_{n}\right]/\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\) as a \(\Bbbk\)-vector space is isomorphic to \(V\left(\mathbf{A},\nu\right)\), and the following holds_ \[\dim_{\Bbbk}\Bbbk\left[x_{1},\ldots,x_{n}\right]/\ker T_{\left(\mathbf{A}, \nu;\Bbbk\right)}=\prod_{i=1}^{n}\sum_{a\in A_{i}}\nu_{i}\left(a\right).\] Proof.: Let \(\phi:\Bbbk\left[x_{1},\ldots,x_{n}\right]\rightarrow\Bbbk\left[x_{1},\ldots,x_{n}\right]/\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\) be the canonical map. Consider any basis \(B\) of the vector space \(V\left(\mathbf{A},\nu\right)\).
It is sufficient to show that \(\phi\left(B\right)\) is a basis of \(\Bbbk\left[x_{1},\ldots,x_{n}\right]/\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\): Starting from \(\sum_{b\in B}c_{b}\phi\left(b\right)=0\), \(c_{b}\in\Bbbk\), we have \(\sum_{b\in B}c_{b}b\in\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\), from which we obtain \[\partial_{\mathbf{a}}^{\mathbf{m}}\sum_{b\in B}c_{b}b=0,\ \mathbf{a}\in \mathbf{A}\text{ and }\mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1} \right].\] Further, because \(B\subset V\left(\mathbf{A},\nu\right)\), we have \(\sum_{b\in B}c_{b}b\in V\left(\mathbf{A},\nu\right).\) Consequently, by Corollary 2.4, \(\sum_{b\in B}c_{b}b=0\) holds. Therefore \(c_{b}=0\) for each \(b\in B\), which means \(\phi\left(B\right)\) is linearly independent. Now we will show that \(\phi\left(B\right)\) generates \(\Bbbk\left[x_{1},\ldots,x_{n}\right]/\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\): By Theorem 2.2, for any \(g\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) there is exactly one \(f=\sum_{b\in B}c_{b}b\in V\left(\mathbf{A},\nu\right)\) such that \(\partial_{\mathbf{a}}^{\mathbf{m}}f=\partial_{\mathbf{a}}^{\mathbf{m}}g\), \(\mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right].\) Consequently \(\phi\left(g\right)=\phi\left(f\right)\), which can be written as \[\phi\left(g\right)=\sum_{b\in B}c_{b}\phi\left(b\right).\] **Lemma 4.2**.: _Given the polynomials \(f_{i}\in\Bbbk\left[x_{i}\right],\ i=1,\ldots,n\), the quotient ring \(\Bbbk\left[x_{1},\ldots,x_{n}\right]/\left(f_{1},\ldots,f_{n}\right)\) as a \(\Bbbk\)-vector space is isomorphic to \(\left\{f\in\Bbbk\left[x_{1},\ldots,x_{n}\right]:\deg_{i}f<\deg f_{i},\,i=1, \ldots,n\right\}\), and_ \[\dim_{\Bbbk}\Bbbk\left[x_{1},\ldots,x_{n}\right]/\left(f_{1},\ldots,f_{n} \right)=\prod_{i=1}^{n}\deg f_{i}.\] Proof.: It is sufficient to prove that for any polynomial \(f\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) there exist unique
\(c_{\left(k_{1},\ldots,k_{n}\right)}\in\Bbbk,\ 0\leq k_{1}<\deg f_{1},\ldots,0\leq k_{n}<\deg f _{n}\), such that \[f=\sum_{k_{1}=0}^{\deg f_{1}-1}\cdots\sum_{k_{n}=0}^{\deg f_{n}-1}c_{\left(k_{ 1},\ldots,k_{n}\right)}x_{1}^{k_{1}}\ldots x_{n}^{k_{n}}+\left(f_{1},\ldots,f_{ n}\right).\] To this end, we apply induction on \(n\): For \(n=1\) the claim is obvious. Assume the claim holds for \(n-1\), \(n>1\). Dividing \(f\) by \(f_{n}\left(x_{n}\right)\), we have \(f=pf_{n}+q\), where \(p,q\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) are uniquely determined with \(\deg_{n}q<\deg f_{n}\). Therefore \[f=\sum_{i=0}^{\deg f_{n}-1}q_{i}x_{n}^{i}+\left(f_{n}\right), \tag{4.1}\] where the \(q_{i}\) are uniquely determined polynomials of \(\Bbbk\left[x_{1},\ldots,x_{n-1}\right]\), and \(\left(f_{n}\right)\) is the ideal of \(\Bbbk\left[x_{1},\ldots,x_{n}\right]\) generated by \(f_{n}\). Now, by induction, the polynomials \(q_{i}\) can be written as \[q_{i}=\sum_{0\leq k_{1}<\deg f_{1}}\cdots\sum_{0\leq k_{n-1}<\deg f_{n-1}}c_{ \left(k_{1},\ldots,k_{n-1}\right)}^{i}x_{1}^{k_{1}}\ldots x_{n-1}^{k_{n-1}}+ \left(f_{1},\ldots,f_{n-1}\right), \tag{4.2}\] where \(c_{\left(k_{1},\ldots,k_{n-1}\right)}^{i}\in\Bbbk\) are uniquely determined. By substitution of (4.2) in (4.1) we obtain the required expression.
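The normal-form representation behind Lemma 4.2 is easy to check computationally. The sketch below (Python with sympy; the generators and the test polynomial are arbitrary illustrative choices, not taken from the text) reduces a polynomial modulo univariate generators by two successive Euclidean divisions and confirms that the residue has the bounded degrees the lemma asserts:

```python
import sympy as sp

x, y = sp.symbols("x y")
f1 = x**3 - x          # generator in x, degree 3
f2 = y**2 - 1          # generator in y, degree 2
g = sp.expand(x**4 + x * y**3 + y**5 + 7)

# Euclidean division with respect to x, then with respect to y:
# the result represents g modulo the ideal (f1, f2).
r = sp.rem(g, f1, x)
r = sp.expand(sp.rem(r, f2, y))
```

Here `r` satisfies \(\deg_x r<\deg f_1\) and \(\deg_y r<\deg f_2\), and \(g-r\) lies in \((f_1,f_2)\), matching the unique representative of Lemma 4.2.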
**Theorem 4.3**.: _The following holds_ \[\left(H_{1},\ldots,H_{n}\right)=\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}.\] Proof.: Applying the Leibniz derivative rule, we conclude that for each \(g\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) the following holds: \(\partial_{\mathbf{a}}^{\mathbf{m}}\left(gH_{i}\right)=0,\ i=1,\ldots,n\), for each \(\mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1} \right].\) Therefore \(H_{i}\in\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)},\ i=1,\ldots,n\), or equivalently \[\left(H_{1},\ldots,H_{n}\right)\subseteq\ker T_{\left(\mathbf{A},\nu;\Bbbk \right)}.\] Now it is sufficient to show that \(\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\subseteq(H_{1},\ldots,H_{n})\): Let \(g\) be any element of \(\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\). By Lemma 4.2 there is a unique polynomial \(p\in\{f\in\Bbbk\left[x_{1},\ldots,x_{n}\right]:\deg_{i}f<\deg H_{i},\,i=1,\ldots,n\}=V\left(\mathbf{A},\nu\right)\) such that \[g=p+(H_{1},\ldots,H_{n}). \tag{4.3}\] From (4.3) we obtain that for any \(\mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\) and any \(\mathbf{a}\in\mathbf{A}\) \[\partial_{\mathbf{a}}^{\mathbf{m}}g=\partial_{\mathbf{a}}^{\mathbf{m}}p,\] which implies \(\partial_{\mathbf{a}}^{\mathbf{m}}p=0\). Therefore, Corollary 2.4 yields that \(p=0\), which combined with (4.3) gives \(g\in\left(H_{1},\ldots,H_{n}\right).\) **Remark 9**.: _By Theorem 4.3 it follows that \(\left\{H_{1}\left(x_{1}\right),\ldots,H_{n}\left(x_{n}\right)\right\}\) is a reduced Gröbner basis of \(\ker T_{\left(\mathbf{A},\nu;\Bbbk\right)}\)._ Given the polynomials \(f_{i}\in\Bbbk\left[x_{i}\right],\ i=1,\ldots,n\), let \(A_{i}\) be the sets of their roots in the algebraic closure \(\overline{\Bbbk}\) of \(\Bbbk\), and \(\nu_{i}\left(a\right)\) the multiplicity of \(a\in A_{i}\); then \(f_{i}\left(x_{i}\right)=\prod_{a\in A_{i}}\left(x_{i}-a\right)^{\nu_{i}\left(a \right)},\ i=1,\ldots,n\).
Let \(\overline{I}\) denote the ideal of \(\overline{\Bbbk}\left[x_{1},\ldots,x_{n}\right]\) generated by \(f_{1},\ldots,f_{n}\). For \(g\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\), let us denote by \(\overline{g}\) the residue class of \(g\) in \(\overline{\Bbbk}\left[x_{1},\ldots,x_{n}\right]/\overline{I}\). **Corollary 4.4**.: _The ideal \(\left(f_{1},\ldots,f_{n}\right)\) of \(\Bbbk\left[x_{1},\ldots,x_{n}\right]\) is equal to \(\ker T_{\left(\mathbf{A},\nu;\overline{\Bbbk}\right)}\cap\Bbbk\left[x_{1}, \ldots,x_{n}\right].\)_ Proof.: By Theorem 4.3 we get \(\left(f_{1},\ldots,f_{n}\right)\subseteq\ker T_{\left(\mathbf{A},\nu; \overline{\Bbbk}\right)}\cap\Bbbk\left[x_{1},\ldots,x_{n}\right]\). Therefore, it is sufficient to prove that \(\ker T_{\left(\mathbf{A},\nu;\overline{\Bbbk}\right)}\cap\Bbbk\left[x_{1}, \ldots,x_{n}\right]\subseteq\left(f_{1},\ldots,f_{n}\right)\): Let \(f\) be any element of \(\ker T_{\left(\mathbf{A},\nu;\overline{\Bbbk}\right)}\cap\Bbbk\left[x_{1},\ldots,x_{n}\right].\) Then by Theorem 4.3 we get \[f=\sum_{i=1}^{n}f_{i}g_{i}, \tag{4.4}\] where \(g_{i}\in\overline{\Bbbk}\left[x_{1},\ldots,x_{n}\right]\). Consider the finite field extension \(\Bbbk\subset\Bbbk_{1}\), where \(\Bbbk_{1}\) is the field constructed by adjoining all coefficients of the polynomials \(g_{i}\). Let \(r\) be the degree of \(\Bbbk\subset\Bbbk_{1}\) and \(\rho\) a primitive element. Then (4.4) can be written as \[f=\sum_{j=0}^{r-1}\rho^{j}\sum_{i=1}^{n}f_{i}g_{ij}, \tag{4.5}\] where \(g_{ij}\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\). Finally, since \(\left\{1,\rho,\ldots,\rho^{r-1}\right\}\) is \(\Bbbk\)-linearly independent, from (4.5) we obtain \[f=\sum_{i=1}^{n}f_{i}g_{i0}.\] That is, \(f\in\left(f_{1},\ldots,f_{n}\right)\).
**Remark 10**.: _Corollary 4.4 can also be formulated as: \(f\in\left(f_{1},\ldots,f_{n}\right)\subset\Bbbk\left[x_{1},\ldots,x_{n}\right]\), if and only if \(\partial_{\mathbf{a}}^{\mathbf{m}}f=0\), for each \(\mathbf{a}\in\mathbf{A}\) and \(\mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\)._ Let \(A_{i},\ i=1,\ldots,n\), be finite subsets of an infinite field \(\Bbbk\), \(\nu_{i}\left(a\right)\) the multiplicity of \(a\in A_{i}\), and consider the polynomials \(H_{i}\left(x_{i}\right)=\prod_{a\in A_{i}}\left(x_{i}-a\right)^{\nu_{i}\left(a\right)}\in\Bbbk\left[x_{i}\right],\ i=1,\ldots,n.\) Given any polynomial \(g\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\), we apply Euclidean polynomial division \(n\) times. Assuming that we start with the 1st dimension, there are unique polynomials \(q_{1},r_{1}\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) with \(\deg_{1}r_{1}<\deg H_{1}\), such that \(g=H_{1}q_{1}+r_{1}\). Continuing with the 2nd dimension (\(i=2\)), there are unique polynomials \(q_{2},r_{2}\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) with \(\deg_{i}r_{2}<\deg H_{i}\), \(i=1,2\), such that \(r_{1}=H_{2}q_{2}+r_{2}.\) Continuing in a similar manner until the last dimension, \(n\), we obtain unique polynomials \(q_{n},r_{n}\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\) with \(\deg_{i}r_{n}<\deg H_{i}\), \(i=1,2,\ldots,n\), such that \(r_{n-1}=H_{n}q_{n}+r_{n}.\) Combining the above \(n\) polynomial divisions we get \[g=r_{n}+\sum_{i=1}^{n}H_{i}q_{i}. \tag{4.6}\] It is easy to verify that for \(n\) dimensions there are \(n!\) different orders of application of the polynomial divisions, all of which lead to the same result \(r_{n}\).
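The order-independence of the cascaded divisions can be checked directly. A small sketch (Python with sympy; the grid \(\{0,1\}^{2}\) with multiplicity \(2\) and the polynomial \(g\) are illustrative choices) divides first in \(x\) then in \(y\), then in the opposite order, and compares the remainders:

```python
import sympy as sp

x, y = sp.symbols("x y")
H1 = sp.expand((x * (x - 1))**2)   # H_1 for A_1 = {0,1}, nu_1 = 2
H2 = sp.expand((y * (y - 1))**2)   # H_2 for A_2 = {0,1}, nu_2 = 2
g = sp.expand(x**5 * y + x * y**6 + 3 * x**2 * y**2 + 1)

def cascaded_remainder(poly, steps):
    # Successive Euclidean division, one variable at a time.
    r = poly
    for H, v in steps:
        r = sp.rem(r, H, v)
    return sp.expand(r)

r_xy = cascaded_remainder(g, [(H1, x), (H2, y)])  # divide in x, then in y
r_yx = cascaded_remainder(g, [(H2, y), (H1, x)])  # opposite order
```

Both orders yield the same \(r_{n}\), with \(\deg_{x}r_{n}<4\) and \(\deg_{y}r_{n}<4\), and \(g-r_{n}\) has vanishing derivatives up to order one at every grid point.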
**Proposition 4.5**.: _Given any polynomial \(g\in\Bbbk\left[x_{1},\ldots,x_{n}\right]\), the interpolating polynomial of the interpolation as presented in Theorem 2.2 with \(t_{\mathbf{a}}^{\mathbf{m}}:=\partial_{\mathbf{a}}^{\mathbf{m}}g,\ \mathbf{a}\in\mathbf{A},\ \mathbf{m}\in \left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\) is equal to \(r_{n}\), and the remainder is equal to_ \[\sum_{i=1}^{n}H_{i}q_{i}.\] Proof.: From \(\deg_{i}r_{n}<\deg H_{i}\), \(i=1,2,\ldots,n\), we have that \(r_{n}\in V\left(\mathbf{A},\nu\right),\) and from (4.6), by Remark 10, we obtain \(\partial_{\mathbf{a}}^{\mathbf{m}}g=\partial_{\mathbf{a}}^{\mathbf{m}}r_{n}, \ \mathbf{a}\in\mathbf{A},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a} \right)-\mathbf{1}\right]\). These arguments, in combination with (4.6), lead us immediately to the claims of the current proposition. Proposition 4.5 can be used to express the remainder of the Hermite interpolation in closed form using the cascaded polynomial divisions in the case of interpolating a polynomial function. In the general case of any differentiable function to be interpolated (other than a polynomial), this error calculation is applicable after expanding the function using Taylor's theorem. The following examples demonstrate error calculation for a univariate and a 3-D polynomial function. **Example 4.6**.: Let \(g=(x-2)^{11}+x^{4}+9\). Consider the grid \(A=\{0.7,1.2,1.7,2.2\}\), with multiplicity function \(\nu(a)=2\), for every point on the grid \(a\in A\). We derive the polynomial \(H_{1}(x)=(x-0.7)^{2}(x-1.2)^{2}(x-1.7)^{2}(x-2.2)^{2}\). Applying Eq. (4.6), we obtain the quotient \(q\) and the remainder \(r\) of the polynomial division \(\frac{g}{H_{1}}\).
According to Eq. (4.6), \(r\) is the Hermite polynomial with \(\deg r=\sum_{a\in A}\nu(a)-1=7\), given by \[r=\frac{9273}{178}x^{7}-593x^{6}+\frac{31499}{11}x^{5}-\frac{106481}{14}x^{4} +\frac{108109}{9}x^{3}-\frac{349850}{31}x^{2}+\frac{40932}{7}x-\frac{6409}{5}.\] The error of the Hermite interpolation is easily obtained by \[R = H_{1}q_{1}=x^{11}-22x^{10}+220x^{9}-1320x^{8}+\frac{52279}{10}x^ {7}-14191x^{6}+\frac{293749}{11}x^{5}-\frac{173166}{5}x^{4}+\frac{272051}{9}x^ {3}-\frac{33749}{2}x^{2}+\frac{37916}{7}x-\frac{3786}{5}.\] It can be observed that the Hermite polynomial \(r(x)\) is unique among the polynomials of degree up to 7 that satisfy the requirements \(g(a_{i})=r(a_{i})\) and \(\frac{dg}{dx}(a_{i})=\frac{dr}{dx}(a_{i})\) for each supporting point \(a_{i}\). The interpolated function \(g(x)\) and the resulting Hermite polynomial are depicted in Figure 1 (a). The interpolation error is plotted in the semi-logarithmic graph of Figure 1 (b). The error remains very low in \([a_{1},a_{4}]\) and approaches 0 very fast in the vicinity of the \(a_{i}\). **Example 4.7**.: _Let us consider \(g=(x-2y)^{7}+(x+z^{2})^{4}\) and the grid \(\mathbf{A}=\{-1,0,1\}\times\{-1,0,1\}\times\{-1,0,1\}\), with multiplicity function \(\nu(\mathbf{a})=(2,2,2)\), for every point on the grid \(\mathbf{a}\in\mathbf{A}\). We derive the polynomials \(H_{1}(x)=(x-1)^{2}x^{2}(x+1)^{2}\), \(H_{2}(y)=(y-1)^{2}y^{2}(y+1)^{2}\) and \(H_{3}(z)=(z-1)^{2}z^{2}(z+1)^{2}\). Applying Eq. (4.6), we obtain the quotient \(q_{1}\) and the remainder \(r_{1}\) of the polynomial division \(\frac{g}{H_{1}}\). Subsequently, we obtain the quotient \(q_{2}\) and the remainder \(r_{2}\) of the polynomial division \(\frac{r_{1}}{H_{2}}\) and, finally, the quotient \(q_{3}\) and the remainder \(r_{3}\) of the polynomial division \(\frac{r_{2}}{H_{3}}\)._
According to Eq. (4.6), \(r_{3}\) is the Hermite polynomial given by_ \[r_{3}=84x^{5}y^{2}+2x^{5}-280x^{4}y^{3}-28x^{4}y+560x^{3}y^{4}-672x^{2}y^{5}+ 14x^{2}y+3x^{2}z^{2}+896xy^{4}-448xy^{2}+3xz^{4}-256y^{5}+128y^{3}+2z^{4}-z^{2}\] _The remainder \(R=H_{1}q_{1}+H_{2}q_{2}+H_{3}q_{3}\) is calculated as:_ \[R=z^{2}(z^{2}-1)^{2}+x^{2}(x-14y)(x-1)^{2}(x+1)^{2}+y^{2}(448x-128y)(y-1)^{2}( y+1)^{2}.\] ## 5 An alternative quick proof of Theorem 1.2 In this section we provide a quick proof of Theorem 1.2, as an alternative to the proof given in [1]. In addition to the already defined \(H_{a}^{k}(x)\), we define the following \[H_{\mathbf{a}}^{\mathbf{k}}\left(x_{1},\ldots,x_{n}\right):=\prod_{i=1}^{n}H_{a_{ i}}^{k_{i}}\left(x_{i}\right)\in\Bbbk\left[x_{1},\ldots,x_{n}\right],\text{ and }\] \[Q\left(\mathbf{A},\nu\right):=\left\{H_{\mathbf{a}}^{\mathbf{k}}:\mathbf{a} \in\mathbf{A},\mathbf{k}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)- \mathbf{1}\right]\right\}.\] First, we state the following remarks and lemma. **Remark 11**.: From \(\deg_{i}H_{\mathbf{a}}^{\mathbf{k}}=k_{i}+\sum_{a\in A_{i}\setminus\{a_{i}\}}\nu_{i}\left(a\right)<\sum_{a\in A_{i}}\nu_{i}\left(a\right)\) it follows that \(Q\left(\mathbf{A},\nu\right)\subset V\left(\mathbf{A},\nu\right)\), and \(\left|Q\left(\mathbf{A},\nu\right)\right|=\prod_{i=1}^{n}\sum_{a\in A_{i}}\nu_{i}\left(a\right)=\dim_{\Bbbk}V\left(\mathbf{A},\nu\right).\) **Remark 12**.: For any \(a,\ b\in A\), and \(k,m\in\left\{0,\ldots,\nu\left(a\right)-1\right\}\) we have (see [5]) \[\partial^{m}H_{a}^{k}\left(b\right)=\left\{\begin{array}{ll}0,&\text{if}\quad a \neq b\\ \delta_{km}&\text{if}\quad a=b\end{array}\right.,\] where \(\delta_{km}\) is the Kronecker Delta.
From this we immediately obtain that for any \(\mathbf{a},\mathbf{b}\in\mathbf{A}\), and any \(\mathbf{k},\ \mathbf{m}\in\left[\mathbf{0},\nu\left(\mathbf{a}\right)-\mathbf{1}\right]\), \[\partial^{\mathbf{m}}H_{\mathbf{a}}^{\mathbf{k}}\left(\mathbf{b}\right)= \left\{\begin{array}{ll}0,&\text{if}\quad\mathbf{a}\neq\mathbf{b}\\ \delta_{\mathbf{km}}&\text{if}\quad\mathbf{a}=\mathbf{b}\end{array}\right..\] **Lemma 5.1**.: \(Q\left(\mathbf{A},\nu\right)\) _is a basis of \(V\left(\mathbf{A},\nu\right)\)._ Figure 1: (a) The interpolated function and Hermite polynomial of Example 4.6, along with the support points (b) Semi-logarithmic plot of the remainder of interpolation. Proof.: By Remark 11 it is sufficient to show that \(Q\left(\mathbf{A},\nu\right)\) is linearly independent: From \(\sum_{\mathbf{b}\in\mathbf{A}}\sum_{\mathbf{m}\in\left[\mathbf{0},\nu(\mathbf{b})-\mathbf{1}\right]}c_{\mathbf{b}}^{\mathbf{m}}H_{\mathbf{b}}^{\mathbf{m}}=0\), \(c_{\mathbf{b}}^{\mathbf{m}}\in\Bbbk\), it follows, for each \(\mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},\nu(\mathbf{a})-\mathbf{1}\right]\), that \(\sum_{\mathbf{b}\in\mathbf{A}}\sum_{\mathbf{m}\in\left[\mathbf{0},\nu(\mathbf{b})-\mathbf{1}\right]}c_{\mathbf{b}}^{\mathbf{m}}\partial_{\mathbf{a}}^{\mathbf{k}}H_{\mathbf{b}}^{\mathbf{m}}=0\), from which by Remark 12 we conclude \(c_{\mathbf{a}}^{\mathbf{k}}=0\).
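The Kronecker-delta conditions of Remark 12 determine the univariate polynomials \(H_{a}^{k}\) uniquely; their closed form (from [5]) is not reproduced in this section, but as a sanity check one can recover the same polynomials by solving the defining linear system directly. A sketch (Python with sympy; the grid \(A=\{0,1\}\) and multiplicity \(2\) are illustrative choices):

```python
import sympy as sp

x = sp.symbols("x")
A = [0, 1]   # illustrative grid
nu = 2       # common multiplicity: values and first derivatives are matched
conds = [(a, m) for a in A for m in range(nu)]
N = len(conds)

basis = {}
for (a, k) in conds:
    # H_a^k is the degree < N polynomial whose m-th derivative at b
    # equals 1 exactly when (b, m) == (a, k), and 0 otherwise.
    coeffs = sp.symbols(f"c0:{N}")
    p = sum(c * x**i for i, c in enumerate(coeffs))
    eqs = []
    for (b, m) in conds:
        q = p
        for _ in range(m):
            q = sp.diff(q, x)
        eqs.append(q.subs(x, b) - (1 if (b, m) == (a, k) else 0))
    sol = sp.solve(eqs, list(coeffs))
    basis[(a, k)] = sp.expand(p.subs(sol))
```

For this grid the four polynomials are the classical cubic Hermite basis functions, and any data \(t_{a}^{k}\) is then interpolated by \(\sum t_{a}^{k}H_{a}^{k}\), in the spirit of the tensor-product formula that follows.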
**Theorem 5.2**.: _Given \(t_{\mathbf{a}}^{\mathbf{k}}\in\Bbbk,\ \mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},\nu( \mathbf{a})-\mathbf{1}\right]\), there is a unique \(p\in V\left(\mathbf{A},\nu\right)\) such that \(\partial_{\mathbf{a}}^{\mathbf{k}}p=t_{\mathbf{a}}^{\mathbf{k}},\ \mathbf{a}\in\mathbf{A},\)\(\mathbf{k}\in\left[\mathbf{0},\nu(\mathbf{a})-\mathbf{1}\right]\), and \(p\) is given by_ \[p=\sum_{\mathbf{a}\in\mathbf{A}}\sum_{\mathbf{k}\in\left[\mathbf{0},\nu( \mathbf{a})-\mathbf{1}\right]}t_{\mathbf{a}}^{\mathbf{k}}H_{\mathbf{a}}^{ \mathbf{k}}=\sum_{a_{1}\in A_{1}}\cdots\sum_{a_{n}\in A_{n}}\sum_{k_{1}=0}^{ \nu_{1}(a_{1})-1}\cdots\sum_{k_{n}=0}^{\nu_{n}(a_{n})-1}t_{\mathbf{a}}^{\mathbf{k}}\prod_{j=1}^{n}H_{a_{j}}^{k_{j}}\left(x_{j}\right).\] Proof.: By Remark 12, for each \(\mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},\nu(\mathbf{a})-\mathbf{1}\right]\) we have \[\partial_{\mathbf{a}}^{\mathbf{k}}p=\sum_{\mathbf{b}\in\mathbf{A}}\sum_{ \mathbf{m}\in\left[\mathbf{0},\nu(\mathbf{b})-\mathbf{1}\right]}t_{\mathbf{b} }^{\mathbf{m}}\partial_{\mathbf{a}}^{\mathbf{k}}H_{\mathbf{b}}^{\mathbf{m}}=t _{\mathbf{a}}^{\mathbf{k}}.\] The uniqueness of \(p\) follows by Lemma 5.1. ## 6 Computational Complexity **Remark 13**.: Regarding the computational burden introduced by the calculation of the formula (2.4): Let \(n\) be the number of dimensions. Consider the grid \(\mathbf{A}=A_{1}\times\cdots\times A_{n}\), and \(N_{1}=\left|A_{1}\right|,\ldots,N_{n}=\left|A_{n}\right|\) the corresponding cardinalities. Let \(k=\left|\nu(\mathbf{a})\right|\), where \(\nu(\mathbf{a})\) is the multiplicity function, and let \(N\) be the number of support points, with \(N=N_{1}\cdot\ldots\cdot N_{n}\).
The computational complexity of \(H_{(i,a)}\left(x_{i}\right)\) in (1.2), for all \(i\in\left\{1,\ldots,n\right\}\) and \(a\in A_{i}\), is \[\left(\left(N_{1}-1\right)+\cdots+\left(N_{n}-1\right)\right)\cdot k\cdot c_{1}=(N_{1 }+\cdots+N_{n}-n)\cdot k\cdot c_{1}, \tag{6.1}\] where \(c_{1}\) is a constant factor. The computational complexity of \(H_{(\mathbf{a},\mathbf{k})}\) in (1.3), for every \(\mathbf{a}\in\mathbf{A}\) and \(\mathbf{k}\in\left[\mathbf{0},\nu(\mathbf{a})-\mathbf{1}\right]\), is \[N\cdot n\cdot k\cdot c_{2}, \tag{6.2}\] where \(c_{2}\) is a constant factor. The computational complexity introduced by the calculation of the inverses of the matrices \(\Lambda_{\mathbf{a}}\) in (2.4), for every \(\mathbf{a}\in\mathbf{A}\), is \[\left(k^{3}+k\cdot n\cdot\left(k^{2}-k\right)\right)\cdot N\cdot c_{3}, \tag{6.3}\] where \(c_{3}\) is a constant factor. The matrix multiplications done in (2.4) have computational complexity \[N\cdot k^{2}\cdot c_{4}, \tag{6.4}\] where \(c_{4}\) is a constant factor. Combining (6.1), (6.2), (6.3), and (6.4), the resulting complexity is \[\left(N_{1}+\cdots+N_{n}-n\right)\cdot k\cdot c_{1}+N\cdot n\cdot k\cdot c_{2}+ \left(k^{3}+k\cdot n\cdot\left(k^{2}-k\right)\right)\cdot N\cdot c_{3}+N\cdot k ^{2}\cdot c_{4}=O(k^{3}\cdot n\cdot N). \tag{6.5}\] Regarding the computational complexity resulting from the closed-form expression in [1, Theorem 4.2]: The computational burden of calculating all \(H_{a_{j}}(x_{j})\) and \(G_{a_{j}}^{k_{j}}\) is \[\left(N_{1}+\cdots+N_{n}-n\right)\cdot k+N\cdot k^{2}\cdot c_{5},\] and the resulting complexity in order to compute (1.4) is \[N_{1}+\cdots+N_{n}-n+N^{2}\cdot k^{n+2}\cdot n\cdot c_{6}=O(N^{2}\cdot k^{n+2} \cdot n), \tag{6.6}\] where \(c_{5},c_{6}\) are constant factors.
It becomes obvious that the computational complexity of the Hermite interpolant in the form of [1, Theorem 4.2] is not polynomial with respect to \(k\), whereas the complexity of the proposed Hermite interpolant is of polynomial time with respect to \(k\). ## 7 Hermite splines interpolation approach In this section, we present the \(n\)-dimensional Hermite interpolation via splines, which results in a less computationally intensive implementation. As can be observed from Eq. (1.2) and (1.3), the maximum degree of each of the univariate Hermite polynomials for each dimension is given by the following formula: \[\max_{a\in A_{i}}\min_{b\in A_{i}}\{v_{i}(a)-v_{i}(b)\}-1+\sum_{a\in A_{i}}v_{i} (a). \tag{7.1}\] The maximum total degree of the polynomial is: \[\sum_{i=1}^{n}\left(\max_{a\in A_{i}}\min_{b\in A_{i}}\{v_{i}(a)-v_{i}(b)\}-1+ \sum_{a\in A_{i}}v_{i}(a)\right) \tag{7.2}\] Thus, even for small numbers of support points on each axis and small values of the multiplicity function \(v_{i}\), the maximum total degree is of considerable value. For example, if \(|A_{1}|\times|A_{2}|\times|A_{3}|=7\times 7\times 7\), and \(\mathbf{v}=(v_{1}(a),v_{2}(a),v_{3}(a))=(3,3,3)\), for every point on the grid \(a\in\mathbf{A}\), the resulting degree of the polynomials for each dimension is \(7\cdot 3+0-1=20\), and the total degree is \(60\). Such high-degree polynomials are numerically unstable, since very small variations of the high-order coefficients may cause significant errors in the value of the polynomial. For this reason, the piecewise calculation of the Hermite polynomial using only the closest support points for each position \(\mathbf{x}\) is preferred.
More specifically, if the number of local support points is \(|B_{i}|<|A_{i}|\) for each dimension \(i\), then for any point \(\mathbf{q}\in[\min\{A_{1}\},\max\{A_{1}\}]\times[\min\{A_{2}\},\max\{A_{2}\}] \times\ldots\times[\min\{A_{n}\},\max\{A_{n}\}]\), the local support points along the \(i^{\text{th}}\) dimension are selected as follows \[B_{i}:=\begin{cases}\left\{[q_{i}]-\frac{|B_{i}|-1}{2},\ldots,[q_{i}]+\frac{|B_{i}|-1}{2}\right\}&,\text{ if }|B_{i}|\bmod 2=1\\ \left\{\lfloor q_{i}\rfloor-\frac{|B_{i}|}{2}+1,\ldots,\lfloor q_{i}\rfloor+\frac{|B_{i}|}{2}\right\}&,\text{ otherwise}\end{cases},\;i=1,\ldots,n, \tag{7.3}\] where \(\lfloor\cdot\rfloor\) denotes rounding down to the nearest integer (floor) and \([\cdot]\) denotes rounding to the nearest integer (round). Figs. 2 and 3 visualize the support points for a random point \(\mathbf{q}\), denoted by the green circle, with \(|B_{1}|=|B_{2}|=3\) in 2D and \(|B_{1}|=|B_{2}|=|B_{3}|=3\) in 3D, respectively, according to Eq. (7.3). Figure 2: Visualization of an example of spline local support points around a random point (green circle) in 2D, with \(|B_{1}|=|B_{2}|=3\). ## 8 Examples **Example 8.1**.: Consider the grid \(\mathbf{A}=A_{1}\times A_{2}\), where \(A_{1}=\{0,1\},\ A_{2}=\{0,1\}\), with multiplicity function \(v(a)=(v_{1}(a),\ v_{2}(a))=(2,2)\), for every point on the grid, \(a\in\mathbf{A}\).
Given the values of the interpolating polynomial \(f\), \(t_{\mathbf{a}}^{\mathbf{k}},\ \mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},v( \mathbf{a})-\mathbf{1}\right]\), namely \(t_{(0,0)}^{(0,0)},\ t_{(0,1)}^{(0,0)},\ t_{(1,0)}^{(0,0)},\ t_{(1,1)}^{(0,0)}\) at each one of the support points, as well as the values of the partial derivatives up to the 1st degree: \(t_{(0,0)}^{(0,1)},\ t_{(0,1)}^{(0,1)},\ t_{(1,0)}^{(0,1)},\ t_{(1,1)}^{(0,1)},\ t_{(0,0)}^{(1,0)},\ t_{(0,1)}^{(1,0)},\ t_{(1,0)}^{(1,0)},\ t_{(1,1)}^{(1,0)},\ t_{(0,0)}^{(1,1)},\ t_{(0,1)}^{(1,1)},\ t_{(1,0)}^{(1,1)},\ t_{(1,1)}^{(1,1)}\), the following conditions must hold: \[\partial_{\mathbf{a}}^{\mathbf{k}}f=t_{\mathbf{a}}^{\mathbf{k}},\ \mathbf{a}\in\mathbf{A},\ \mathbf{k}\in\left[\mathbf{0},v(\mathbf{a})-\mathbf{1}\right].\] We compute the \(H_{(i,a)}\left(x_{i}\right)\) quantities given by (1.2) for \(i\in\{1,2\}\) and \(a\in A_{i}\): \[H_{(1,0)}=x_{1}^{2}-2x_{1}+1,\quad H_{(2,0)}=x_{2}^{2}-2x_{2}+1\] \[H_{(1,1)}=x_{1}^{2},\quad H_{(2,1)}=x_{2}^{2}\] Furthermore, we compute the \(H_{(\mathbf{a},\mathbf{k})}\) quantities given by (1.3) for every \(\mathbf{a}\in\mathbf{A}\) and \(\mathbf{k}\in\left[\mathbf{0},v(\mathbf{a})-\mathbf{1}\right]\). The numbering of the elements of \(\left[\mathbf{0},v(\mathbf{a})-\mathbf{1}\right]\) is given by the degree reverse lexicographic order.
More specifically, \[1_{\mathbf{a}}=(0,0)\prec 2_{\mathbf{a}}=(0,1)\prec 3_{\mathbf{a}}=(1,0) \prec 4_{\mathbf{a}}=(1,1)\,.\] \[H_{(0,0),(0,0)}=(x_{1}^{2}-2x_{1}+1)(x_{2}^{2}-2x_{2}+1),\quad H_{(0,1),(0,0)}=(x_{1}^{2}-2x_{1}+1)x_{2}^{2}\] \[H_{(0,0),(0,1)}=(x_{1}^{2}-2x_{1}+1)(x_{2}^{3}-2x_{2}^{2}+x_{2}),\quad H_{(0,1),(0,1)}=(x_{1}^{2}-2x_{1}+1)(x_{2}^{3}-x_{2}^{2})\] \[H_{(0,0),(1,0)}=(x_{1}^{3}-2x_{1}^{2}+x_{1})(x_{2}^{2}-2x_{2}+1),\quad H_{(0,1),(1,0)}=(x_{1}^{3}-2x_{1}^{2}+x_{1})x_{2}^{2}\] \[H_{(0,0),(1,1)}=(x_{1}^{3}-2x_{1}^{2}+x_{1})(x_{2}^{3}-2x_{2}^{2} +x_{2}),\quad H_{(0,1),(1,1)}=(x_{1}^{3}-2x_{1}^{2}+x_{1})(x_{2}^{3}-x_{2}^{2})\] \[H_{(1,0),(0,0)}=x_{1}^{2}(x_{2}^{2}-2x_{2}+1),\quad H_{(1,1),(0,0)}=x_{1}^{2}x_{2}^{2}\] \[H_{(1,0),(0,1)}=x_{1}^{2}(x_{2}^{3}-2x_{2}^{2}+x_{2}),\quad H_{(1,1),(0,1)}=x_{1}^{2}(x_{2}^{3}-x_{2}^{2})\] \[H_{(1,0),(1,0)}=(x_{1}^{3}-x_{1}^{2})(x_{2}^{2}-2x_{2}+1),\quad H_{(1,1),(1,0)}=(x_{1}^{3}-x_{1}^{2})x_{2}^{2}\] \[H_{(1,0),(1,1)}=(x_{1}^{3}-x_{1}^{2})(x_{2}^{3}-2x_{2}^{2}+x_{2}),\quad H_{(1,1),(1,1)}=(x_{1}^{3}-x_{1}^{2})(x_{2}^{3}-x_{2}^{2})\] Figure 3: Visualization of an example of spline local support points around a random point (green circle) in 3D, with \(|B_{1}|=|B_{2}|=|B_{3}|=3\). Thus, \(H_{(0,0)}=(H_{(0,0),(0,0)},\ H_{(0,0),(0,1)},\ H_{(0,0),(1,0)},\ H_{(0,0),(1,1)})^{T}\). Similarly, for \(H_{(0,1)},\ H_{(1,0)}\) and \(H_{(1,1)}\).
For each point on the grid, we form the lower triangular matrix \(\Lambda_{\mathbf{a}}^{-1}\): \[\Lambda_{(0,0)}^{-1}=\left(\begin{array}{cccc}1&0&0&0\\ -2&1&0&0\\ -2&0&1&0\\ 4&-2&-2&1\end{array}\right),\ \ \ \Lambda_{(0,1)}^{-1}=\left(\begin{array}{ cccc}1&0&0&0\\ 2&1&0&0\\ -2&0&1&0\\ -4&-2&2&1\end{array}\right),\] \[\Lambda_{(1,0)}^{-1}=\left(\begin{array}{cccc}1&0&0&0\\ -2&1&0&0\\ 2&0&1&0\\ -4&2&-2&1\end{array}\right),\ \ \ \Lambda_{(1,1)}^{-1}=\left(\begin{array}{ cccc}1&0&0&0\\ 2&1&0&0\\ 2&0&1&0\\ 4&2&2&1\end{array}\right).\] The interpolating polynomial obtained by using the formula of Theorem 2.5 is \[f = \left(\Lambda_{(0,0)}^{-1}\cdot\left(\begin{array}{c}t_{(0,0)}^{(0,0)}\\ t_{(0,0)}^{(0,1)}\\ t_{(0,0)}^{(1,0)}\\ t_{(0,0)}^{(1,1)}\end{array}\right)\right)^{T}\left(\begin{array}{c}H_{(0,0),(0,0)}\\ H_{(0,0),(0,1)}\\ H_{(0,0),(1,0)}\\ H_{(0,0),(1,1)}\end{array}\right)+\left(\Lambda_{(0,1)}^{-1}\cdot\left( \begin{array}{c}t_{(0,1)}^{(0,0)}\\ t_{(0,1)}^{(0,1)}\\ t_{(0,1)}^{(1,0)}\\ t_{(0,1)}^{(1,1)}\end{array}\right)\right)^{T}\left(\begin{array}{c}H_{(0,1),(0,0)}\\ H_{(0,1),(0,1)}\\ H_{(0,1),(1,0)}\\ H_{(0,1),(1,1)}\end{array}\right) \tag{8.1}\] \[+ \left(\Lambda_{(1,0)}^{-1}\cdot\left(\begin{array}{c}t_{(1,0)}^{(0,0)}\\ t_{(1,0)}^{(0,1)}\\ t_{(1,0)}^{(1,0)}\\ t_{(1,0)}^{(1,1)}\end{array}\right)\right)^{T}\left(\begin{array}{c}H_{(1,0),(0,0)}\\ H_{(1,0),(0,1)}\\ H_{(1,0),(1,0)}\\ H_{(1,0),(1,1)}\end{array}\right)+\left(\Lambda_{(1,1)}^{-1}\cdot\left( \begin{array}{c}t_{(1,1)}^{(0,0)}\\ t_{(1,1)}^{(0,1)}\\ t_{(1,1)}^{(1,0)}\\ t_{(1,1)}^{(1,1)}\end{array}\right)\right)^{T}\left(\begin{array}{c}H_{(1,1),(0,0)}\\ H_{(1,1),(0,1)}\\ H_{(1,1),(1,0)}\\ H_{(1,1),(1,1)}\end{array}\right).\] **Example 8.2**.: Let us expand the previous example by using the proposed Hermite polynomial in order to interpolate a given function \(g(x_{1},x_{2})=e^{x_{1}+x_{2}}\). To this end, it is sufficient to set the values of \(t_{(i,j)}^{(k,m)}\) equal to \(\partial_{(i,j)}^{(k,m)}g\).
More specifically: \[\left(\begin{array}{c}t_{(0,0)}^{(0,0)}\\ t_{(0,0)}^{(0,1)}\\ t_{(0,0)}^{(1,0)}\\ t_{(0,0)}^{(1,1)}\end{array}\right)=\left(\begin{array}{c}1\\ 1\\ 1\\ 1\end{array}\right),\ \left(\begin{array}{c}t_{(0,1)}^{(0,0)}\\ t_{(0,1)}^{(0,1)}\\ t_{(0,1)}^{(1,0)}\\ t_{(0,1)}^{(1,1)}\end{array}\right)=\left(\begin{array}{c}t_{(1,0)}^{(0,0)} \\ t_{(1,0)}^{(0,1)}\\ t_{(1,0)}^{(1,0)}\\ t_{(1,0)}^{(1,1)}\end{array}\right)=\left(\begin{array}{c}e\\ e\\ e\\ e\end{array}\right),\ \left(\begin{array}{c}t_{(1,1)}^{(0,0)}\\ t_{(1,1)}^{(0,1)}\\ t_{(1,1)}^{(1,0)}\\ t_{(1,1)}^{(1,1)}\end{array}\right)=\left(\begin{array}{c}e^{2}\\ e^{2}\\ e^{2}\\ e^{2}\end{array}\right).\] Substituting the values of \(t_{(i,j)}^{(k,m)}\) in (8.1) of the previous example and performing some algebra yields the following Hermite polynomial: \[f(x_{1},x_{2})=\left(1+x_{1}+(2e-5)x_{1}^{2}+(3-e)x_{1}^{3}\right)\left(1+x_{2}+(2e-5)x_{2}^{2}+(3-e)x_{2}^{3}\right).\] This polynomial is identical to the one obtained by Eq. (3.3) from Lemma 3.1. The calculated RMSE between the original function and the interpolated polynomial, in a grid of \(11\times 11\) equally spaced points in \([0,1]\times[0,1]\), is \(0.0085\), which suggests a very accurate interpolation. **Example 8.3**.: Consider the function \(g(x_{1},x_{2})=e^{-(x_{1}-3)^{2}-(x_{2}-3)^{2}}+e^{-(x_{1}-4)^{2}-(x_{2}-4)^{2}}\) and the grid of support points \(\mathbf{A}=A_{1}\times A_{2}\), where \(A_{1}=A_{2}=\{0,1,2,3,4,5\}\), with multiplicity function \(v(\mathbf{a})=(v_{1}(a),\ v_{2}(a))\) (one more than the maximum order of differentiation), for every point on the grid, \(a\in\mathbf{A}\). Thus in total \(6\times 6\) support points have been defined. The 3D plot of the function \(g(x_{1},x_{2})\) is depicted in Figure 4.
In addition, the difference between \(g(x_{1},x_{2})\) and the interpolated function using the proposed Hermite with multiplicity function \(v(\mathbf{a})=(v_{1}(a),\ v_{2}(a))=(2,2)\) and the cubic spline interpolation (which was the best performer of the methods under comparison) are shown in Figure 5(a) and (b) respectively. Note that the color scale is identical in all difference images. In this example we utilize the values of \(g\) and its partial derivatives up to maximum order (m.o.) \(v(\mathbf{a})-1\) at each support point \(\mathbf{a}\), to interpolate the function at \(51\times 51\) equally distributed points in \([0,5]\times[0,5]\). The cubic spline, linear, cubic polynomial and the Makima interpolation methods are also applied at the same points, using the MATLAB implementation. The achieved RMSE by each method is shown in Table 1, calculated at the aforementioned points. These interpolation methods have been used for comparison purposes in the remaining examples. Figure 5: The difference between \(g(x_{1},x_{2})\) of Example 8.3 and the interpolated function using (a) the proposed Hermite with multiplicity function \(v(a)=(v_{1}(a),\ v_{2}(a))=(2,2)\) and (b) the cubic spline interpolation. Note that the color scale is identical in all difference images. **Example 8.4**.: Consider the function \(g(x_{1},x_{2},x_{3})=e^{\frac{-(x_{1},x_{2},x_{3})-(x_{1}-x_{2},x_{3})^{2}}{3}}-e^{ \frac{-(x_{1}-x_{1},x_{2},x_{3})^{2}-(x_{1}-x_{2})^{2}}{5}}\), and the grid of support points \(\mathbf{A}=A_{1}\times A_{2}\times A_{3}\), where \(A_{1}=\{0,1,2,3\},\ A_{2}=\{0,1,2,3,4\}\) and \(A_{3}=\{0,1,2\}\), with multiplicity function \(v(\mathbf{a})=(v_{1}(\mathbf{a}),\ v_{2}(\mathbf{a}),\ v_{3}(\mathbf{a}))=(1,1, 1),(2,2,2)\) and \((3,3,3)\), for every point on the grid, \(\mathbf{a}\in\mathbf{A}\). Thus, in total we utilize \(4\times 5\times 3\) support points to derive the Hermite interpolating polynomial.
In this example we use the values of \(g\) and its partial derivatives up to maximum order, \(v(\mathbf{a})-1\), at each support point \(\mathbf{a}\), to interpolate the function at \(13\times 17\times 9\) equally distributed points in \([0,3]\times[0,4]\times[0,2]\). The cubic spline, linear, cubic polynomial and the Makima interpolation methods are also applied at the same points. The achieved RMSE by each method is shown in Table 2, calculated at the aforementioned points. **Example 8.5**.: Consider the grid \(\mathbf{A}=A_{1}\times A_{2}\), where \(A_{1}=\{0,1\},\ A_{2}=\{0,1\}\), with multiplicity function \(v(a)=(v_{1}(a),\ v_{2}(a))=(1,1)\), for every point on the grid, \(a\in\mathbf{A}\). The given values of the support points are \(c_{00},\ c_{01},\ c_{10},\ c_{11}\), such that \(t_{(i,j)}^{(0,0)}=c_{ij},\ i,j\in\{0,1\}\). The expressions \(H_{(i,a)}\left(x_{i}\right)\) in (1.2) are obtained for \(i\in\{1,2\}\) and \(a\in A_{i}\) as follows: \(H_{(1,0)}=1-x_{1},\ H_{(1,1)}=x_{1},\ H_{(2,0)}=1-x_{2}\), and \(H_{(2,1)}=x_{2}\). Furthermore, we derive \(H_{(\mathbf{a},\mathbf{k})}\) given by (1.3) for every \(\mathbf{a}\in\mathbf{A}\) and \(\mathbf{k}=(0,0)\): \[H_{(0,0),(0,0)} = (1-x_{1})(1-x_{2}),\] \[H_{(0,1),(0,0)} = (1-x_{1})x_{2},\] \[H_{(1,0),(0,0)} = x_{1}(1-x_{2}),\] \[H_{(1,1),(0,0)} = x_{1}x_{2}.\] Note that for every \(\mathbf{a}\in\mathbf{A}\), \(\Lambda_{\mathbf{a}}=1\). Thus, the interpolating polynomial is given by the following expression: \[f(x_{1},x_{2})=c_{00}(1-x_{1})(1-x_{2})+c_{01}(1-x_{1})x_{2}+c_{10}x_{1}(1-x_{ 2})+c_{11}x_{1}x_{2} \tag{8.2}\] Observe that the above expression is identical to the bilinear interpolating polynomial for the unit step grid. **Example 8.6**.: _Consider the grid \(\mathbf{A}=A_{1}\times A_{2}\times A_{3}\), where \(A_{1}=\{0,1\},\ A_{2}=\{0,1\},\ A_{3}=\{0,1\}\), with multiplicity function \(v(a)=(v_{1}(a),\ v_{2}(a),\ v_{3}(a))=(1,1,1)\), for every point on the grid, \(a\in\mathbf{A}\)._
The given values at the support points are \(c_{000},c_{001},c_{010},c_{100},c_{011},c_{101},c_{110},c_{111}\), such that \(t_{(i,j,k)}^{(0,0,0)}=c_{ijk},\ i,j,k\in\{0,1\}\)._ _We derive \(H_{(i,a)}\left(x_{i}\right)\) given by (1.2) for \(i\in\{1,2,3\}\) and \(a\in A_{i}\): \(H_{(1,0)}=1-x_{1},\ H_{(1,1)}=x_{1},\ H_{(2,0)}=1-x_{2},\ H_{(2,1)}=x_{2},\ H_{(3,0)}=1-x_{3}\), and \(H_{(3,1)}=x_{3}\). Furthermore, we obtain \(H_{(\mathbf{a},\mathbf{k})}\) given by (1.3) for every \(\mathbf{a}\in\mathbf{A}\) and \(\mathbf{k}=(0,0,0)\):_ \[H_{(0,0,0),(0,0,0)}=(1-x_{1})(1-x_{2})(1-x_{3}),\quad H_{(0,0,1),(0,0,0)}=(1-x_{1})(1-x_{2})x_{3}\] \[H_{(0,1,0),(0,0,0)}=(1-x_{1})x_{2}(1-x_{3}),\quad H_{(1,0,0),(0,0,0)}=x_{1}(1-x_{2})(1-x_{3})\] \[H_{(0,1,1),(0,0,0)}=(1-x_{1})x_{2}x_{3},\quad H_{(1,0,1),(0,0,0)}=x_{1}(1-x_{2})x_{3}\] \[H_{(1,1,0),(0,0,0)}=x_{1}x_{2}(1-x_{3}),\quad H_{(1,1,1),(0,0,0)}=x_{1}x_{2}x_{3}\] \begin{table} \begin{tabular}{|l l|} \hline Method & RMSE \\ \hline Proposed Hermite \(v(\mathbf{a})=(1,1,1)\) & 0.0152 \\ \hline Proposed Hermite \(v(\mathbf{a})=(2,2,2)\) & 0.0001 \\ \hline Proposed Hermite \(v(\mathbf{a})=(3,3,3)\) & 1.2776e-06 \\ \hline Linear & 0.0528 \\ \hline Cubic & 0.0187 \\ \hline Makima & 0.0209 \\ \hline Spline & 0.0152 \\ \hline \hline \end{tabular} \end{table} Table 2: The achieved RMSE for the proposed Hermite polynomial and other interpolation methods, in the case of Example 8.4 _Note that for every \(\mathbf{a}\in\mathbf{A}\), \(\Lambda_{\mathbf{a}}=1\).
Thus, the interpolating polynomial is given by the following expression:_ \[f(x_{1},x_{2},x_{3}) = c_{000}(1-x_{1})(1-x_{2})(1-x_{3})+c_{001}(1-x_{1})(1-x_{2})x_{3}\] \[+c_{010}(1-x_{1})x_{2}(1-x_{3})+c_{100}x_{1}(1-x_{2})(1-x_{3})+c_{0 11}(1-x_{1})x_{2}x_{3}\] \[+c_{101}x_{1}(1-x_{2})x_{3}+c_{110}x_{1}x_{2}(1-x_{3})+c_{111}x_{1 }x_{2}x_{3}\] _Observe that the above expression is identical to the trilinear interpolating polynomial for the unit step grid._ **Example 8.7**.: Consider the function \(g(x_{1},x_{2},x_{3})=x_{1}\sin(x_{2})+\frac{x_{2}\sin(x_{1})}{10}-x_{1}\sin \left(\frac{x_{2}x_{3}}{4}\right)\), and the grid of support points \(\mathbf{A}=A_{1}\times A_{2}\times A_{3}\), where \(A_{1}=A_{2}=A_{3}=\{-7,-6,\ldots,7\}\), with multiplicity function \(v(\mathbf{a})=(v_{1}(\mathbf{a}),\ v_{2}(\mathbf{a}),\ v_{3}(\mathbf{a}))=(1,1,1),(2,2,2)\), for every point on the grid, \(\mathbf{a}\in\mathbf{A}\). Thus, in total we utilize \(15\times 15\times 15\) support points to derive the Hermite interpolating polynomial. In this example we use the values of \(g\) and its partial derivatives up to maximum order, \(v(\mathbf{a})-1\), at each support point \(\mathbf{a}\), to interpolate the function at \(57\times 57\times 57\) equally distributed points in \([-7,7]\times[-7,7]\times[-7,7]\). The cubic spline, linear, cubic polynomial and Makima interpolation methods are also applied at the same points. The RMSE achieved by each method, calculated at the aforementioned points, is shown in Table 3. In addition, we implement the proposed Hermite interpolation via splines, where we use a smaller grid. Namely, for the current example, the grid of spline support points is \(\mathbf{B}=B_{1}\times B_{2}\times B_{3}\), where \(B_{1}=B_{2}=B_{3}=\{0,1,2\}\), with multiplicity function \(v(\mathbf{b})=(v_{1}(\mathbf{b}),\ v_{2}(\mathbf{b}),\ v_{3}(\mathbf{b}))=(2,2,2)\), for every point on the grid, \(\mathbf{b}\in\mathbf{B}\).
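The closed forms derived in Examples 8.5 and 8.6 are easy to sanity-check numerically: each must reproduce its prescribed corner values exactly, and at the cell centre it returns the mean of the corners. A minimal sketch (our variable names, not the paper's code):

```python
def bilinear(x1, x2, c):
    """Interpolating polynomial (8.2): c maps corner (i, j) to c_ij."""
    return sum(cij * (x1 if i else 1 - x1) * (x2 if j else 1 - x2)
               for (i, j), cij in c.items())

def trilinear(x1, x2, x3, c):
    """Trilinear interpolating polynomial of Example 8.6 on {0,1}^3."""
    return sum(cijk * (x1 if i else 1 - x1) * (x2 if j else 1 - x2)
               * (x3 if k else 1 - x3) for (i, j, k), cijk in c.items())

c2 = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 5.0}
assert all(bilinear(i, j, c2) == v for (i, j), v in c2.items())  # corners exact
print(bilinear(0.5, 0.5, c2))  # centre = mean of the four corners = 2.75

c3 = {(i, j, k): float(i + 2 * j + 4 * k)  # arbitrary prescribed corner values
      for i in (0, 1) for j in (0, 1) for k in (0, 1)}
assert all(trilinear(*p, c3) == v for p, v in c3.items())        # corners exact
print(trilinear(0.5, 0.5, 0.5, c3))  # centre = mean of the eight corners = 3.5
```

Each corner's contribution is its value weighted by the tensor-product basis factor \(H_{(\mathbf{a},\mathbf{0})}\), i.e. a product of \(x_i\) or \(1-x_i\) terms, exactly as in the displayed formulas.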
\begin{table} \begin{tabular}{|l l|} \hline Method & RMSE \\ \hline Proposed Hermite \(v(\mathbf{a})=(2,2,2)\) & 0.0015 \\ \hline Linear & 1.3318 \\ \hline Cubic & 0.6699 \\ \hline Makima & 0.7748 \\ \hline Spline & 0.7461 \\ \hline \hline \end{tabular} \end{table} Table 3: The achieved RMSE for the proposed Hermite polynomial and other interpolation methods, in the case of Example 8.7 **Example 8.8**.: Consider the function of Example 8.7, \(g(x_{1},x_{2},x_{3})=x_{1}\sin(x_{2})+\frac{x_{2}\sin(x_{1})}{10}-x_{1}\sin\left( \frac{x_{2}x_{3}}{4}\right)\), and the support points arranged on the same 3D grid. The task is to interpolate the values of this function at the points of a plane \(\pi\), as shown in Figure 7. The interpolation is performed for \((x_{1},x_{2},\pi(x_{1},x_{2}))\), with \((x_{1},x_{2})\in[1,18]\times[1,18]\), where \(\pi\) is the plane through the point \((10.5,10.5,10.5)\) with normal vector equal to \(\left(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right)\). To this end, we utilize the proposed Hermite interpolation via splines, with spline support grids \(\mathbf{B}=B_{1}\times B_{2}\times B_{3}\). \begin{table} \begin{tabular}{|l l|} \hline Method & RMSE \\ \hline Proposed Hermite spline \(3\times 3\times 3\), \(v(\mathbf{a})=(2,2,2)\) & 1.2297 \\ \hline Proposed Hermite spline \(3\times 3\times 3\), \(v(\mathbf{a})=(3,3,3)\) & 0.0678 \\ \hline Proposed Hermite spline \(5\times 5\times 5\), \(v(\mathbf{a})=(2,2,2)\) & 0.3674 \\ \hline Proposed Hermite spline \(5\times 5\times 5\), \(v(\mathbf{a})=(3,3,3)\) & 0.0020 \\ \hline Linear & 5.5936 \\ \hline Cubic & 5.3674 \\ \hline Makima & 5.3271 \\ \hline Spline & 5.6318 \\ \hline \hline \end{tabular} \end{table} Table 4: The achieved RMSE for the proposed Hermite spline polynomial and other interpolation methods, in the case of Example 8.8 Figure 7: Spline interpolation along an oblique plane of the function \(g(x_{1},x_{2},x_{3})=x_{1}\sin(x_{2})+\frac{x_{2}\sin(x_{1})}{10}-x_{1}\sin\left( \frac{x_{2}x_{3}}{4}\right)\): (a) the first \(6\times 6\times 6\) support points are displayed, for clarity, with the oblique plane superimposed; (b) the actual values of the function \(g\) along the oblique plane; and (c) the interpolation error (difference from the actual function values) achieved by the proposed Hermite interpolation for multiplicity \(v(\mathbf{a})=(2,2,2)\) and \(v(\mathbf{a})=(3,3,3)\), as well as by the other methods. ## 9 Conclusions In this work a new derivation of the coordinate Hermite interpolating polynomial, given the values of the function and its partial derivatives on \(n\)-dimensional non-regular grids, has been proposed. The algebraic and computational complexity of the derived formula is significantly lower compared to the only alternative formulation in the literature [1]. Furthermore, we provided the remainder of the interpolation in integral form, as well as via cascaded polynomial divisions in the case of interpolating polynomial functions, using the Gröbner basis of the ideal of the interpolation. The proposed interpolating polynomial can be used for accurate interpolation of data in any number of dimensions, with any degree of partial derivatives at the measured data points. A number of numerical results on analytic functions are provided, demonstrating the superiority of the proposed Hermite interpolation in 2 and 3 dimensions over a number of other interpolation techniques.
arXiv:2308.04298
FRB 20121102A: images of the bursts and the varying radio counterpart
L. Rhodes, M. Caleb, B. W. Stappers, A. Andersson, M. C. Bezuidenhout, L. N. Driessen, I. Heywood
2023-08-08T14:44:40Z
http://arxiv.org/abs/2308.04298v1
# FRB 20121102A: images of the bursts and the varying radio counterpart ###### Abstract As more Fast Radio Bursts (FRBs) are being localised, we are learning that some fraction have persistent radio sources (PRSs). Such a discovery motivates an improvement in our understanding of the nature of those counterparts, the relation to the bursts themselves and why only some FRBs have PRSs. We report on observations made of FRB 20121102A with the MeerKAT radio telescope. Across five epochs, we detect the PRS associated with FRB 20121102A. Our observations are split into a cluster of four epochs (MJD 58732 - 58764) and a separate single epoch about 1000 days later. The measured flux density is constant across the first four observations but then decays by more than one-third in the final observation. Our observations on MJD 58736 coincided with the detections of 11 bursts from FRB 20121102A by the MeerTRAP backend, seven of which we detected in the image plane. We discuss the importance of image plane detections when considering the commensal transient searches being performed with MeerKAT and other radio facilities. We find that MeerKAT is so sensitive that within a two-second image, we can detect any FRB with a flux density above 2.4 mJy at 1.3 GHz and so could localise every FRB that has been detected by CHIME to date. keywords: radio continuum: transients - transients: fast radio bursts ## 1 Introduction Fast Radio Bursts (FRBs) are flashes of coherent radio emission lasting around a few to hundreds of milliseconds (e.g. Lorimer et al., 2007; Keane et al., 2012; Petroff et al., 2016). The dispersion measure (DM), the result of electrons along the line of sight delaying the arrival of lower frequency photons, can be used as a proxy for distance and has implied that FRBs must originate from cosmological distances (Thornton et al., 2013).
There are many theories as to the origin of FRBs (Zhang, 2020, 2022) but the recent discovery of pulsed radio emission from Soft Gamma-ray Repeater (SGR) 1935+2154 by the STARE2 and CHIME telescopes suggests that magnetars like SGR 1935+2154 may produce some of the population of FRBs (Bochenek et al., 2020; The Chime/Frb Collaboration et al., 2020). The luminosity of the bursts from SGR 1935+2154 overlaps with the low end of the luminosity distribution of cosmological FRBs (The Chime/Frb Collaboration et al., 2020; Bochenek et al., 2020). Surveys designed to monitor the sky to find FRBs have discovered that at least a subset of the population repeat (CHIME/FRB Collaboration et al., 2019; Fonseca et al., 2020). Such a discovery has provided the community with another clue as to the origin of FRBs: at least part of the population cannot originate from cataclysmic events. FRB 20121102A was the first known repeating FRB (Spitler et al., 2016; Scholz et al., 2016). Rajwade et al. (2020) and Cruces et al. (2021) first showed that this source has active periods occurring every \(\sim\)160 days, confirming that some FRBs are produced by objects in binaries or are precessing (Chime/Frb Collaboration et al., 2020; Levin et al., 2020; Lyutikov et al., 2020; Zanazzi and Lai, 2020). FRB 20121102A was also the first FRB to be localised. Chatterjee et al. (2017) were able to constrain the position of FRB 20121102A to sub-arcsecond levels using the Karl G. Jansky Very Large Array (VLA). The position was coincident with a dwarf galaxy at \(z=0.19\) (luminosity distance of 926 Mpc for \(H_{0}=70\) km/s/Mpc, \(\Omega_{M}=0.3\); Tendulkar et al., 2017). The position of the FRB is coincident with a persistent radio source (PRS). High angular resolution radio observations show the PRS to be unresolved on all scales and constrain the source size to be \(\lesssim 0.7\) pc (Marcote et al., 2017).
The PRS associated with FRB 20121102A has now been detected between 100 MHz and 30 GHz (Chatterjee et al., 2017; Resmi et al., 2021) and its spectral shape is best described by a broken power law with a break around 8 GHz. Long-term monitoring has shown variability on day-timescales at a level of 10% (Chatterjee et al., 2017), which has since been attributed to scintillation (Waxman, 2017). The leading theory as to the origin of the PRS is a pulsar wind nebula (PWN) containing a highly magnetised pulsar, i.e. a magnetar (Marcote et al., 2017; Metzger et al., 2017; Yang & Dai, 2019). A PWN would be produced due to the confinement of a young magnetar's wind within the environment of the supernova remnant (SNR) in which the magnetar is encased (see Gaensler & Slane, 2006, for a review). As the supernova expands and evacuates the local interstellar medium, the magnetar produced in the supernova powers a nebula inside the evacuated region. The spin-down energy of the magnetar powers the expansion of the nebula out to the SNR, and the highly energetic particles in the wind are responsible for the persistent radio emission. Successful attempts to reproduce observations using a pulsar wind model have been made, demonstrating that it is a viable origin for the observed persistent radio emission (Yang & Dai, 2019; Metzger et al., 2017). Studies of the persistent source's spectrum suggest that the emitting system is still very young. The spectral break at 8 GHz has led to the calculation of an approximate PRS age of a few hundred years (Resmi et al., 2021). A young system would also explain why the PWN is still so luminous. Despite the extensive study of the PRS associated with FRB 20121102A and others, it is still not known if these systems are separate and distinct from the population of FRBs without PRSs (Law et al., 2022). In this paper, we present observations of the persistent radio source associated with FRB 20121102A with the MeerKAT telescope.
In Sections 2 and 3, we describe, respectively, the observations that were made and their results, including image-plane detections of seven bursts previously reported in Caleb et al. (2020). In Section 4 we discuss how our results fit with previous observations and the implications of detecting FRBs with MeerKAT for future commensal searches. ## 2 Observations Observations with MeerKAT were obtained through a Director's discretionary time proposal (DDT-20190905-MC-01, PI: M. Caleb). The MeerKAT radio telescope is a 64-dish radio interferometer based in the Karoo desert in South Africa (Jonas & MeerKAT Team, 2016). A total of five observations were made on 2019 September 6\({}^{\rm th}\), 10\({}^{\rm th}\); October 6\({}^{\rm th}\), 8\({}^{\rm th}\), and 2022 September 26\({}^{\rm th}\) (MJDs 58732, 58736, 58762, 58764 and 59848, respectively). Data were taken at a central frequency of 1.28 GHz with a bandwidth of 856 MHz, which was split into 4096 channels for calibration. The observations were made with 8-second integrations, with the exception of the 10\({}^{\rm th}\) September, which used 2-second integrations. J0408-6565 and J0534+1927 were used as the primary and secondary calibrators for each observation, respectively. Calibration and flagging were performed in casa (Common Astronomy Software Applications; McMullin et al., 2007). Before calibration, flagging was performed on the calibrator scans. Bandpass and phase-only gain calibration was performed on the primary. Complex gain calibration was performed on both calibrators, after which a flux scale was applied to the secondary using the known model of the primary. The corrections were then applied to the target field. The calibrated target fields were flagged to remove RFI with tricolour (Hugo et al., 2022) and imaged using wsclean (Offringa et al., 2014). Imaging used a Briggs weighting of -0.3.
The declination of FRB 20121102A is far north with respect to the MeerKAT observing range, and so the synthesised beam in each image is particularly elongated. The field was imaged in two separate spectral bands to calculate a two-point spectral index of the PRS (the source was too faint to detect the PRS with a higher spectral resolution). The flux density and spectral index associated with the PRS in each observation are reported in Table 1. Any reported flux densities were measured using the casa task _imfit_. ## 3 Results ### Persistent Radio Source The final images showed a point source at the phase centre of the field, its position consistent with those reported for previous detections of the PRS associated with FRB 20121102A (Marcote et al., 2017; Andrianjafy et al., 2023). The PRS was clearly detected, with a signal-to-noise ratio (S/N) \(>10\), in each observation. There is no significant variability in the measured flux density of the PRS during the first four observations (2019). However, between the 2019 observing campaign and the 2022 observation, there is a significant drop in the PRS's flux density. The flux density over the three-year period decreases by about one-third. The flux densities are given in Table 1 and Figure 1. To ensure that the flux density reduction of the PRS is real and not a calibration error, we used the TRAnsients Pipeline (TraP, Swinbank et al., 2015) to search for any systematic changes in flux density across the whole field over the five observations. For each source, we calculated \(\eta\) (Equation 1), which is analogous to a \(\chi^{2}\) statistic, comparing a given source to a model with a constant flux density, and \(V\) (Equation 2), which is a measure of the spread of flux densities for a given source.
\(\eta\) and \(V\) are defined as \[\eta_{\nu}\equiv\chi^{2}_{N-1}=\frac{1}{N-1}\sum_{i=1}^{N}\frac{\left(F_{i,\nu}- \overline{F_{\nu}}^{*}\right)^{2}}{\sigma_{i}^{2}} \tag{1}\] \[V_{\nu}\equiv\frac{s_{\nu}}{\overline{F_{\nu}}}=\frac{1}{\overline{F_{\nu}}}\sqrt{\frac{N }{N-1}\left(\overline{F_{\nu}^{2}}-\overline{F_{\nu}}^{2}\right)} \tag{2}\] where \(F_{i,\nu}\) is the flux density of a source at frequency \(\nu\) in observation \(i\), \(\overline{F_{\nu}}^{*}\) is the weighted mean flux density of a source across all observations, \(\overline{F_{\nu}}\) is the non-weighted mean, \(s_{\nu}\) is the standard deviation of the flux densities, \(\sigma_{i}\) is the uncertainty associated with the measured flux density and \(N\) is the number of observations. The results of the TraP are shown in Figure 2. Each point corresponds to the values of \(\eta\) and \(V\) for a given source within the FRB field. The position of the PRS in Figure 2 is denoted with a green star. Also labelled with a triangle and square are an artefact and a resolved radio galaxy, respectively. Most of the sources have \(\eta\leq 1\), which corresponds to non-variable objects or objects that are only detected (at low significance) in a single epoch. The horizontal structure around \(\log_{10}V\lesssim-0.8\) corresponds to variability at the level of the calibration uncertainty. Sources with high \(V\) and low \(\eta\) are usually imaging artefacts. High flux density sources can have large \(\eta\), as \(\eta\) is approximately correlated with signal-to-noise squared (see Equation 1).
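The two statistics can be computed directly from a light curve. A minimal sketch, assuming the usual inverse-variance weighting for \(\overline{F_{\nu}}^{*}\) (an assumption on our part; TraP's exact convention should be checked against its documentation), illustrated with the Table 1 flux densities:

```python
import numpy as np

def eta_V(flux, sigma):
    """Variability statistics of Equations 1 and 2 for one source."""
    flux, sigma = np.asarray(flux, float), np.asarray(sigma, float)
    N = len(flux)
    w = 1.0 / sigma**2
    f_wmean = np.sum(w * flux) / np.sum(w)   # weighted mean (assumed inverse-variance)
    eta = np.sum((flux - f_wmean) ** 2 / sigma**2) / (N - 1)              # Eq. 1
    f_mean = flux.mean()
    V = np.sqrt(N / (N - 1) * (np.mean(flux**2) - f_mean**2)) / f_mean    # Eq. 2
    return float(eta), float(V)

# The PRS light curve from Table 1 (flux densities and uncertainties in uJy):
print(eta_V([260, 269, 287, 270, 189], [26, 27, 27, 28, 18]))
# A perfectly steady source scores (0, 0) on both statistics:
print(eta_V([250, 250, 250], [25, 25, 25]))  # -> (0.0, 0.0)
```

Note that for a constant-flux source both statistics vanish, while the fading final epoch of the PRS pushes both \(\eta\) and \(V\) well above zero.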
Any small deviations in flux density, be they astrophysical or instrumental, can inflate \(\eta\) for bright sources, for which the statistical uncertainty is small. \begin{table} \begin{tabular}{l c c c} \hline \hline Date (mjd) & Observation duration (hrs) & S\({}_{\nu}\) (\(\mu\)Jy/beam) & \(\alpha\) (S\({}_{\nu}\)\(\propto\nu^{\alpha}\)) \\ \hline 58732.16 & 3.0 & 260\(\pm\)26 & -0.6\(\pm\)1.0 \\ 58736.13 & 3.0 & 269\(\pm\)27 & -0.5\(\pm\)1.0 \\ 58762.10 & 2.5 & 287\(\pm\)27 & -0.1\(\pm\)0.4 \\ 58764.10 & 3.0 & 270\(\pm\)28 & -0.4\(\pm\)1.0 \\ 59848.07 & 3.0 & 189\(\pm\)18 & -0.8\(\pm\)0.6 \\ \hline \hline \end{tabular} \end{table} Table 1: The PRS flux densities and spectral indices in the five FRB 20121102A MeerKAT observations. The uncertainty on the flux density is a combination of the statistical error and a 10% calibration error added in quadrature. The large uncertainties on the spectral index measurements are a combination of the relatively small bandwidth over which to make the measurement as well as the low flux density of the PRS. The distribution of the points in the \(\eta\)-\(V\) plane shows no significant evidence of strong systematics introduced by the calibration process (Andersson et al., 2022; Driessen et al., 2022). The histograms at the top and right-hand sides of Figure 2 are approximately Gaussian in shape. Any global reduction in flux density in the final epoch would appear as strong deviations from the Gaussian-shaped histograms. Given the Gaussian shape, we can conclude that there is no evidence of a global flux density reduction in the final epoch with respect to the other four, which makes the observed flux density reduction of the PRS credible. The high S/N of the PRS detections allowed for two-point spectral index measurements. Across the five observations, we calculated a weighted mean spectral index of \(\alpha=-0.4\pm 0.5\) (where \(S_{\nu}\propto\nu^{\alpha}\)).
The spectral index did not change significantly when the flux density of the PRS decreased. ### FRB detection in the image plane Caleb et al. (2020) and Platts et al. (2021) report on the detection and analysis of bursts during this observing campaign (on 2019 September 10th) via the MeerTRAP backend, which searches for pulsars and fast transients whilst piggybacking on the large survey programmes at MeerKAT (Stappers, 2016). A total of 11 bursts were detected over a period of 2.5 hours. Andrianjafy et al. (2023) presents a methodology for detecting bursts from FRB 20121102A in the image plane. Like Andrianjafy et al. (2023), we also imaged each two-second integration where a burst was detected and found that three bursts were detected when imaging the whole band at once. The flux densities are given in the final column of Table 2. The emission detected from FRBs is heavily frequency-dependent. Therefore, we split the band into two halves to search for bright emission in either subband. We found that the noise was higher in the lower half of the band: \(\sim 0.6\) mJy/beam at 1.06 GHz compared to \(\sim 0.3\) mJy/beam at 1.5 GHz. Figure 1: The radio light curve of the PRS associated with FRB 20121102A at L-band (Table 1, Marcote et al., 2017; Plavin et al., 2022). The grey horizontal region denotes the average (\(\pm 1\) standard deviation) flux density as measured by MeerKAT between MJD 58732 and 58764. The purple highlighted region denotes the period where significant 3 GHz variability was observed; the vertical extent indicates the maximum and minimum 3 GHz flux density measured (Chatterjee et al., 2017). Figure 2: A plot of the amplitude of the variability (\(V\)) as a function of the statistical significance of the variability (\(\eta\)) for every source within the FRB 20121102A MeerKAT field. On the top and right-hand side of the plot are two histograms of the two parameters. The colour scale of each data point denotes the average flux density of each source. Three sources are highlighted: the green star denotes the FRB host, i.e. the PRS, the triangle is an imaging artefact and the square is a resolved jet whose variability is correlated with the synthesised beam shape. The increased noise at lower frequencies is a result of more RFI in that part of the band, and so a higher fraction of the data is flagged out. We detect a further four bursts in one half of the band. The increased noise makes detections at lower frequencies more difficult; only one of the four additional detections is in the lower half of the band. Where we do not detect a counterpart across the whole band but do in one subband, we provide peak flux densities and \(3\sigma\) upper limits where appropriate. In total, we detected seven of the 11 bursts reported in Caleb et al. (2020). ## 4 Discussion ### The persistent radio counterpart The average MeerKAT flux density of the PRS is \(\sim 260\,\mu\)Jy/beam, which corresponds to a radio spectral luminosity at 1.28 GHz of 2.6\(\times 10^{29}\) erg s\({}^{-1}\) Hz\({}^{-1}\) (or \(3.3\times 10^{38}\) erg s\({}^{-1}\)) at a luminosity distance of 926 Mpc (Tendulkar et al., 2017). Such a high radio luminosity is appropriate for what has been theorised for a high magnetic field (magnetar) PWN (\(\sim 10^{38}\) erg/s), thus supporting the theory that this persistent source may be produced by a young, highly magnetised pulsar, whose stronger magnetic field could produce more luminous radio emission (Yang & Dai, 2019; Maitra et al., 2018; Bucciantini et al., 2011; Dubner et al., 2008). At the time of writing, there are four other localised FRBs with associated persistent radio emission: FRBs 20201124A, 20190714A, 20190520B and 20210405I; however, not all are confirmed PRSs (Chibueze et al., 2022; Ravi et al., 2022; Niu et al., 2022; Driessen et al., 2023). FRB 20210405I is most likely associated with the host galaxy as a whole and not only the FRB (Driessen et al., 2023).
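The spectral luminosity quoted above follows from \(L_{\nu}=4\pi D_{L}^{2}S_{\nu}\) (ignoring the small redshift correction at \(z=0.19\)). A quick check of the arithmetic in cgs units:

```python
import math

MPC_CM = 3.086e24    # one megaparsec in cm
JY_CGS = 1e-23       # one jansky in erg s^-1 cm^-2 Hz^-1

d_l = 926 * MPC_CM   # luminosity distance of the host (Tendulkar et al. 2017)
s_nu = 260e-6 * JY_CGS   # average MeerKAT flux density, 260 microJy
nu = 1.28e9              # observing frequency in Hz

l_nu = 4 * math.pi * d_l**2 * s_nu       # spectral luminosity, erg s^-1 Hz^-1
print(f"L_nu   = {l_nu:.1e} erg/s/Hz")   # ~2.7e29, matching the quoted 2.6e29 to within rounding
print(f"nu*L_nu = {nu * l_nu:.1e} erg/s")  # ~3.4e38, matching the quoted 3.3e38 to within rounding
```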
The source associated with FRB 20201124A is most likely to be a result of star formation because observations with the EVN (European VLBI Network) show some resolved structure (Marcote et al., 2021). FRBs 20190714A and 20190520B both have confirmed PRSs with very similar luminosities to FRB 20121102A: \(1\times 10^{29}\) and \(3\times 10^{29}\) erg/s/Hz, respectively (Chibueze et al., 2022; Niu et al., 2022). The host galaxy of FRB 20190520B has a very low star formation rate, ruling out star formation as the likely origin of the radio emission. A PWN would produce a spectral index around \(-0.5<\alpha<0\) (\(S_{\nu}\propto\nu^{\alpha}\); Reynolds & Chevalier, 1984; Maitra et al., 2018; Metzger et al., 2017). We measure a weighted average spectral index of \(-0.4\pm 0.5\), which is in agreement with that expected for such a nebula. Our observations of the PRS associated with FRB 20121102A show a reduction in flux density at 1.3 GHz of about 30% over three years. This motivates a search for L-band observations of the PRS in the literature. Figure 1 shows the L-band flux density as a function of time between January 2016 (MJD 57400) and November 2022 (MJD 59850). The black circles denote observations made with VLBI facilities (Marcote et al., 2017), the square is from the VLA (Chatterjee et al., 2017) and the black stars are detections with MeerKAT. Around MJD 57650, there are observations at L-band with both the VLA and EVN facilities, resulting in two different flux densities; however, we note that the two values still fall within \(2\sigma\) of each other. Furthermore, subsequent VLBI observations obtain similar flux densities to that measured with the VLA, indicating that there is no evidence of emission being resolved out by the higher angular resolution observations. Therefore we do not expect to measure a different flux density with MeerKAT purely because the array has a different resolving power.
Figure 1 shows that over a period of nearly seven years, the flux density of the PRS varies: between MJD 57400 and 59850, the average flux density increased from below 180 \(\mu\)Jy to around 260 \(\mu\)Jy, an increase of around 30%, and then decreased back to the level measured around MJD 57400. Given that all the flux density measurements presented in this paper were obtained using the same telescope, the changes in flux density of the PRS are likely to be real. Variability was also observed at 3 GHz by Chatterjee et al. (2017), where the PRS varied between \(\sim 150-240\)\(\mu\)Jy (shown with a purple shaded region in Figure 1). Subsequent work concluded that the likely origin of the variability was scintillation (Waxman, 2017). We find that the increased flux density observed at 3 GHz is spectrally consistent with the flux densities observed with MeerKAT: extrapolating the brightest MeerKAT flux densities from 1.3 to 3 GHz using our measured spectral index (\(\alpha=-0.4\pm 0.5\)) results in values that are consistent with the brightest flux densities measured at 3 GHz. Unfortunately, there are no high-cadence 1.3 GHz observations during the VLA observing period. The variability observed at 3 and 1.3 GHz probes very different timescales: days/weeks vs years. Using the model presented in Hancock et al. (2019), we estimate that the modulation amplitude due to refractive scintillation along the line of sight to the PRS is approximately 25%, similar to the observed amplitude. The predicted variability timescale is far shorter (\(\sim\)0.5 years) than what we observe, making it unlikely that scintillation is the cause of the variability. However, we note that there is also the possibility that we were unlucky. Given the low cadence of our observations, there is a chance that we happened to observe when the variability due to scintillation was at a low point.
There are also large uncertainties in the models used to calculate the observed effects of scintillation, and so our quoted value of the scintillation timescale is only approximate. In order to completely rule out the possibility of scintillation, we need more observations of the PRS over a range of timescales. Despite the spectral and luminosity similarities, the PWN models (Metzger et al., 2017; Yang & Dai, 2019) developed to explain the PRS do not predict the variability that has been observed at 3 and now 1.3 GHz (Chatterjee et al., 2017). Other PRSs have been discovered too recently to search for variability on the timescale observed in FRB 20121102A. If the variability is intrinsic to the source, and not caused by scintillation, the increase in flux density is particularly hard to explain within the PWN scenario. Given the size constraints (\(<0.7\) pc; Metzger et al., 2017), we propose that the varying PRS emission originates from the termination shock. The shock is produced at the interface between the supernova ejecta and the pulsar wind and produces synchrotron radiation. In the cases of the Crab Nebula and 3C 58, the termination shock is less than a parsec in diameter - consistent with our observations of the PRS (Green & Scheuer, 1992; Slane et al., 2002; Kennel & Coroniti, 1984). If the system that produces FRB 20121102A is particularly young, as indicated by the spectral break at \(\sim 8\) GHz (Resmi et al., 2021), then it is possible that the flux variability originates from variation in the termination shock. This could be the result of interaction between the pulsar wind and a reverse shock from the supernova that produced the magnetar in the first place. ### FRB detections In Section 3.2, we demonstrated that three bursts were bright enough to be detected when imaging the whole band. A further four were detected in one half of the band. The same dataset was imaged by Andrianjafy et al.
(2023), who reported detections of bursts two, three, five, seven, eight and eleven. Despite not detecting burst two, we recover an additional two image-plane bursts: nine and ten. Andrianjafy et al. (2023) provides a clear methodology for detecting faint bursts in MeerKAT images. There are two main differences between the method presented in Andrianjafy et al. (2023) and what we present here. The first is the software used to perform flagging: here we use tricolour (Hugo et al., 2022) and keep observatory-placed flags, whereas Andrianjafy et al. (2023) removes all flags put in place by the observatory and then uses aoflagger and casa (Offringa et al., 2012; McMullin et al., 2007). Andrianjafy et al. (2023) explored using tricolour and found that the two methods resulted in different S/Ns for the detected bursts. The second difference is the use of peeling. Peeling is a method by which bright, contaminating sources are removed from the data in the \(uv\)-plane, resulting in a lower rms noise in the final image (Noordam, 2004; Intema et al., 2009). For example, in low-frequency subband two-second images, we reach an rms noise of \(600\,\mu\)Jy/beam, whereas with peeling Andrianjafy et al. (2023) reaches \(400\,\mu\)Jy/beam. In this work, we consider only the bursts that are bright enough to be detected in an image without peeling. We use this methodology to obtain the clearest understanding of the population of bursts that could be detected by commensal searches of MeerKAT fields. Commensal searches to look for new transients and variable sources in a number of ThunderKAT Large Survey Project (Fender et al., 2016) fields have already been performed (Driessen et al., 2020; Rowlinson et al., 2022; Driessen et al., 2022; Andrianjafy et al., 2023). These searches were performed by looking for variations in flux density of point sources from epoch to epoch.
It is also possible to look for transients and variables, such as pulsars and FRBs, on timescales of seconds by imaging each integration of a whole observation (e.g. Caleb et al., 2022). Compared to long-duration transients and variables, it is harder to determine whether an FRB, i.e. a single bright detection, is real and not an artefact. Through the work performed on other data sets searching for commensal transients, Andersson et al. (2023) and Chastain et al. (_submitted_) have developed a methodology which rules out imaging artefacts such as sidelobes associated with very bright sources, or satellites. As such, with any future FRB candidate, we are confident in our ability to identify it as an FRB and not a result of poor UV coverage or satellite contamination. The average rms noise in a two-second image is \(\sim 0.3\) mJy. Therefore, an FRB would have to have a flux density of at least 2.4 mJy in a two-second image to be detected in the TraP (the default threshold for a detection to be considered significant in the TraP is \(8\sigma\)). Bursts from FRBs usually last only milliseconds, so a burst's emission is diluted across the full integration time of an image and the measured flux density and fluence are correspondingly reduced (Trott et al., 2013). Figure 3 shows the luminosity-timescale parameter space for different transients, considering both coherent (pulsars and FRBs) and incoherent transients (e.g. cataclysmic variables and supernovae, Pietka et al., 2015). Overlaid is a series of horizontal lines which correspond to the luminosities required for an FRB to be detected from distances of z = 0.1, 0.7 and 1.5. The red line in Figure 3 is the average luminosity of the bursts reported in this work across the whole bandwidth at the distance of FRB 20121102A. The orange box corresponds to the luminosity distribution of bursts detected by the CHIME telescope (Cui et al., 2022). The bursts we detect from FRB 20121102A have very low luminosities compared to the CHIME population. 
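To make this dilution penalty concrete, the threshold arithmetic above can be sketched as follows (a rough calculation using the numbers quoted in the text: 0.3 mJy rms, the TraP's default \(8\sigma\) threshold, and a two-second integration; the top-hat burst profile and helper name are our own simplifications, not part of the paper's pipeline):

```python
# Image-plane detection threshold for a millisecond burst smeared over a
# long integration (a sketch using the numbers quoted in the text; the
# 8-sigma threshold is the TraP default mentioned above).

def image_plane_thresholds(rms_mjy, t_int_s, burst_width_ms, n_sigma=8):
    """Return (min image flux density in mJy, min fluence in Jy ms,
    min intrinsic peak flux density in mJy) for a top-hat burst."""
    s_min_mjy = n_sigma * rms_mjy                       # detection threshold in the image
    fluence_jy_ms = s_min_mjy / 1000 * t_int_s * 1000   # mJy * s -> Jy ms
    # A burst of width w contributes only w / t_int of the integration,
    # so its true peak flux must be higher by that dilution factor.
    peak_mjy = s_min_mjy * (t_int_s * 1000) / burst_width_ms
    return s_min_mjy, fluence_jy_ms, peak_mjy

s_min, fluence, peak = image_plane_thresholds(rms_mjy=0.3, t_int_s=2.0, burst_width_ms=1.0)
print(s_min, fluence, peak)  # ~2.4 mJy in the image, ~4.8 Jy ms, ~4800 mJy intrinsic peak
```

This makes explicit why shorter integrations help: halving the integration time halves the fluence threshold for a fixed image rms.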
The rms noise values we are able to reach in a two-second observation are low enough that we are confident in our ability to detect, and immediately localise, a large population of FRBs including all bursts with CHIME-like fluences. We note that despite being able to localise FRBs with redshifts as high as \(z=1.5\), the uncertainty on the FRB position is dependent on the S/N, which means that for bursts at higher redshifts, which may be fainter, the localisation region on the sky could contain multiple possible host galaxies compared to their low-redshift counterparts. The localisation precision of a given source is given as \(\approx\mathrm{beamwidth}/\sqrt{S/N}\). For a given burst from FRB 20121102A, the localisation region is \(\sim 8"\times 2"\) (we note that the beam size is highly elliptical due to the high declination of FRB 20121102A's position with respect to MeerKAT's observing range), which, at low redshift (\(z\lesssim 0.1\)), is sufficient to confidently associate a burst with a host galaxy, but that confidence rapidly decreases whilst moving to higher redshifts (Eftekhari et al., 2018). Looking forward to the SKA (Square Kilometre Array) era, if we are to discover the most complete sample of FRBs possible, shorter integration times are vital in order to find and localise the fainter population of FRBs with higher S/N and therefore better localisation precision. 
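The \(\mathrm{beamwidth}/\sqrt{S/N}\) scaling above can be illustrated with a short sketch. The beam axes and S/N values below are illustrative assumptions only, chosen so that an \(8\sigma\) detection gives roughly the quoted \(\sim 8"\times 2"\) region; they are not measured values from this work:

```python
from math import sqrt

# Localisation precision ~ beamwidth / sqrt(S/N), as quoted above.
# Beam axes and S/N values here are illustrative assumptions.

def localisation_region(beam_maj_arcsec, beam_min_arcsec, snr):
    """Return the approximate localisation region (major, minor) in arcsec."""
    f = sqrt(snr)
    return beam_maj_arcsec / f, beam_min_arcsec / f

# A marginal 8-sigma detection with an elongated (high-declination) beam:
print(localisation_region(22.6, 5.7, 8))   # roughly an ~8" x 2" region
# The same burst at 10x the S/N shrinks the region by sqrt(10):
print(localisation_region(22.6, 5.7, 80))
```

The sqrt dependence is the key point: a tenfold S/N gain only improves the localisation by a factor of about three, which is why faint high-redshift bursts remain hard to associate with a unique host.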
We find that the scientific benefit of shorter integration \begin{table} \begin{tabular}{l c c c c} \hline \hline Burst & Frequency of subband (GHz) & S\({}_{\nu}\) (mJy/beam) & Central frequency (GHz) & S\({}_{\nu}\) (mJy/beam) \\ \hline 3 & 1.1 & \(<1.8\) & 1.3 & \(1.4\pm 0.2\) \\ & 1.5 & \(1.32\pm 0.2\) & 1.3 & \(1.4\pm 0.2\) \\ \hline 5 & 1.1 & \(<1.8\) & 1.3 & \(<0.9\) \\ & 1.5 & \(2.4\pm 0.1\) & 1.3 & \(<0.9\) \\ \hline 7 & 1.1 & \(2.3\pm 0.2\) & 1.3 & \(<0.9\) \\ & 1.5 & \(<1.2\) & 1.3 & \(1.6\pm 0.2\) \\ \hline 8 & 1.1 & \(<1.8\) & 1.3 & \(1.6\pm 0.2\) \\ & 1.5 & \(1.8\pm 0.1\) & 1.3 & \(<0.9\) \\ \hline 9 & 1.1 & \(<1.8\) & 1.3 & \(<0.9\) \\ & 1.5 & \(1.9\pm 0.2\) & 1.3 & \(1.5\pm 0.2\) \\ \hline 10 & 1.1 & \(<1.8\) & 1.3 & \(<0.9\) \\ & 1.5 & \(1.46\pm 0.07\) & 1.3 & \(<0.9\) \\ \hline 11 & 1.1 & \(<1.8\) & 1.3 & \(<0.9\) \\ & 1.5 & \(1.03\pm 0.07\) & 1.3 & \(<0.9\) \\ \hline \hline \end{tabular} \end{table} Table 2: A table containing all the peak flux densities of the bursts detected in either subband, along with \(3\sigma\) upper limits in the cases where a burst is not detected in that half of the band. We also show the flux densities for the bursts that are detected across the whole band. Whilst we detect seven bursts in either subband, only three are detected when the full band is imaged. times is even stronger when considering the data rates involved: when searching for a fainter population of FRBs using backends such as MeerTRAP, higher time resolution is required, and therefore, to keep the data rates manageable, only a small fraction of the sky can be observed at a time (Sanidas et al., 2017)2. Comparatively, in the image plane, the same field of view is observed regardless of the integration time. As a result, we are able to search for a fainter population of fast transients over a larger fraction of the sky at a given time in the image plane compared to searching in beam-formed data. 
Footnote 2: [https://skaafrica.atlassian.net/wiki/spaces/ESDKB/pages/1591672833/User+Supplied+Equipment+USEF/TUSE](https://skaafrica.atlassian.net/wiki/spaces/ESDKB/pages/1591672833/User+Supplied+Equipment+USEF/TUSE) On the other hand, there is the possibility that large DM values will disperse an FRB signal that is intrinsically only a few milliseconds long across more than one integration. The benefit of the longer integration times (e.g. four or eight seconds) in the image plane compared to time-series data is that the imaging automatically collapses any smearing in the time domain that results from large DMs, such that we do not need to correct for DM unless the burst is dispersed over a period larger than the integration time. ## 5 Conclusions We have presented MeerKAT observations of the PRS associated with FRB 20121102A as well as a number of image-plane detections of the bursts themselves. We find evidence for long-term (\(\sim\)years) flux density changes in the PRS, which, when combined with the very small source size constraints, may be a result of interaction between a highly magnetised PWN termination shock and the surrounding supernova ejecta. Continued monitoring of the PRS associated with FRB 20121102A and others is required to determine whether significant changes in flux density on a timescale of years are a global property of PRSs. We also presented the results of two-second imaging, demonstrating MeerKAT's ability to find and localise fast transients in the image plane. We compare the luminosities of our FRB 20121102A detections, the detection limits for bursts at a series of redshifts and the current CHIME population, and find that we expect to be able to find a large sample of bursts through commensal searches. ## Acknowledgements The authors thank the anonymous referee for their useful comments. 
LR acknowledges the support given by the Science and Technology Facilities Council through an STFC post-doctoral research-associate position. LR also acknowledges the useful conversations had with A. Mitchell and S. Spencer. AA acknowledges the support given by the Science and Technology Facilities Council through an STFC studentship. BWS acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694745). MC acknowledges support of an Australian Research Council Discovery Early Career Research Award (project number DE220100819) funded by the Australian Government and the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. This paper uses MeerKAT observations obtained through a Director's Discretionary Time (DDT) proposal (Project ID: DDT-20190905-MC-01). The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation (DSI). The authors acknowledge the contribution of all those who designed and built the MeerKAT instrument. We acknowledge the use of the ilifu cloud computing facility - www.ilifu.ac.za, a partnership between the University of Cape Town, the University of the Western Cape, the University of Stellenbosch, Sol Plaatje University, the Cape Peninsula University of Technology and the South African Radio Astronomy Observatory. The ilifu facility is supported by contributions from the Inter-University Institute for Data Intensive Astronomy (IDIA - a partnership between the University of Cape Town, the University of Pretoria and the University of the Western Cape), the Computational Biology division at UCT and the Data Intensive Research Initiative of South Africa (DIRISA). 
## Data Availability The data presented in this paper will be provided upon request.
2303.11281
Fixed Parameter Multi-Objective Evolutionary Algorithms for the W-Separator Problem
Parameterized analysis provides powerful mechanisms for obtaining fine-grained insights into different types of algorithms. In this work, we combine this field with evolutionary algorithms and provide parameterized complexity analysis of evolutionary multi-objective algorithms for the $W$-separator problem, which is a natural generalization of the vertex cover problem. The goal is to remove the minimum number of vertices such that each connected component in the resulting graph has at most $W$ vertices. We provide different multi-objective formulations involving two or three objectives that provably lead to fixed-parameter evolutionary algorithms with respect to the value of an optimal solution $OPT$ and $W$. Of particular interest are kernelizations and the reducible structures used for them. We show that in expectation the algorithms make incremental progress in finding such structures and beyond. The current best known kernelization of the $W$-separator uses linear programming methods and requires a non-trivial post-process to extract the reducible structures. We provide additional structural features to show that evolutionary algorithms with appropriate objectives are also capable of extracting them. Our results show that evolutionary algorithms with different objectives guide the search and admit fixed parameterized runtimes to solve or approximate (even arbitrarily close) the $W$-separator problem.
Samuel Baguley, Tobias Friedrich, Aneta Neumann, Frank Neumann, Marcus Pappik, Ziena Zeif
2023-03-20T17:16:02Z
http://arxiv.org/abs/2303.11281v2
# Fixed Parameter Multi-Objective Evolutionary Algorithms for the \(W\)-Separator Problem ###### Abstract Parameterized analysis provides powerful mechanisms for obtaining fine-grained insights into different types of algorithms. In this work, we combine this field with evolutionary algorithms and provide parameterized complexity analysis of evolutionary multi-objective algorithms for the \(W\)-separator problem, which is a natural generalization of the vertex cover problem. The goal is to remove the minimum number of vertices such that each connected component in the resulting graph has at most \(W\) vertices. We provide different multi-objective formulations involving two or three objectives that provably lead to fixed-parameter evolutionary algorithms with respect to the value of an optimal solution \(OPT\) and \(W\). Of particular interest are kernelizations and the reducible structures used for them. We show that in expectation the algorithms make incremental progress in finding such structures and beyond. The current best known kernelization of the \(W\)-separator uses linear programming methods and requires a non-trivial post-process to extract the reducible structures. We provide additional structural features to show that evolutionary algorithms with appropriate objectives are also capable of extracting them. Our results show that evolutionary algorithms with different objectives guide the search and admit fixed parameterized runtimes to solve or approximate (even arbitrarily close) the \(W\)-separator problem. ## 1 Introduction Parameterized analysis of algorithms [3] provides a way of understanding the working behaviour of algorithms via their dependence on important structural parameters for NP-hard problems. This technique of fine-grained analysis allows for insights into which parameters make a problem hard. 
When analyzing heuristic search methods such as evolutionary algorithms, a parameterized runtime analysis allows for runtime bounds not just dependent on the given input size but also in terms of parameters that measure the difficulty of the problem. This is particularly helpful for understanding heuristic search methods, which are usually hard to analyze in a rigorous way. Parameterized analysis of evolutionary algorithms has been carried out for several important combinatorial optimization problems (see [12] for an overview). The first analysis was for the classical vertex cover problem [8], which is the prime problem in the area of parameterized complexity. Following that, problems such as the maximum leaf spanning tree problem [7], the Euclidean traveling salesperson problem [17] and parameterized settings of makespan scheduling [16] were considered. More recently, both the closest string problem [15] and jump and repair operators have been analyzed in the parameterized setting [1]. A crucial aspect of the parameterized analysis of evolutionary algorithms (and algorithms in general) is the ability of the considered approaches to obtain a kernelization for the considered problems. A kernel here refers to a smaller sub-problem whose size is polynomially bounded in the size of the given parameter(s). As the size is bounded, brute-force methods or random sampling can then be applied to obtain an optimal solution. A small subset of vertices that disconnects a graph is usually called a vertex separator. In terms of successful divide-and-conquer or parallel processing strategies, such separators are one of the most powerful tools for developing efficient graph algorithms. This generality and its broad applicability have made the study of separators a rich and active field of research, see for example the book by Rosenberg and Heath [13], or the line of research initiated by the seminal work of Lipton and Tarjan [11] on separators in planar graphs. 
Numerous different types of separator structures have emerged over the past couple of decades. In this paper, we address the problem of decomposing a graph into small pieces - with respect to a parameter \(W\) - by removing the smallest possible set of vertices. More formally, given a graph \(G=(V,E)\) and a parameter \(W\in\mathds{N}\), the goal is to remove the minimum number of vertices such that each connected component in the resulting graph has at most \(W\) vertices. The problem is called the _\(W\)-separator problem_ - also known in the literature as the _component order connectivity problem_ or _\(\alpha\)-balanced separator problem_, where \(\alpha\in(0,1)\) and \(W=\alpha|V|\). An equivalent view of this problem is to cover or hit every connected subgraph of size \(W+1\) with the minimum number of vertices. In particular, \(W=1\) corresponds to covering all edges, showing that the \(W\)-separator problem is a natural generalization of the vertex cover problem. In this paper, we generalize the results obtained in [8] for the vertex cover problem to the more general \(W\)-separator problem. Precisely, we investigate multi-objective evolutionary algorithms for the \(W\)-separator problem and show that in expectation they admit fixed-parameter runtimes with respect to the value of an optimal solution OPT and \(W\). It is unlikely that such runtimes can be achieved by considering OPT or \(W\) alone. Indeed, \(W=1\) corresponds to a hard problem, which shows that \(W\) (alone) is not a suitable parameter. For the parameter OPT, the problem is \(W[1]\)-hard even when restricted to split graphs [4]. These lower bounds lead to the study of parameterization by \(W+\mathrm{OPT}\). The best known algorithm with respect to these parameters finds an optimal solution in time \(n^{O(1)}\cdot 2^{O(\log(W)\cdot\mathrm{OPT})}\)[4]. 
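The feasibility condition just described — every connected component left after removing the separator has at most \(W\) vertices — can be checked mechanically. The following is a minimal sketch (not from the paper; graph representation and helper names are our own):

```python
from collections import deque

# A minimal checker for the W-separator property: after removing the
# separator, every connected component of the remaining graph must
# have at most W vertices.

def is_w_separator(n, edges, separator, W):
    sep = set(separator)
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = set(sep)
    for s in range(n):
        if s in seen:
            continue
        # BFS over the graph with the separator vertices removed
        size, queue = 0, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if size > W:
            return False
    return True

# W = 1 is vertex cover: on the path 0-1-2-3, removing {1, 2} isolates 0 and 3.
print(is_w_separator(4, [(0, 1), (1, 2), (2, 3)], {1, 2}, W=1))  # True
print(is_w_separator(4, [(0, 1), (1, 2), (2, 3)], {1}, W=1))     # False: edge 2-3 survives
```

The \(W=1\) case in the example is exactly the vertex cover specialisation mentioned above.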
Unless the exponential time hypothesis fails, the authors prove that this running time is tight up to constant factors, i.e., there is no algorithm that solves the problem in time \(n^{\mathcal{O}(1)}\cdot 2^{\mathrm{o}(\mathrm{OPT}\cdot\log(W))}\). For kernelizations with respect to the parameters OPT and \(W\), the best known polynomial algorithm achieves a kernel of size \(3W\cdot\mathrm{OPT}\)[2]. A kernel of size \(2W\cdot\mathrm{OPT}\) is provided in [9] in a runtime of \(n^{\mathcal{O}(1)}\cdot 2^{\mathcal{O}(W)}\) by using linear programming methods. That is, for the vertex cover problem, i.e. \(W=1\), they obtain a \(2\cdot\mathrm{OPT}\) size-kernel, showing that under the assumption that the unique games conjecture is true, \(2W\cdot\mathrm{OPT}\) is the best kernel we can hope for. Finally, the best known approximation algorithm also uses linear programming methods and has a gap guarantee of \(\mathcal{O}(\log(W))\) with a running time of \(2^{\mathcal{O}(W)}\cdot n^{\mathcal{O}(1)}\)[10]. They also showed that a superpolynomial dependence on \(W\) may be needed to achieve a polylogarithmic approximation. Our Contribution: Of particular interest in our work are kernelizations and the reducible structures used for them. We show that in expectation the algorithms make incremental progress in finding such structures and beyond. Compared to the vertex cover problem, kernelization algorithms that are linear in \(\mathrm{OPT}\) for the \(W\)-separator are more complicated (cf. [18, 2, 9]). The current best known kernelization of the \(W\)-separator uses linear programming methods and requires a non-trivial post-process to extract the reducible structures [9]. The challenge in this paper is to show that natural objectives and simple as well as problem-independent mutations are also capable of extracting them. To this end, we add additional structural features to the reducible structures used in [9]. 
Essentially, our results show that evolutionary algorithms with different objectives guide the search and admit fixed parameterized runtimes to solve or approximate (even arbitrarily close) the \(W\)-separator problem. The different runtimes are given in this paper in terms of the number of iterations, but the tractability with respect to the considered parameters also applies when we include search point evaluations. In the following, we roughly describe the runtimes achieved with respect to the search point evaluations for exact and approximate solutions, where all results are given in expectation. We consider simple and problem-independent evolutionary algorithms in combination with three different multi-objective fitness functions. The first consists of relatively simple calculations to evaluate the search points and allows us to achieve a running time of \(n^{\mathcal{O}(1)}\cdot 2^{\mathcal{O}(\mathrm{OPT}^{2}\cdot W^{2})}\) to find an optimal solution. For the second and third fitness functions, stronger objectives are used in the sense of applying linear programming methods. We prove that with such evaluations the optimal solution can be found in time \(n^{\mathcal{O}(1)}\cdot 2^{\mathcal{O}(\mathrm{OPT}\cdot W)}\). Moreover, depending on the choice of an \(\varepsilon\in[0,1)\), we obtain solutions arbitrarily close to an optimal one, where the corresponding algorithm is tractable with respect to the parameters \(\mathrm{OPT}\) and \(W\). As usual, the larger \(\varepsilon\), the worse the gap guarantee, but the better the running time, where \(\varepsilon=0\) corresponds to the above running time for finding an optimal solution. This result shows that we can hope for gradual progress until an optimal solution is reached. 
Finally, our results show that in expectation evolutionary algorithms are not far away from the problem-specific ones, with the literature review showing that up to constant factors the evolved algorithms are close to the lower bounds for the \(W\)-separator problem. ## 2 Preliminaries Graph TerminologyWe begin with a brief introduction to the graph terminology we use in this paper. Let \(G=(V,E)\) be a graph. For a subgraph \(G^{\prime}=(V^{\prime},E^{\prime})\) of \(G\) we use \(V(G^{\prime})\) and \(E(G^{\prime})\) to denote \(V^{\prime}\) and \(E^{\prime}\), respectively. We define the _size_ of a subgraph \(G^{\prime}\subseteq G\) as the number of its vertices, where we denote the size of \(G\) by \(n\). For \(v\in V\) we define \(N(v)\) as its neighborhood, and \(d(v)\) as the degree of \(v\). For a vertex subset \(V^{\prime}\subseteq V\) we define \(G[V^{\prime}]\) as the induced subgraph of \(V^{\prime}\), \(G-V^{\prime}:=G[V\setminus V^{\prime}]\) and \(N(V^{\prime}):=\left(\bigcup_{v\in V^{\prime}}N(v)\right)\setminus V^{\prime}\). Finally, in the context of this work, we also use directed graphs in the sense of flow networks, where we move the corresponding terminology to the appendix next to the proofs. Parameterized TerminologyWe use the standard terminology for parameterized complexity, which is also used, for example, in [3, 5]. A _parameterized problem_ is a decision problem with respect to certain instance parameters. Let \(I\) be an instance of a parameterized problem with an instance parameter \(k\), usually given as a pair \((I,k)\). If for each pair \((I,k)\) there exists an algorithm that solves the decision problem in time \(f(k)\cdot|I|^{c}\), where \(f\) is a computable function and \(c\) is a constant, then the parameterized problem is _fixed-parameter tractable_. We say \((I,k)\) is a _yes-instance_ if the answer to the decision problem is positive, otherwise we say \((I,k)\) is a _no-instance_. 
Of particular interest in this work are kernelizations, which can be roughly described as formalized preprocessings. More formally, given an instance \((I,k)\) of a parameterized problem, a polynomial algorithm is called a kernelization if it maps any \((I,k)\) to an instance \((I^{\prime},k^{\prime})\) such that \((I^{\prime},k^{\prime})\) is a yes-instance if and only if \((I,k)\) is a yes-instance, \(|I^{\prime}|\leq g(k)\), and \(k^{\prime}\leq g^{\prime}(k)\) for computable functions \(g,g^{\prime}\). The idea of parameterized complexity can be extended by combining multiple parameters. That is, if we consider an instance \(I\) with parameters \(k_{1},\ldots,k_{m}\), then we are interested in algorithms that solve the corresponding decision problem in a runtime of \(f(k_{1},\ldots,k_{m})\cdot|I|^{c}\), where \(f\) is a computable function and \(c\) is a constant. We refer to runtimes that satisfy this type of form as _FPT-times_. Problem Statement and ObjectivesFirst we introduce the _\(W\)-separator problem_. Given is a graph \(G=(V,E)\) and two positive integers \(k\) and \(W\). The challenge is to find a vertex subset \(V^{\prime}\subseteq V\), such that \(V^{\prime}\) has cardinality at most \(k\) and the removal of \(V^{\prime}\) in \(G\) leads to a graph that contains only connected components of size at most \(W\). The minimization problem is to find \(V^{\prime}\) with the smallest cardinality, where we denote the optimal objective value by OPT. Note that we can reformulate the problem statement by demanding that \(V^{\prime}\) intersects with each connected subgraph of \(G\) of size \(W+1\). In the case \(W=1\) a separator needs to cover each edge, which shows that the \(W\)-separator problem is a natural generalization of the well-known vertex cover problem. In terms of evolutionary algorithms, a solution to the \(W\)-separator problem can be represented in a bit sequence of length \(n\). 
Each vertex has value zero or one, where one stands for the vertex being part of the \(W\)-separator. Let \(\{0,1\}^{n}\) be our solution space. We work with multi-objective evolutionary algorithms, which evaluate each search point \(X\in\{0,1\}^{n}\) using a fitness function \(f\colon\{0,1\}^{n}\to\mathds{R}^{m}\) with \(m\) different objectives. The goal is to minimize each of the objectives. Denote by \(f^{i}(X)\) the \(i\)-th objective, evaluated at a search point \(X\). For two search points \(X_{1}\) and \(X_{2}\), we say \(X_{1}\)_weakly dominates_\(X_{2}\) if \(f^{i}(X_{1})\leq f^{i}(X_{2})\) for every \(i\in[m]\), where \([m]\) is defined as the set \(\{1,\ldots,m\}\). In this case, we simply write \(f(X_{1})\leq f(X_{2})\). If additionally \(f(X_{1})\neq f(X_{2})\), then we say that \(X_{1}\)_dominates_\(X_{2}\). We distinguish between pareto optimal search points \(X\) and vectors \(f(X)\). A pareto optimal search point is a search point that is not even weakly dominated by any other search point, whereas a pareto optimal vector is not dominated by any other vector. That is, if \(f(X_{1})\) is a pareto optimal vector, then there can be a search point \(X_{2}\neq X_{1}\) with \(f(X_{2})=f(X_{1})\), whereas if \(X_{1}\) is a pareto optimal search point, then there is no search point \(X_{2}\neq X_{1}\) with \(f(X_{1})=f(X_{2})\). For some fitness functions we investigate, we use a linear program to evaluate the search points. Let \(G=(V,E)\) be an instance of the \(W\)-separator and let \(y_{v}\in\{0,1\}\) be a variable for each \(v\in V\). An integer program (IP) that solves the \(W\)-separator problem can be formulated as follows: \[\min \sum_{v\in V}y_{v}\] \[\sum_{v\in S}y_{v}\geq 1,\forall S\subseteq V\colon|S|=W+1\text{ and }G[S] \text{ is connected}\] We will consider the relaxed version of the IP by allowing fractional solutions and consider the corresponding linear program (LP). 
That is, instead of \(y_{v}\in\{0,1\}\) we have \(y_{v}\geq 0\) for all \(v\in V\). In the rest of this paper we will call it the _\(W\)-separator LP_. We define \(\operatorname{LP}(G^{\prime})\) for a subgraph \(G^{\prime}\subseteq G\) as the objective of the \(W\)-separator LP with \(G^{\prime}\) as input graph. If we put every connected subgraph of size \(W+1\) as constraint in the LP formulation of the \(W\)-separator, then we end up with a running time of \(n^{\mathcal{O}(W)}\). However, as mentioned already in Fomin et al. [5] (Section 6.4.2), finding an optimal solution for the LP can be sped up to a running time of \(2^{\mathcal{O}(W)}n^{\mathcal{O}(1)}\). Roughly speaking, the idea is to use the ellipsoid method with separation oracles to solve the linear program, where the separation oracle uses a method called color coding that makes it tractable in \(W\). Next, we define a few additional terms before we get to the multi-objective fitness functions. Let \(X\in\{0,1\}^{n}\) be a search point. For \(v\in V\) we define \(x_{v}\in\{0,1\}\) as the corresponding value in the bit-string \(X\). We denote by \(X_{1}\subseteq V\) the vertices with value one. We define \(u(X)\) as the set of vertices that are in components of size at least \(W+1\) after the removal of \(X_{1}\) in \(G\). The function \(u(X)\) can be interpreted as the uncovered portion of the graph with respect to the vertices \(X_{1}\). The fitness functions we work with are the following: * \(f_{1}(X):=\left(|X_{1}|,|u(X)|,-\sum_{v\in X_{1}}d(v)\right)\) * \(f_{2}(X):=\left(|X_{1}|,|u(X)|,\operatorname{LP}(G[u(X)])\right)\) * \(f_{3}(X):=\left(|X_{1}|,\operatorname{LP}(G[u(X)])\right)\) As the names suggest, we use _one-objective_, _uncovered-objective_, _degree-objective_ and _LP-objective_ to denote \(|X_{1}|,|u(X)|,-\sum_{v\in X_{1}}d(v)\) and \(\operatorname{LP}(G[u(X)])\) respectively. Note that the fitness \(f_{3}\) is the same as \(f_{2}\) without the uncovered-objective. 
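These objectives can be computed directly from a search point. A minimal sketch of \(u(X)\) and the degree-based fitness \(f_{1}\) (the adjacency-list representation and helper names are our own, not from the paper):

```python
# Sketch of u(X) and the degree-based fitness f1 for a bit-string search
# point X over an adjacency-list graph.

def components_after_removal(adj, removed):
    """Connected components of the graph once `removed` vertices are deleted."""
    comps, seen = [], set(removed)
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def fitness_f1(adj, X, W):
    X1 = {v for v, bit in enumerate(X) if bit == 1}
    # u(X): vertices lying in components of size > W once X1 is removed
    u_X = [v for comp in components_after_removal(adj, X1) if len(comp) > W for v in comp]
    return len(X1), len(u_X), -sum(len(adj[v]) for v in X1)

# Path 0-1-2-3, W = 1: the empty search point leaves one uncovered 4-vertex component.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(fitness_f1(path, [0, 0, 0, 0], W=1))  # (0, 4, 0)
print(fitness_f1(path, [0, 1, 1, 0], W=1))  # (2, 0, -4)
```

Minimising the third (negated degree-sum) component rewards selecting high-degree vertices, which is exactly what the degree-objective is for in Section 3.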
Furthermore, we use \(*\) to denote that an objective can be chosen arbitrarily, for instance in \((|X_{1}|,*,-\sum_{v\in X_{1}}d(v))\) the uncovered-objective \(u(X)\) is arbitrary. AlgorithmsWe proceed by presenting the algorithms that we study. All of them are based on Global Semo (see Algorithm 1), which maintains a population \(\mathcal{P}\subseteq\{0,1\}^{n}\) of \(n\)-dimensional bit strings. We define the Algorithm Global Semo Alt similarly to the Algorithm Global Semo (see Algorithm 1) with the difference that the mutation in line 5 is exchanged by the Alternative Mutation Operator (see Algorithm 2). The following two lemmas will be useful throughout the whole paper. Their proofs are similar to some appearing in [8], and due to space constraints we have moved them to the appendix (Section A). **Lemma 1**.: _Let \(\mathcal{P}\neq\varnothing\) be a population for the fitness functions \(f_{1}\) and \(f_{2}\). In the Algorithms Global Semo and Global Semo Alt, selecting a certain search point \(X\in\mathcal{P}\) has probability \(\Omega(1/n^{2})\), and additionally flipping only one single bit in it has probability \(\Omega(1/n^{3})\)._ Let \(0^{n}\) be the search point that contains only zeroes. Note that once \(0^{n}\) is in the population it is pareto optimal for all fitness functions because of the one-objective. **Lemma 2**.: _Using the fitness functions \(f_{1}\) or \(f_{2}\), the expected number of iterations of Global Semo or Global Semo Alt until the population \(\mathcal{P}\) contains the search point \(0^{n}\) is \(\mathcal{O}(n^{3}\log n)\)._ The following lemmas are proven analogously to Lemmas 1 and 2 by observing that the worst-case bounds on the population size decrease by a factor of \(n\) when using fitness function \(f_{3}\) instead of \(f_{1}\) or \(f_{2}\). **Lemma 3**.: _Let \(\mathcal{P}\neq\varnothing\) be a population for the fitness function \(f_{3}\). 
In the Algorithms Global Semo and Global Semo Alt, selecting a certain search point \(X\in\mathcal{P}\) has probability \(\Omega(1/n)\), and additionally flipping only one single bit in it has probability \(\Omega(1/n^{2})\)._ **Lemma 4**.: _Using the fitness function \(f_{3}\), the expected number of iterations of Global Semo or Global Semo Alt until the population \(\mathcal{P}\) contains the search point \(0^{n}\) is \(\mathcal{O}(n^{2}\log n)\)._ ## 3 Analysis for degree-based fitness function In this section we investigate the fitness \(f_{1}\) on Global Semo Alt. We will prove that the algorithm finds an optimal \(W\)-separator in expectation in FPT-runtime with the parameters OPT and \(W\). Recall that the parameter \(k\) in the decision variant of the \(W\)-separator asks for a \(W\)-separator of size at most \(k\). A more general variant, known as the _weighted component order connectivity problem_, was studied in [4] by Drange et al. They achieve an \(\mathcal{O}(k^{2}W+W^{2}k)\) vertex-kernel, which also holds for the \(W\)-separator problem. **Theorem 5** ([4], Theorem 15).: _The \(W\)-separator admits a kernel with at most \(kW(k+W)+k\) vertices, where \(k\) is the solution size._ Essentially, they use the following _reduction rule_: as long as there is a vertex with degree greater than \(k+W\), the vertex is included in the solution set and may be removed from the instance. It is not difficult to see that this vertex must be included in the solution, since otherwise we would have to take more than \(k\) vertices from its neighborhood to get a feasible solution. After using this reduction rule exhaustively, each vertex in the reduced instance has degree at most \(k+W\). Consequently, in the reduced instance, each vertex of a \(W\)-separator is connected to at most \(k+W\) connected components after its removal, where each of those components has size at most \(W\). A simple calculation finally yields the vertex-kernel stated in Theorem 5. 
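The reduction rule just described can be sketched as follows. We keep the threshold fixed at \(k+W\), as in the description above; a full kernelization would typically also decrement the budget \(k\) after each forced vertex. Helper names are our own:

```python
# The high-degree reduction rule described above: any vertex with degree
# greater than k + W must be in every W-separator of size <= k, so it can
# be moved into the solution and deleted from the instance (a sketch;
# the threshold k + W is kept fixed here for simplicity).

def apply_high_degree_rule(adj, k, W):
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) > k + W:
                forced.add(v)
                for w in adj[v]:      # delete v from its neighbours' lists
                    adj[w].remove(v)
                del adj[v]            # ... and from the graph
                changed = True
    return forced

# A star with 5 leaves, k = 2, W = 2: the centre has degree 5 > k + W = 4,
# so it is forced into the solution; all leaves then have degree 0.
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(apply_high_degree_rule(star, k=2, W=2))  # {0}
```

After the rule is exhausted, every remaining vertex has degree at most \(k+W\), which is the property the kernel-size calculation above relies on.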
Now, we make use of the degree-objective from \(f_{1}\) to find a search point that selects those vertices which can be safely added to an optimal solution according to the reduction rule. **Lemma 6**.: _Using the fitness function \(f_{1}\), the expected number of iterations of Global Semo Alt until the population \(\mathcal{P}\) contains a solution \(X\) in which for all \(u\in u(X)\) and for all \(v\in X_{1}\) we have \(d(u)\leq\text{OPT}+W\) and \(d(v)>\text{OPT}+W\) is bounded by \(\mathcal{O}(n^{3}(\text{OPT}+\log n))\)._ With Lemma 6 in hand we can upper bound the expected number of iterations that Global Semo Alt takes to find an optimal \(W\)-separator with respect to the fitness \(f_{1}\). Note that the uncovered-objective of \(f_{1}\) ensures that the algorithm Global Semo Alt converges to a feasible solution and that a search point \(X\) with \(f_{1}(X)=(\text{OPT},0,*)\) corresponds to an optimal \(W\)-separator. **Theorem 7**.: _Using the fitness function \(f_{1}\), the expected number of iterations of Global Semo Alt until it finds a minimum \(W\)-separator in \(G=(V,E)\) is upper bounded by \(\mathcal{O}\left(n^{3}(\text{OPT}+\log n)+n^{2}\cdot 2^{q}\right)\), where \(q=\text{OPT}\cdot W(\text{OPT}+W)+\text{OPT}\)._ ## 4 Analysis for LP-based fitness function In this section we investigate \(f_{2}\) on Global Semo Alt. The main result of this section is the following theorem. **Theorem 8**.: _Let \(G=(V,E)\) be an instance of the \(W\)-separator problem. Using the fitness function \(f_{2}\), the expected number of iterations of Global Semo Alt until an optimal solution is sampled is upper bounded by \(\mathcal{O}(n^{3}(\log n+\text{OPT})+n^{2}\cdot 4^{\text{OPT}:W})\)._ First, we give a brief overview of a reducible structure concerning the \(W\)-separator problem associated with the objectives in the fitness function \(f_{2}\). The structure we will use is commonly known as crown decomposition. 
Roughly speaking, a crown decomposition is a division of the set of vertices into three parts consisting of a crown, a head, and a body, with the head separating the crown from the body. Under certain conditions concerning the crown and head vertices, which we will clarify in a moment, it is possible to show that there exists an optimal \(W\)-separator which contains the head vertices, and the given instance can be reduced by removing the crown vertices. Recall that the parameter \(k\) in the decision variant of the \(W\)-separator asks for a \(W\)-separator of size at most \(k\). Kumar and Lokshtanov [9] provide such a reducible structure and show that it exists in a graph as long as the graph has more than \(2kW\) vertices. The structure is called a (strictly) reducible pair and consists of crown and head vertices. For an instance \(G=(V,E)\) of the \(W\)-separator problem we say that \(Y=\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in V}\) is a _fractional \(W\)-separator_ of \(G\) if \(Y\) is a feasible solution according to the LP formulation of the \(W\)-separator problem. It is not difficult to see that the objective of any optimal fractional \(W\)-separator is at most \(\mathrm{OPT}\), i.e. \(\mathrm{LP}(G)\leq\mathrm{OPT}\). In principle, the LP objective is useful for finding a strictly reducible pair, since the head vertices in an optimal fractional \(W\)-separator must have value one. Unfortunately, it is unknown whether each vertex that has value one in an optimal fractional \(W\)-separator is part of an optimal solution. This leads to the challenge of filtering out the right vertices, where the uncovered-objective - and in particular the structural properties of a strictly reducible pair - come into play.

Reducible Structure of the \(W\)-Separator Problem

In the following, we briefly summarize the definitions and theorems of [2, 5, 9]. For a vertex set \(B\subseteq V\), denote by \(\mathcal{B}\) the partitioning of \(B\) according to the connected components of \(G[B]\).
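The paper's exact LP formulation appears earlier in the text; a standard relaxation (assumed here purely for illustration) demands one unit of fractional weight on every connected vertex set of size \(W+1\). Under that reading, the indicator vector of any \(W\)-separator is LP-feasible, which is exactly why \(\mathrm{LP}(G)\leq\mathrm{OPT}\):

```python
from itertools import combinations

def connected_sets(adj, size):
    """All connected vertex sets of the given size (brute force; tiny graphs only)."""
    found = []
    for cand in combinations(sorted(adj), size):
        cand = set(cand)
        stack, seen = [next(iter(cand))], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(u for u in adj[v] if u in cand)
        if seen == cand:  # candidate induces a connected subgraph
            found.append(cand)
    return found

def lp_feasible(adj, y, W, tol=1e-9):
    """Assumed constraint family: every connected set of W + 1 vertices
    must carry total fractional weight at least one."""
    return all(sum(y[v] for v in P) >= 1 - tol for P in connected_sets(adj, W + 1))
```

For instance, on a five-vertex path with \(W=1\) the indicator of the separator \(\{b,d\}\) satisfies every edge constraint, so the integral optimum upper-bounds the LP optimum.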
**Definition 9** ((strictly) reducible pair).: _For a graph \(G=(V,E)\), a pair \((A,B)\) of vertex disjoint subsets of \(V\) is a reducible pair if the following conditions are satisfied:_ * \(N(B)\subseteq A\)_._ * _The size of each_ \(C\in\mathcal{B}\) _is at most_ \(W\)_._ * _There is an assignment function_ \(g\colon\mathcal{B}\times A\to\mathds{N}_{0}\)_, such that_ * _for all_ \(C\in\mathcal{B}\) _and_ \(a\in A\)_, if_ \(g(C,a)\neq 0\)_, then_ \(a\in N(C)\)_,_ * _for all_ \(a\in A\) _we have_ \(\sum_{C\in\mathcal{B}}g(C,a)\geq 2W-1\)_,_ * _for all_ \(C\in\mathcal{B}\) _we have_ \(\sum_{a\in A}g(C,a)\leq|C|\)_._ _In addition, if there exists an \(a\in A\) such that \(\sum_{C\in\mathcal{B}}g(C,a)\geq 2W\), then \((A,B)\) is a strictly reducible pair._ Next, we roughly explain the idea behind a reducible pair \((A,B)\). The head and crown vertices correspond to \(A\) and \(B\), respectively. That is, we want \(A\) to be part of our \(W\)-separator, and if that is the case, then no additional vertex from \(B\) is required to be in the solution, since after removing \(A\) from \(G\) the components \(C\in\mathcal{B}\) are isolated and satisfy \(|C|\leq W\). Let \(G=(V,E)\) be a graph. We say that \(P_{1},\ldots,P_{m}\subseteq V\) is a _(\(W+1\))-packing_ if for all \(i,j\in[m]\) with \(i\neq j\) the induced subgraph \(G[P_{i}]\) is connected, \(|P_{i}|\geq W+1\), and \(P_{i}\cap P_{j}=\varnothing\). Note that for a \(W\)-separator \(S\subseteq V\), it holds that \(S\cap P_{i}\neq\varnothing\) for all \(i\in[m]\). Thus, the size of a \((W+1)\)-packing is a lower bound on the number of vertices needed for a \(W\)-separator. **Lemma 10** ([9], Lemma 17).: _Let \((A,B)\) be a reducible pair in \(G\). There is a (\(W+1\))-packing \(P_{1},\ldots,P_{|A|}\) in \(G[A\cup B]\), such that \(|P_{i}\cap A|=1\) for all \(i\in[|A|]\)._ Essentially, Lemma 10 provides a lower bound of \(|A|\) vertices for a \(W\)-separator in \(G[A\cup B]\).
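Packings certify lower bounds because any \(W\)-separator must intersect each of the disjoint connected sets. Even a simple greedy packing (a sketch of mine, not the paper's construction via assignment functions) already produces such a certificate:

```python
def greedy_packing(adj, W):
    """Greedily collect vertex-disjoint connected sets of W + 1 vertices.
    Every W-separator must hit each set, so len(result) <= OPT."""
    used, packing = set(), []
    for start in sorted(adj):
        if start in used:
            continue
        part, frontier = [start], [start]
        taken = {start}
        while frontier and len(part) < W + 1:
            v = frontier.pop()
            for u in sorted(adj[v]):
                if u not in used and u not in taken:
                    taken.add(u)
                    part.append(u)
                    frontier.append(u)
                    if len(part) == W + 1:
                        break
        if len(part) == W + 1:   # keep only full-size connected sets
            packing.append(part)
            used |= taken
    return packing
```

On a five-vertex path with \(W=1\) this finds two disjoint edges, matching the true optimum of two separator vertices.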
On the other hand, \(A\) is a \(W\)-separator of \(G[A\cup B]\), and \(A\) separates \(B\) from the rest of the graph. These properties basically admit the following theorem. **Theorem 11** ([9], Lemma 18).: _Let \((G,k)\) be an instance of the \(W\)-separator problem, and \((A,B)\) be a reducible pair in \(G\). \((G,k)\) is a yes-instance if and only if \((G-(A\cup B),k-|A|)\) is a yes-instance._ Finally, we clarify why a strictly reducible pair exists if the size of \(G\) is larger than \(2kW\). To do so, we make use of a lemma derivable from [2, 9]. A proof is given in the appendix (Appendix C). **Lemma 12**.: _Let \(G=(A\cup B,E)\) be a graph and \(W\in\mathds{N}_{0}\). Let \(\mathcal{B}\) be the connected components given as vertex sets of \(G[B]\), where for each \(C\in\mathcal{B}\) we have \(|C|\leq W\) and no \(C\in\mathcal{B}\) is isolated, i.e. \(N(C)\neq\varnothing\). If \(|B|\geq(2W-1)|A|+1\), then there exists a non-empty strictly reducible pair \((A^{\prime},B^{\prime})\), where \(A^{\prime}\subseteq A\) and \(B^{\prime}\subseteq B\)._ We conclude the preliminary section with a lemma that connects strictly reducible pairs with the size of the graph. **Lemma 13** ([5], Lemma 6.14).: _Let \((G,k)\) be an instance of the \(W\)-separator problem, such that each component in \(G\) has size at least \(W+1\). If \(|V|>2Wk\) and \((G,k)\) is a yes-instance, then there exists a strictly reducible pair \((A,B)\) in \(G\)._

Running time analysis

Let \((A,B)\) be a strictly reducible pair. We say \((A,B)\) is a minimal strictly reducible pair if there does not exist a strictly reducible pair \((A^{\prime},B^{\prime})\) with \(A^{\prime}\subset A\) and \(B^{\prime}\subseteq B\). Clearly, it can happen that new reducible pairs arise after a reduction is executed.
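Theorem 11 licenses a simple reduction step once a reducible pair is in hand; the helper below is a hypothetical sketch (it deliberately does not verify that \((A,B)\) actually satisfies Definition 9):

```python
def apply_reduction(adj, A, B, k):
    """Theorem 11 step: delete head A and crown B, lower the budget by |A|."""
    keep = set(adj) - set(A) - set(B)
    reduced = {v: adj[v] & keep for v in keep}
    return reduced, k - len(A)
```

Whether the reduced instance is still a yes-instance then hinges entirely on the pair being reducible, which is exactly what Theorem 11 guarantees.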
Therefore, we say \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) is a _sequence of minimal strictly reducible pairs_ if for all \(i\in[m]\) the tuple \((A_{i},B_{i})\) is a minimal strictly reducible pair in \(G-\bigcup_{j=1}^{i-1}A_{j}\). Note that the definition of such a sequence implies that those tuples are pairwise disjoint, i.e., \((A_{i}\cup B_{i})\cap(A_{j}\cup B_{j})=\varnothing\) for all \(i,j\in[m]\) with \(i\neq j\). The proof of Theorem 8 can essentially be divided into three phases: 1. Let \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs in \(G\), such that \(G-\bigcup_{i\in[m]}A_{i}\) contains no minimal strictly reducible pair. The first phase is to show that after a polynomial number of iterations of Global Semo Alt with fitness \(f_{2}\), a search point \(X\in\{0,1\}^{n}\) exists in the population \(\mathcal{P}\), such that \(\mathrm{LP}(G)=|X_{1}|+\mathrm{LP}(G[u(X)])\) and there is an optimal fractional \(W\)-separator \(Y=\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in u(X)}\) with \(y_{v}<1\) for each \(v\in u(X)\). We will prove that in this case \(G[u(X)]\) contains no strictly reducible pair, and that because of the equality relation \(\mathrm{LP}(G)=|X_{1}|+\mathrm{LP}(G[u(X)])\) all the head vertices \(A_{i}\) for \(i\in[m]\) are in \(X_{1}\). That is, there is an optimal \(W\)-separator which contains a subset of \(X_{1}\). 2. The second phase is to filter \(\bigcup_{i=1}^{m}A_{i}\) from \(X_{1}\) so that we obtain a search point \(X^{\prime}\) that selects only those vertices as 1-bits. Once an \(X\) as described in Phase (i) is guaranteed to be in the population, the algorithm Global Semo Alt takes in expectation FPT-time to reach \(X^{\prime}\). Finally, it is important that \(X^{\prime}\) remains in the population once we have found it. We show this by taking advantage of the structural properties of a reducible pair in combination with the uncovered-objective. 3.
For the last phase, we already know by Lemma 13 that \(u(X^{\prime})\) has size at most \(2\cdot\mathrm{OPT}\cdot W\). Once we ensure that \(X^{\prime}\) is in \(\mathcal{P}\) and stays there, we prove that Global Semo Alt finds in expectation an optimal solution in FPT-time. In phase (i), we essentially make use of the LP objective. To prove that it works successfully, we will show the following two lemmas. **Lemma 14**.: _Using the fitness function \(f_{2}\), the expected number of iterations of Global Semo Alt until the population \(\mathcal{P}\) contains a search point \(X\in\{0,1\}^{n}\) such that \(\mathrm{LP}(G)=\mathrm{LP}(G[u(X)])+|X_{1}|\) and there is an optimal fractional \(W\)-separator \(\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in u(X)}\) of \(G[u(X)]\) with \(y_{v}<1\) for every \(v\in u(X)\) is upper bounded by \(\mathcal{O}(n^{3}(\log n+\text{OPT}))\). Moreover, once \(\mathcal{P}\) contains such a search point at any iteration, the same holds for all future iterations._ **Lemma 15**.: _Let \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs in \(G\), such that \(G-\bigcup_{i=1}^{m}A_{i}\) contains no minimal strictly reducible pair. Let \(X\in\{0,1\}^{n}\) be a sample, such that there is an optimal fractional \(W\)-separator \(\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in u(X)}\) of \(G[u(X)]\) with \(y_{v}<1\) for each \(v\in u(X)\). If \(|X_{1}|+\text{LP}(G[u(X)])=\text{LP}(G)\), then \(A_{i}\subseteq X_{1}\) and \(B_{i}\cap X_{1}=\varnothing\) for all \(i\in[m]\)._ Throughout the rest of this section, we use \(X\in\{0,1\}^{n}\) to denote a search point and \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) a sequence of minimal strictly reducible pairs, where \(A_{\mathtt{all}}:=\bigcup_{i=1}^{m}A_{i}\) and \(B_{\mathtt{all}}:=\bigcup_{i=1}^{m}B_{i}\). It is unlikely that the algorithm Global Semo Alt jumps from a uniformly random search point immediately to a search point satisfying Lemma 15.
To guarantee stepwise progress, we want that, under the condition \(\text{LP}(G)=\text{LP}(G[u(X)])+|X_{1}|\), whenever \(A_{\mathtt{all}}\not\subseteq X_{1}\) there exists a vertex \(v\in A_{\mathtt{all}}\setminus X_{1}\) which must have value one in an optimal fractional \(W\)-separator of \(G[u(X)]\). For this purpose, the characterization of minimal strictly reducible pairs by optimal fractional \(W\)-separators is useful. **Lemma 16** ([5], Corollary 6.19 and Lemma 6.20).: _Let \(G=(V,E)\) be an instance of the \(W\)-separator problem and let \(\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in V}\) be an optimal fractional \(W\)-separator of \(G\). If \(G\) contains a minimal strictly reducible pair \((A,B)\), then \(y_{v}=1\) for all \(v\in A\) and \(y_{u}=0\) for all \(u\in B\)._ From Lemma 16 we can derive that if \((A_{\mathtt{all}}\cup B_{\mathtt{all}})\cap X_{1}=\varnothing\), such a vertex \(v\) must exist, but the question is what happens if the intersection is not empty. In particular, we want to avoid vertices of \(B_{\mathtt{all}}\) being in \(X_{1}\), since reducible pairs in \(G\) may then no longer exist in \(G[u(X)]\). We start with the proof of Lemma 14 and show later how it is related to a sequence of minimal strictly reducible pairs. The first lemma is a simple but useful observation. **Lemma 17**.: _For every \(X\in\{0,1\}^{n}\) it holds that \(\text{LP}(G)\leq|X_{1}|+\text{LP}(G[u(X)])\)._ Given Lemma 17, it is not difficult to derive that if it holds with equality for a search point \(X\), then \(f_{2}(X)\) is a pareto optimal vector of the fitness function \(f_{2}\), as stated in the following corollary.
**Corollary 18**.: _If a search point \(X\in\{0,1\}^{n}\) satisfies \(|X_{1}|+\text{LP}(G[u(X)])=\text{LP}(G)\), then the vector \((|X_{1}|,*,\text{LP}(G[u(X)]))\) is a pareto optimal vector of the fitness function \(f_{2}\)._ The next lemma ensures that removing vertices with value one in an optimal fractional \(W\)-separator does not affect the objective of a fractional \(W\)-separator of the remaining graph. **Lemma 19** ([5], Corollary 6.17).: _Let \(G=(V,E)\) be an instance of the \(W\)-separator problem and let \(\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in V}\) be an optimal fractional \(W\)-separator of \(G\). Let \(V^{\prime}\subseteq V(G)\), such that \(y_{v}=1\) for all \(v\in V^{\prime}\). Then, \(\{y_{v}\ |\ v\in V\setminus V^{\prime}\}\) is an optimal fractional \(W\)-separator of \(G-V^{\prime}\), i.e., \(\sum_{v\in V\setminus V^{\prime}}y_{v}=\text{LP}(G-V^{\prime})\)._ Corollary 18 and Lemma 19 allow incremental progress in the set of 1-bits with respect to search points \(X\in\mathcal{P}\) that satisfy \(|X_{1}|+\text{LP}(G[u(X)])=\text{LP}(G)\) without backstepping. With this ingredient we can prove Lemma 14 (see Section C for a proof). Since \(f_{3}\) has one less objective than \(f_{2}\), one can derive the following lemma. **Lemma 20**.: _Using the fitness function \(f_{3}\), the expected number of iterations of Global Semo Alt where the population \(\mathcal{P}\) contains no search point \(X\in\{0,1\}^{n}\) such that \(\text{LP}(G)=\text{LP}(G[u(X)])+|X_{1}|\) and there is an optimal fractional \(W\)-separator \(\{y_{v}\in\mathds{R}_{\geq 0}\}_{v\in u(X)}\) of \(G[u(X)]\) with \(y_{v}<1\) for every \(v\in u(X)\) is upper bounded by \(\mathcal{O}(n^{2}(\log n+\text{OPT}))\)._ Our next goal is to prove Lemma 15.
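Pareto optimality here refers to componentwise comparison of fitness vectors: SEMO-style algorithms keep only mutually non-dominated search points in the population. A minimal sketch of that bookkeeping (minimization assumed; tie-handling conventions vary across SEMO variants, so this is not the paper's exact rule):

```python
def dominates(f, g):
    """f weakly improves on g in every objective and strictly in one (minimization)."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def update_population(pop, x, fitness):
    """Reject offspring x if some member dominates or ties it; otherwise
    insert x and evict every member that x now dominates."""
    fx = fitness(x)
    if any(dominates(fitness(p), fx) or fitness(p) == fx for p in pop):
        return pop
    return [p for p in pop if not dominates(fx, fitness(p))] + [x]
```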
To identify the head vertices \(A_{\mathtt{all}}\) with respect to an optimal fractional \(W\)-separator, we want to ensure that the selection of vertices of \(B_{\mathtt{all}}\) is distinguishable, so that no conflict with Lemma 14 arises. To do this, we will make use of the LP-objective and show that for a search point \(X\) with \(X_{1}\cap B_{\mathtt{all}}\neq\varnothing\) we have \(\text{LP}(G)<\text{LP}(G[u(X)])+|X_{1}|\). Let \((A,B)\) be a minimal strictly reducible pair in \(G\). The essential idea is to use \((W+1)\)-packings in \(G[A\cup B]\), since they provide lower bounds for \(W\)-separators. From Lemma 10 one can deduce that \(G[A\cup B]\) contains a maximum \((W+1)\)-packing \(\mathcal{Q}\) of size \(|A|\), since every vertex of \(A\) is contained in exactly one element of \(\mathcal{Q}\). Inspired by ideas on how to find crown decompositions in weighted bipartite graphs from [2, 9], we prove that removing vertices from \(B\) only partially affects the size of the \((W+1)\)-packing in \(G[A\cup B]\), as stated in the following lemma. **Lemma 21**.: _Let \((A,B)\) be a minimal strictly reducible pair in \(G=(V,E)\) and let \(S\subset A\cup B\) with \(|S|\leq|A|\). If \(S\cap B\neq\varnothing\), then \(G[A\cup B]-S\) contains a \((W+1)\)-packing of size \(|A|-|S|+1\)._ In contrast, note that removing vertices \(S\subseteq A\) from \(G[A\cup B]\) would decrease the size of a \((W+1)\)-packing by \(|S|\), i.e., a maximum \((W+1)\)-packing in \(G[A\cup B]-S\) has size \(|A|-|S|\). We moved the proof of Lemma 21 to the appendix (Section C), since it is more technical and too long given the space constraints. Essentially, we make use of the following two lemmas and properties of network flows. In particular, these lemmas describe the new properties we have found for minimal strictly reducible pairs and may be of independent interest. **Lemma 22**.: _Let \((A,B)\) be a minimal strictly reducible pair in \(G\) with parameter \(W\).
Then, for every \(a^{*}\in A\) there is an assignment function \(g\colon\mathcal{B}\times A\to\mathds{N}_{0}\) like in Definition 9 that satisfies \(\sum_{C\in\mathcal{B}}g(C,a^{*})\geq 2W\) and \(\sum_{C\in\mathcal{B}}g(C,a)\geq 2W-1\) for every \(a\in A\setminus\{a^{*}\}\)._ Concerning Lemma 22, we remark that the new feature compared to previous results is that the particular vertex (\(a^{*}\) in the lemma) can be chosen arbitrarily. **Lemma 23**.: _Let \((A,B)\) be a minimal strictly reducible pair in \(G\) with parameter \(W\). Then, for every \(A^{\prime}\subseteq A\) we have \(|V(\mathcal{B}_{A^{\prime}})|\geq|A^{\prime}|(2W-1)+1\)._ To conclude Phase (i), we need to prove Lemma 15. Equipped with Lemma 21 we may prove statements about the LP-objective if \(X_{1}\cap B_{\mathtt{all}}\neq\varnothing\). In doing so, we prove another relation with respect to such a sequence, which fits the proof and will be useful later. **Lemma 24**.: _Let \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs and let \(X\in\{0,1\}^{n}\) be a sample._ 1. _If_ \(X_{1}=\bigcup_{i=1}^{m}A_{i}\)_, then_ \(\text{LP}(G)=\text{LP}(G[u(X)])+|X_{1}|\)_._ 2. _If_ \(X_{1}\cap B_{\ell}\neq\varnothing\) _for an_ \(\ell\in[m]\)_, then_ \(\text{LP}(G[u(X)])+|X_{1}|>\text{LP}(G)\)_._ In accordance with Lemma 14, we have characterized the case \(X_{1}\cap B_{\mathtt{all}}\neq\varnothing\). It remains to relate this lemma to the case when \(A_{\mathtt{all}}\cap X_{1}\neq\varnothing\) and \(A_{\mathtt{all}}\not\subseteq X_{1}\). In particular, we want to ensure that in this case at least one vertex of \(A_{\mathtt{all}}\setminus X_{1}\) must have value one in an optimal fractional \(W\)-separator of \(G[u(X)]\). **Lemma 25**.: _Let \((A,B)\) be a minimal strictly reducible pair in \(G\) and let \(\hat{A}\subset A\)._
Then, there is a partition \(A_{1},\ldots,A_{m}\) of \(A\setminus\hat{A}\) with disjoint vertex sets \(B_{1},\ldots,B_{m}\subseteq B\), such that for each \(i\in[m]\) the tuple \((A_{i},B_{i})\) is a minimal strictly reducible pair in \(G-\hat{A}\)._ By Lemma 16 we already know that the head vertices of a minimal strictly reducible pair in an optimal fractional \(W\)-separator have value one. Lemma 25 ensures that if some of the head vertices are removed, the value of the remaining head vertices in the respective optimal fractional solution remains one. The proof of Lemma 15 can be found in the appendix (Section C) and concludes Phase (i). Next, we prove that Phase (ii) works successfully. After Phase (i), we have a search point \(X\) in the population \(\mathcal{P}\) with \(A_{\mathtt{all}}\subseteq X_{1}\) such that \(\mathrm{LP}(G)=\mathrm{LP}(G[u(X)])+|X_{1}|\). Consequently, \(|X_{1}|\leq\mathrm{OPT}\) and therefore we can prove that Global Semo Alt reaches a search point \(X^{\prime}\) with \(X^{\prime}_{1}=A_{\mathtt{all}}\) from \(X\) in FPT-time. **Lemma 26**.: _Let \(G=(V,E)\) be an instance of the \(W\)-separator problem, and let \((A_{1},B_{1})\),..., \((A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs in \(G\), such that \(G-\bigcup_{i=1}^{m}A_{i}\) contains no strictly reducible pair. Using the fitness function \(f_{2}\), the expected number of iterations of Global Semo Alt until the population \(\mathcal{P}\) contains a search point \(X\) with \(X_{1}=\bigcup_{i=1}^{m}A_{i}\) is upper bounded by \(\mathcal{O}\left(n^{3}(\log n+\text{OPT})+2^{\text{OPT}}\right)\)._ The question that remains is whether we keep \(X^{\prime}\) in the population once we find it. This is where the uncovered-objective and the structural properties of minimal strictly reducible pairs come into play.
**Lemma 27**.: _Let \(X\in\{0,1\}^{n}\) and let \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs in \(G\), such that \(G-\bigcup_{i=1}^{m}A_{i}\) contains no strictly reducible pair. If \(X_{1}=\bigcup_{i=1}^{m}A_{i}\), then \(X\) is a pareto optimal solution._ Proof.: Let \(A=\bigcup_{i=1}^{m}A_{i}\) and \(B=\bigcup_{i=1}^{m}B_{i}\). We will prove that if there is a search point \(X^{\prime}\) that dominates \(X\), then \(G-A\) contains a minimal strictly reducible pair, contradicting the precondition of the lemma. Note that a minimal strictly reducible pair in \(G-A\) has to be in \(G[u(X)]\), as the other components in \(G-A\) have size at most \(W\). If \(X_{1}=\bigcup_{i=1}^{m}A_{i}\), then by Lemma 24 we have \(\mathrm{LP}(G)=\mathrm{LP}(G[u(X)])+|X_{1}|\). That is, we may restrict to solutions \(X^{\prime}\in\{0,1\}^{n}\) with \(\mathrm{LP}(G)=\mathrm{LP}(G[u(X^{\prime})])+|X^{\prime}_{1}|\) as well as \(|X^{\prime}_{1}|=|X_{1}|\) and can focus on the objectives \(u(X)\) and \(u(X^{\prime})\), respectively. Note that it cannot happen that \(|X^{\prime}_{1}|<|X_{1}|\) and \(\mathrm{LP}(G[u(X^{\prime})])\leq\mathrm{LP}(G[u(X)])\), as \(\mathrm{LP}(G)=\mathrm{LP}(G[u(X)])+|X_{1}|>\mathrm{LP}(G[u(X^{\prime})])+|X^{\prime}_{1}|\) would contradict Lemma 17. W.l.o.g. we can assume that every connected component of \(G-X_{1}=G-A\) of size at most \(W\) is also in \(G[B]\), i.e., \(V\setminus u(X)=A\cup B\). Let \(g_{1}\colon\mathcal{B}_{1}\times A_{1}\to\mathds{N}_{0},\ldots,g_{m}\colon\mathcal{B}_{m}\times A_{m}\to\mathds{N}_{0}\) be the according assignment functions of \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) and let \(g\colon\mathcal{B}\times A\to\mathds{N}_{0}\) be defined as \(g(a,C)=g_{i}(a,C)\) if \(a\in A_{i}\), \(C\in\mathcal{B}_{i}\) and otherwise \(g(a,C)=0\). Suppose there is such an \(X^{\prime}\neq X\) as described above with \(|u(X^{\prime})|\leq|u(X)|\), or equivalently \(|V\setminus u(X^{\prime})|\geq|V\setminus u(X)|\).
Note that \(|V\setminus u(X)|-|A|\geq|A|(2W-1)+1\), since for each \(a\in A\) we have \(\sum_{C\in\mathcal{B}}g(C,a)\geq 2W-1\) and for at least one \(a^{\prime}\in A\) we have \(\sum_{C\in\mathcal{B}}g(C,a^{\prime})\geq 2W\). We define \(V(\mathcal{B}_{\tilde{A}})\subseteq B\) for \(\tilde{A}\subseteq A\) as the vertices in the components \(\{C\in\mathcal{B}\mid N(C)\cap\tilde{A}\neq\varnothing\}\). Let \(X_{\mathtt{dif}}=X^{\prime}_{1}\setminus X_{1}=X^{\prime}_{1}\setminus A\) and let \(V_{\mathtt{dif}}=(V\setminus u(X^{\prime}))\setminus(V\setminus u(X))\). Note that \(V_{\mathtt{dif}}\subseteq u(X)\) and \(|X_{\mathtt{dif}}|=|A\setminus X^{\prime}_{1}|\) by \(|A|=|X^{\prime}_{1}|\). From \(\mathrm{LP}(G)=\mathrm{LP}(G[u(X^{\prime})])+|X^{\prime}_{1}|\), we obtain by Lemma 15 that \(X_{\texttt{dif}}\cap B=\varnothing\) and therefore \(X_{\texttt{dif}}\subseteq V_{\texttt{dif}}\), as no vertex of \(X_{\texttt{dif}}\) is in \(A\cup B=V\setminus u(X)\). Thus, by the assignment function \(g\) each vertex in \(A^{\prime}=A\setminus X^{\prime}_{1}\) is in a connected component of size at least \(W+1\) of \(G[u(X^{\prime})]\) and therefore \(A^{\prime}\) as well as \(V(\mathcal{B}_{A^{\prime}})\) are not in \(V\setminus u(X^{\prime})\). Furthermore, for at least one \(j\in[m]\) we have \(A_{j}\not\subseteq X^{\prime}_{1}\) and by Lemma 23 we obtain that \(|V(\mathcal{B}_{A_{j}\setminus X^{\prime}_{1}})|\geq|A_{j}\setminus X^{\prime}_{1}|(2W-1)+1\). That is, to now satisfy \(|V\setminus u(X^{\prime})|\geq|V\setminus u(X)|\) we must have \(|V_{\texttt{dif}}|-|X_{\texttt{dif}}|\geq|X_{\texttt{dif}}|(2W-1)+1\), as at least \(|A\setminus X^{\prime}_{1}|(2W-1)+1+|A\setminus X^{\prime}_{1}|\) vertices are in \(V\setminus u(X)\) that are not in \(V\setminus u(X^{\prime})\). Observe that each connected component \(C\) of \(G[V_{\texttt{dif}}\setminus X_{\texttt{dif}}]\) satisfies \(N(C)\cap X_{\texttt{dif}}\neq\varnothing\) and \(|C|\leq W\).
From this, we obtain that \(G[V_{\texttt{dif}}]\) contains a strictly reducible pair \((\hat{A},\hat{B})\) with \(\hat{A}\subseteq X_{\texttt{dif}}\) and \(\hat{B}\subseteq V_{\texttt{dif}}\setminus X_{\texttt{dif}}\) by Lemma 12 and therefore also a minimal strictly reducible pair \((\hat{A}^{\prime},\hat{B}^{\prime})\) with \(\hat{A}^{\prime}\subseteq\hat{A}\) and \(\hat{B}^{\prime}\subseteq\hat{B}\). In particular, \((\hat{A}^{\prime},\hat{B}^{\prime})\) is a minimal strictly reducible pair in \(G-A\), since \(V_{\texttt{dif}}\subseteq u(X)\) and the connected components in \(G[V_{\texttt{dif}}\setminus X_{\texttt{dif}}]\) exist identically in \(G[u(X)]-X_{\texttt{dif}}\). We are ready for the final theorem of this section, which shows that Phase (iii) also works successfully. Proof of Theorem 8: Let \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs, such that \(G-\bigcup_{i=1}^{m}A_{i}\) contains no strictly reducible pair. Furthermore, let \(\mathcal{P}\) be a population with respect to \(f_{2}\) in the algorithm Global Semo Alt. By Lemma 26 we have a search point \(X\in\mathcal{P}\) with \(X_{1}=\bigcup_{i=1}^{m}A_{i}\) after \(\mathcal{O}\left(n^{3}(\log n+\textsc{OPT})+2^{\textsc{OPT}}\right)\) iterations in expectation. Moreover, \(X\) is a pareto optimal solution by Lemma 27. Since \(G[u(X)]\) contains no strictly reducible pair, we can derive from Lemma 13 that \(|V(G[u(X)])|\leq 2\cdot\textsc{OPT}\cdot W\). The algorithm Global Semo Alt calls with \(1/3\) probability the mutation that flips every vertex of \(u(X)\) with \(1/2\) probability in \(X\). That is, reaching a state \(X^{\prime}\) from \(X\) such that \(X^{\prime}_{1}=V^{*}\) has a probability of at least \(\Omega\left(2^{-2\textsc{OPT}\cdot W}\right)\), where selecting \(X\) in \(\mathcal{P}\) has probability \(\Omega(1/n^{2})\) (cf. Lemma 1).
Thus, once \(X\) is contained in \(\mathcal{P}\), it takes in expectation \(\mathcal{O}\left(n^{2}\cdot 4^{\textsc{OPT}\cdot W}\right)\) iterations to reach \(X^{\prime}\). As a result, the algorithm needs in total \(\mathcal{O}\left(n^{3}(\log n+\textsc{OPT})+n^{2}\cdot 4^{\textsc{OPT}\cdot W}\right)\) iterations to find an optimal \(W\)-separator in expectation.

## 5 Approximations

In this section we consider the \(W\)-separator problem with the fitness functions \(f_{2}\) and \(f_{3}\) associated with Global Semo and Global Semo Alt. We show that the algorithms find approximate solutions with reduced runtime overhead. In particular, we prove the following theorems. **Theorem 28**.: _Using the fitness function \(f_{3}\), the expected number of iterations of Global Semo until it finds a \((W+1)\)-approximation in \(G=(V,E)\) is upper bounded by \(\mathcal{O}\left(n^{2}(\log n+W\cdot\textsc{OPT})\right)\)._ **Theorem 29**.: _Let \(G=(V,E)\) be an instance of the \(W\)-separator problem and let \(\varepsilon\in[0,1)\)._ 1. _Using the fitness function_ \(f_{2}\)_, the expected number of iterations of Global Semo Alt until an_ \((1+\varepsilon(3/2W-1/2))\)_-approximation is sampled is upper bounded by_ \(\mathcal{O}\left(n^{3}(\log n+W\cdot\textsc{OPT})+2^{\textsc{OPT}}+n^{2}\cdot 4^{(1-\varepsilon)\textsc{OPT}\cdot W}\right)\)_._ 2. _Using the fitness function_ \(f_{3}\)_, the expected number of iterations of Global Semo Alt until a_ \((2+\varepsilon(3/2W-1/2))\)_-approximation is sampled is upper bounded by_ \(\mathcal{O}\left(n^{2}(\log n+W\cdot\textsc{OPT})+n\cdot 4^{(1-\varepsilon)\textsc{OPT}\cdot W}\right)\)_._ Note that, compared to Theorem 8, Theorem 29 implies that we can hope for incremental progress towards an optimal solution. Note also that Theorem 29 ((i)) has a running time of \(\mathcal{O}\left(n^{3}(\log n+W\cdot\text{OPT})+n^{2}\cdot 4^{(1-\varepsilon)\text{OPT}\cdot W}\right)\) if \(\varepsilon<1/2\).
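The \((W+1)\) factor in Theorem 28 mirrors a classical rounding argument: if the LP places one unit of weight on every connected set of \(W+1\) vertices, each such set contains a vertex with \(y_{v}\geq 1/(W+1)\), so keeping exactly those vertices yields an integral \(W\)-separator of size at most \((W+1)\cdot\mathrm{LP}(G)\). A toy demonstration (the rounding threshold is the textbook one under that assumed LP, not a construction taken from the paper):

```python
def round_lp(y, W):
    """Keep every vertex whose fractional value reaches 1 / (W + 1)."""
    return {v for v, val in y.items() if val >= 1.0 / (W + 1)}

def max_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        best = max(best, size)
    return best

# Triangle with W = 1: y = 1/2 everywhere satisfies every edge constraint,
# and rounding keeps all three vertices -- within the (W+1) * sum(y) bound.
triangle = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
frac = {v: 0.5 for v in triangle}
separator = round_lp(frac, 1)
```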
To prove our theorems, we show that once there is a search point in the population that has a desired target value with respect to the LP-objective and the one-objective, then the algorithms find in polynomial time a \(W\)-separator that does not exceed this target value. That is, the \(1\)-bits of this search point do not necessarily have to form a \(W\)-separator. **Lemma 30**.: _Let \(G=(V,E)\) be an instance of the \(W\)-separator problem, \(\mathcal{P}\) a population with respect to the fitness function \(f_{2}\) or \(f_{3}\), \(c>\text{OPT}\), and \(X\in\mathcal{P}\) a search point satisfying \(|X_{1}|+(W+1)\cdot\text{LP}(G[u(X)])\leq c\). Using the fitness function \(f_{2}\) or \(f_{3}\), the expected number of iterations of Global Semo until it finds a \(W\)-separator \(S\) in \(G\) with \(|S|\leq c\) is upper bounded by \(\mathcal{O}\left(n^{2}W\cdot\text{OPT}\right)\) or \(\mathcal{O}\left(n^{3}W\cdot\text{OPT}\right)\), respectively._ We conclude this section with the proof of Theorem 29 ((i)). Essentially, we need to show that, within the stated runtime, we reach a search point \(X\) that satisfies the precondition of Lemma 30 with the desired approximation value. Proof of Theorem 29 ((i)): Let \(X\in\{0,1\}^{n}\) be a search point such that \(G[u(X)]\) contains no minimal strictly reducible pair (_irreducible-condition_). Furthermore, let \(S\) be an optimal solution of \(G[u(X)]\) and let \(U=u(X)\backslash S\). Note that \(|S|\leq\text{OPT}\). Since \(G[u(X)]\) contains no minimal strictly reducible pair, we have \(|U|=|u(X)|-|S|\leq 2W|S|-|S|=|S|(2W-1)\) by Lemma 13. Recall that Global Semo Alt chooses with \(1/3\) probability the mutation that flips every bit corresponding to the vertices in \(u(X)\) with \(1/2\) probability.
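The mutation just recalled, together with the standard bit mutation it is mixed with, can be sketched as follows (a schematic of mine; the exact operator mix of Global Semo Alt is defined earlier in the paper):

```python
import random

def heavy_mutation(x, uncovered_idx, rng):
    """Flip each position listed in uncovered_idx independently with
    probability 1/2; all other bits stay untouched."""
    y = list(x)
    for i in uncovered_idx:
        if rng.random() < 0.5:
            y[i] = 1 - y[i]
    return tuple(y)

def mutate(x, uncovered_idx, rng):
    """With probability 1/3 apply the heavy mutation above, otherwise
    flip each of the n bits independently with probability 1/n."""
    if rng.random() < 1 / 3:
        return heavy_mutation(x, uncovered_idx, rng)
    n = len(x)
    return tuple(1 - b if rng.random() < 1 / n else b for b in x)
```

The heavy operator is what makes the \(4^{-(1-\varepsilon)|S|\cdot W}\) event below possible in a single iteration, since it acts on all of \(u(X)\) at once.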
From this, the search point \(X\) has a probability of \(\Omega\left(2^{-(1-\varepsilon)|S|-(1-\varepsilon)|S|\cdot(2W-1)}\right)=\Omega\left(4^{-(1-\varepsilon)|S|\cdot W}\right)\) to flip \((1-\varepsilon)|S|\) fixed vertices of \(S\) and to not flip \((1-\varepsilon)|S|\cdot(2W-1)\) fixed vertices of \(U\) in one iteration. Independently from this, half of the remaining vertices of \(S\) and \(U\) are additionally flipped in this iteration, i.e., \(\frac{1}{2}\varepsilon|S|\) of \(S\) and \(\frac{1}{2}\varepsilon|U|\) of \(U\). Let \(S^{\prime}\) and \(U^{\prime}\) be the flipped vertices in this iteration and let \(X^{\prime}\) be the according search point. Note that \(\text{LP}(G[u(X^{\prime})])\leq|S|-|S^{\prime}|\) simply because there is a \(W\)-separator of \(G[u(X)]-S^{\prime}\) of size \(|S|-|S^{\prime}|\). Hence, we have \[|X^{\prime}_{1}|+(W+1)\cdot\text{LP}(G[u(X^{\prime})]) =|X_{1}|+|S^{\prime}|+|U^{\prime}|+(W+1)\cdot\text{LP}(G[u(X^{\prime})])\] \[\leq|X_{1}|+|S^{\prime}|+|U^{\prime}|+(W+1)\cdot(|S|-|S^{\prime}|)\] \[=|X_{1}|+|S|(W+1)-|S^{\prime}|W+|U^{\prime}|.\] Next, we upper bound \(|S^{\prime}|\) and \(|U^{\prime}|\) in terms of \(|S|\leq\text{OPT}\). Using the fact \(|U|\leq|S|(2W-1)\), we obtain \(|U^{\prime}|=\frac{1}{2}\varepsilon|U|\leq\frac{1}{2}\varepsilon|S|(2W-1)=\varepsilon|S|W-\frac{1}{2}\varepsilon|S|\). Regarding \(S^{\prime}\) we have \(|S^{\prime}|=(1-\varepsilon)|S|+\frac{1}{2}\varepsilon|S|=|S|-\varepsilon|S|+\frac{1}{2}\varepsilon|S|\).
As a result, we obtain \[|X^{\prime}_{1}|+(W+1)\cdot\text{LP}(G[u(X^{\prime})])\] \[\leq|X_{1}|+|S|(W+1)-|S^{\prime}|W+|U^{\prime}|\] \[\leq|X_{1}|+|S|(W+1)-(|S|-\varepsilon|S|+\frac{1}{2}\varepsilon|S|)W+\varepsilon|S|W-\frac{1}{2}\varepsilon|S|\] \[\leq|X_{1}|+|S|(W+1)-|S|W+\varepsilon|S|W-\frac{1}{2}\varepsilon|S|W+\varepsilon|S|W-\frac{1}{2}\varepsilon|S|\] \[=|X_{1}|+|S|+|S|\left(2\varepsilon W-\frac{1}{2}\varepsilon W-\frac{1}{2}\varepsilon\right).\] Observe that once a desired \(X\) is guaranteed to be in the population, an event as described above occurs after \(\mathcal{O}\left(n^{2}\cdot 4^{(1-\varepsilon)|S|\cdot W}\right)\) iterations in expectation for the fitness function \(f_{2}\), where the factor \(n^{2}\) comes from selecting \(X\) (cf. Lemma 1). Let \((A_{1},B_{1}),\ldots,(A_{m},B_{m})\) be a sequence of minimal strictly reducible pairs, such that \(G-\bigcup_{i=1}^{m}A_{i}\) contains no strictly reducible pair. By Lemma 26 we have a search point \(X\) in the population \(\mathcal{P}\) with \(X_{1}=\bigcup_{i=1}^{m}A_{i}\) after \(\mathcal{O}\left(n^{3}(\log n+\mathrm{OPT})+2^{\mathrm{OPT}}\right)\) iterations in expectation. Note that \(X\) satisfies the irreducible-condition. Moreover, \(X\) is a pareto optimal solution by Lemma 27. By Theorem 11 we have \(|X_{1}|=\mathrm{OPT}-|S|\). Using that \(|S|\leq\mathrm{OPT}\), we obtain \[|X_{1}^{\prime}|+(W+1)\cdot\mathrm{LP}(G[u(X^{\prime})]) \leq|X_{1}|+|S|+|S|\left(2\varepsilon W-\frac{1}{2}\varepsilon W-\frac{1}{2}\varepsilon\right)\] \[=\mathrm{OPT}-|S|+|S|+|S|\left(2\varepsilon W-\frac{1}{2}\varepsilon W-\frac{1}{2}\varepsilon\right)\] \[\leq\mathrm{OPT}\left(1+\varepsilon\left(\frac{3}{2}W-\frac{1}{2}\right)\right).\] As a result, by the choice of \(X\) the resulting search point \(X^{\prime}\) satisfies the precondition of Lemma 30 with \(c=\mathrm{OPT}\left(1+\varepsilon\left(\frac{3}{2}W-\frac{1}{2}\right)\right)\).
That is, once \(X^{\prime}\) is in the population \(\mathcal{P}\), the algorithm Global Semo Alt needs in expectation \(\mathcal{O}(n^{3}W\cdot\mathrm{OPT})\) iterations until \(\mathcal{P}\) contains a search point which is a \(\left(1+\varepsilon\left(\frac{3}{2}W-\frac{1}{2}\right)\right)\)-approximation. In summary, in expectation the desired search point \(X^{\prime}\) is in \(\mathcal{P}\) after \(\mathcal{O}\left(n^{3}(\log n+W\cdot\mathrm{OPT})+2^{\mathrm{OPT}}+n^{2}\cdot 4^{(1-\varepsilon)\mathrm{OPT}\cdot W}\right)\) iterations.

## 6 Conclusion

In this work, we studied the behavior of evolutionary algorithms with different multi-objective fitness functions for the \(W\)-separator problem from the perspective of parameterized complexity. More precisely, we investigated the running time of such evolutionary algorithms depending on the problem parameter \(\mathrm{OPT}+W\). Our analysis was based on properties of reducible structures, showing that, given a suitable fitness function, the evolutionary algorithm tends to reduce the given instance along these structures. Once this is done, the running time for either obtaining an arbitrarily close approximation or an exact solution is tractable with respect to the problem parameter. In particular, this shows that evolutionary algorithms solve the \(W\)-separator problem in expectation in FPT-time for the parameter \(\mathrm{OPT}+W\).
2307.03372
Triangle singularity in the $J/ψ\to γ\bar{p} Δ$ decay
In this work, we study the role of triangle singularity in the $J/\psi \to \gamma \bar{p} \Delta$ decay. We find that through a triangle mechanism, involving a triangle loop composed of $\omega$, $\pi$ and $p$, this decay may develop a triangle singularity and produce a visible peak in the invariant mass $M_{\gamma\Delta}$ around 1.73 GeV with a width of 0.02 GeV. Such a triangle mechanism may also cause significant spin effects on the final $\Delta$, which can be detected by measuring its spin density matrix elements. Our calculations show that the branching ratio due to the triangle mechanism is Br($J/\psi\to \gamma \bar p\Delta,\Delta\to \pi p$)=$1.058\times 10^{-6}$. Hopefully, this reaction can be investigated at BESIII and future experiments, e.g. the Super Tau-Charm Facility, and the narrow width of the induced structure, the moving TS position and the distinct features of the spin density matrix elements of the $\Delta$ may serve as signals for the triangle singularity mechanism.
Ke Wang, Rong Li, Bo-Chao Liu
2023-07-07T03:40:52Z
http://arxiv.org/abs/2307.03372v1
# Triangle singularity in the \(J/\psi\to\gamma\bar{p}\Delta\) decay ###### Abstract In this work, we study the role of triangle singularity in the \(J/\psi\to\gamma\bar{p}\Delta\) decay. We find that through a triangle mechanism, involving a triangle loop composed of \(\omega\), \(\pi\) and \(p\), this decay may develop a triangle singularity and produce a visible peak in the invariant mass \(M_{\gamma\Delta}\) around 1.73 GeV with a width of 0.02 GeV. Such a triangle mechanism may also cause significant spin effects on the final \(\Delta\), which can be detected by measuring its spin density matrix elements. Our calculations show that the branching ratio due to the triangle mechanism is Br\((J/\psi\to\gamma\bar{p}\Delta,\Delta\to\pi p)\)=1.058 \(\times\) 10\({}^{-6}\). Hopefully, this reaction can be investigated at BESIII and future experiments, e.g. the Super Tau-Charm Facility, and the narrow width of the induced structure, the moving TS position and the distinct features of the spin density matrix elements of the \(\Delta\) may serve as signals for the triangle singularity mechanism. ## I Introduction Triangle singularity (TS), as one kind of kinematical singularity in the scattering amplitude, was first studied by Landau (1956). Later, the corresponding physical picture of the special kinematic conditions needed to produce TS, known as the Coleman-Norton theorem, was described in Ref. [2]. Specifically, for the decay process \(A\to B+C\) proceeding through a triangle loop composed of internal particles 1, 2 and 3, the particle \(A\) first decays into particles 1 and 2, then particle 1 decays into the particle 3 and \(B\), and finally particles 2 and 3 merge into the particle \(C\). TS occurs in the amplitude only when these sub-processes take place in a classical manner. It corresponds to the case that all three intermediate particles are on shell simultaneously and their three-momenta are collinear in the rest frame of particle \(A\). 
Besides, particle 3 must move fast enough to catch up with particle 2 and merge into particle \(C\). In recent years, TS has attracted a lot of attention from researchers and has been suggested to play an essential role in understanding the nature of some observed structures and clarifying some important puzzles. The expectation is quite simple. When incident particles are moving along some fixed direction, the produced intermediate state may have spin alignment due to angular momentum conservation. For example, considering the \(\Delta\) resonance produced in the \(\pi N\) elastic scattering process in the center of mass frame, the spin projection on the z-axis of the produced \(\Delta\) can only be \(\pm\frac{1}{2}\) if we take the z-axis along the beam direction. Therefore, the spin of the \(\Delta\) is aligned and the angular distribution of its decay products is anisotropic. The spin status of the \(\Delta\) can be described by the spin density matrix elements (SDMEs) and measured through the analysis of the angular distribution of the \(\Delta\rightarrow\pi N\) decay. In the \(J/\psi\rightarrow\gamma\bar{p}\Delta\) process through the triangle diagram, according to the Coleman-Norton theorem, the \(\pi\) and \(N\) in the loop should move along the direction of the momentum of the \(\Delta\) at TS in the \(\gamma\Delta\) rest frame. It means, if we consider the helicity states of the \(\Delta\), i.e. 
choosing the quantization axis along the direction of the momentum of the \(\Delta\), the helicity should be \(\pm\frac{1}{2}\), similar to the case of the \(\Delta\) production in the \(\pi N\) elastic scattering process mentioned above. In other words, the special kinematic conditions required by the TS constrain the helicity of the \(\Delta\) in the \(\gamma\Delta\) rest frame, which is absent for other mechanisms. Therefore, if the TS mechanism indeed plays an important role in this reaction, we expect a peak structure in the \(\gamma\Delta\) invariant mass spectrum, and the production of the \(\Delta\) with helicity \(\pm\frac{1}{2}\) should be enhanced near TS. This paper is organized as follows. In Sec.II, we present the theoretical framework and amplitudes for the reaction \(J/\psi\rightarrow\gamma\bar{p}\Delta\). In Sec.III, we show the numerical results and discuss their implications. Finally, we summarize our findings and conclusions in Sec.IV. ## II Model and ingredients In this work, we shall introduce the TS mechanism in the radiative decay process \(J/\psi\rightarrow\gamma\bar{p}\Delta\) within an effective Lagrangian approach. The Feynman diagram for the process that may produce TS is shown in Fig.1. In this process, the \(J/\psi\) first decays into \(p\bar{p}\omega\), then \(\omega\) decays to a photon and a \(\pi\) meson. In the \(\gamma\Delta\) rest frame, if the \(\pi\) meson travels along the momentum of the proton produced in \(J/\psi\) decay and moves faster than it, the \(\pi\) may catch up with the proton and they can finally merge into the final \(\Delta\). According to the results in Ref.[45], TS exists in this decay process only when the special kinematic conditions are satisfied. Using the method in Ref.[45], if we adopt the nominal masses in PDG[44] for the involved particles in Fig.1, it turns out that the TS should occur at \(M_{\gamma\Delta}=1.731\ \mathrm{GeV}\). 
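As a quick kinematic cross-check (not part of the original calculation; the masses below are nominal PDG values and an assumption of this sketch), the quoted TS position lies just above the \(p\omega\) threshold, where the internal \(\omega\) and \(p\) can first be on shell simultaneously:

```python
import math

# Nominal PDG masses in GeV (assumed, rounded)
m_omega, m_p = 0.78266, 0.93827

# Two-body p-omega threshold: below this M_{gamma Delta} the internal
# omega and p can never be on shell simultaneously
threshold = m_omega + m_p  # ~1.721 GeV

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a**2 + b**2 + c**2 - 2 * (a * b + a * c + b * c)

# On-shell relative momentum of the omega-p pair at the quoted TS position
M = 1.731  # TS position in M_{gamma Delta} from the text (GeV)
q_on = math.sqrt(kallen(M**2, m_omega**2, m_p**2)) / (2 * M)

print(f"p-omega threshold: {threshold:.3f} GeV")           # -> 1.721 GeV
print(f"on-shell momentum at the TS: {q_on*1e3:.0f} MeV")  # small, ~93 MeV
```

The TS at 1.731 GeV thus sits only about 10 MeV above the \(p\omega\) threshold, consistent with the closeness of the TS and the two-body threshold cusp noted later in the text.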
To calculate the decay amplitude for the Feynman diagram in Fig.1, we need the Lagrangian densities for the various vertices. For the \(J/\psi\to p\bar{p}\omega\) vertex, we adopt a contact interaction \[\mathcal{L}_{\psi\omega N\bar{N}}=g_{c}\bar{N}\psi^{\mu}\omega_{\mu}N, \tag{1}\] where \(g_{c}\) is the coupling constant and can be determined through the \(J/\psi\to p\bar{p}\omega\) partial decay width in PDG[44]. Note that up to now there is no evidence that resonance productions play an important role in the \(J/\psi\to p\bar{p}\omega\) decay. For the \(\omega\gamma\pi\) and \(\Delta\pi N\) vertices, we adopt the effective Lagrangians[46; 47; 48; 49]. \[\mathcal{L}_{\omega\gamma\pi} = \frac{eg_{\omega\gamma\pi}}{m_{\omega}}\varepsilon^{\mu\nu\alpha \beta}\partial_{\mu}\omega_{\nu}\partial_{\alpha}A_{\beta}\pi, \tag{2}\] \[\mathcal{L}_{\Delta\pi N} = \frac{g_{\Delta\pi N}}{m_{\pi}}\bar{\Delta}^{\mu}\left(\vec{\tau} \cdot\partial_{\mu}\vec{\pi}\right)N+\mathrm{h.c.}, \tag{3}\] where \(A\) represents the photon field and \(e\) is taken as \(\sqrt{4\pi/137}\). The coupling constants \(g_{\omega\gamma\pi}\) and \(g_{\Delta\pi N}\) appearing in the above Lagrangian densities can be determined through the corresponding partial decay width using \[\Gamma_{\omega\rightarrow\pi\gamma} = \frac{e^{2}g_{\omega\gamma\pi}^{2}}{12\pi}\frac{|\mathbf{p}_{\pi}|^{3} }{m_{\omega}^{2}}, \tag{4}\] \[\Gamma_{\Delta\rightarrow\pi N} = \frac{g_{\Delta\pi N}^{2}}{12\pi}\frac{E_{N}+m_{N}}{m_{\Delta}m_{ \pi}^{2}}|\mathbf{p}_{\pi}|^{3}, \tag{5}\] where \(|\mathbf{p}_{\pi}|\) and \(E_{N}\) denote the magnitude of the three momentum of the \(\pi\) and the nucleon energy in the rest frame of the mother particles, respectively. The obtained coupling constants are listed in Table 1. 
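As a numerical sanity check (not from the paper itself; the masses below are nominal PDG values and an assumption of this sketch), inverting Eqs. (4) and (5) with the widths and branching ratios of Table 1 reproduces the quoted couplings:

```python
import math

# Nominal PDG masses in GeV (assumed, rounded)
m_omega, m_pi0, m_N, m_Delta = 0.78266, 0.13498, 0.93827, 1.232
e2 = 4 * math.pi / 137  # e^2, with e = sqrt(4 pi / 137) as in the text

def p_cm(M, m1, m2):
    """Decay momentum of a two-body final state (m1, m2) in the rest frame of mass M."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2 * M)

# omega -> pi0 gamma: invert Eq. (4) for g_{omega gamma pi}
Gamma_w = 8.68e-3 * 8.35e-2  # total width x branching ratio (GeV), Table 1
p = p_cm(m_omega, m_pi0, 0.0)
g_wgp = math.sqrt(12 * math.pi * Gamma_w * m_omega**2 / (e2 * p**3))

# Delta -> N pi: invert Eq. (5) for g_{Delta pi N}
Gamma_D = 0.117 * 0.994      # total width x branching ratio (GeV), Table 1
p = p_cm(m_Delta, m_N, m_pi0)
E_N = math.sqrt(p**2 + m_N**2)
g_DpN = math.sqrt(12 * math.pi * Gamma_D * m_Delta * m_pi0**2 / ((E_N + m_N) * p**3))

print(f"g_omega-gamma-pi = {g_wgp:.2f}")  # Table 1 quotes 1.83
print(f"g_Delta-pi-N     = {g_DpN:.2f}")  # Table 1 quotes 2.07
```

Both values match Table 1 to the quoted precision.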
With the above Lagrangian densities for various vertices, we can straightforwardly obtain the amplitude for the triangle loop diagram in Fig.1 as \begin{table} \begin{tabular}{c c c c c} State & Width & Decay & Adopted & \(g\) \\ & (MeV) & channel & branching ratio & \\ \hline \(J/\psi\) & \(9.26\times 10^{-2}\) & \(p\bar{p}\omega\) & \(9.80\times 10^{-4}\) & \(7.30\times 10^{-2}\) \\ \(\omega\) & \(8.68\) & \(\pi^{0}\gamma\) & \(8.35\times 10^{-2}\) & \(1.83\) \\ \(\Delta\) & \(117\) & \(N\pi\) & \(0.994\) & \(2.07\) \\ \end{tabular} \end{table} Table 1: Coupling constants used in this work. The experimental decay widths are taken from Ref.[44]. \[{\cal M}^{T} = -i\frac{eg_{c}g_{\omega\gamma\pi}g_{\Delta\pi N}}{m_{\pi}m_{\omega}}\bar{u}^{\mu}_{\Delta}\varepsilon^{\nu}_{\psi}\varepsilon^{*\alpha}_{\gamma}\int\frac{{\rm d}^{4}q}{(2\pi)^{4}}p_{\pi,\mu}G^{\frac{1}{2}}(p_{p})\epsilon_{\beta\rho\lambda\alpha}p^{\beta}_{\omega}G^{1,\rho}_{\nu}(p_{\omega})p^{\lambda}_{\gamma}G^{0}(p_{\pi})F(p_{\pi})v_{\bar{p}} \tag{6}\] \[\equiv g\bar{u}^{\mu}_{\Delta}\varepsilon^{\nu}_{\psi}\varepsilon^{*\alpha}_{\gamma}{\cal M}_{\mu\nu\alpha}v_{\bar{p}},\] where \(u_{\Delta}\), \(v_{\bar{p}}\), \(\varepsilon_{\gamma}\) and \(\varepsilon_{\psi}\) are the spin functions of the \(\Delta\), \(\bar{p}\), photon and \(J/\psi\), respectively. \(G^{J}\)s denote the propagators of the intermediate particles with spin \(J\), which are defined as[49; 50; 51] \[G^{0}(q) = \frac{i}{q^{2}-m^{2}}, \tag{7}\] \[G^{1}_{\mu\nu}(q) = -\frac{i(g_{\mu\nu}-q_{\mu}q_{\nu}/m^{2})}{q^{2}-m^{2}}, \tag{8}\] \[G^{\frac{1}{2}}(q) = \frac{i(\not{q}+m)}{q^{2}-m^{2}}, \tag{9}\] where \(q\) and \(m\) are the four momentum and the mass of the intermediate state. 
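The structure of the spin-1 numerator in Eq. (8) can be illustrated numerically: contracting \(g_{\mu\nu}-q_{\mu}q_{\nu}/m^{2}\) with \(q^{\mu}\) gives \(q_{\nu}(1-q^{2}/m^{2})\), which vanishes for an on-shell \(\omega\). A small sketch (the specific spatial momentum is arbitrary, chosen for illustration):

```python
import numpy as np

# Minkowski metric, signature (+, -, -, -)
g = np.diag([1.0, -1.0, -1.0, -1.0])

# On-shell omega four-momentum q^mu with q^2 = m^2
m = 0.78266
p3 = np.array([0.1, -0.2, 0.3])
q_up = np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))  # contravariant q^mu
q_dn = g @ q_up                                          # covariant q_mu

# Numerator of the spin-1 propagator in Eq. (8): g_{mu nu} - q_mu q_nu / m^2
N = g - np.outer(q_dn, q_dn) / m**2

# q^mu (g_{mu nu} - q_mu q_nu / m^2) = q_nu (1 - q^2/m^2) -> 0 on shell
print(q_up @ N)  # ~ [0, 0, 0, 0]
```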
In the above amplitude, we have introduced a monopole form factor \(F(p_{\pi})\) for the intermediate \(\pi\) meson in order to make the loop integral convergent, which is taken as[52; 53; 54] \[F(p_{\pi})=\frac{m_{\pi}^{2}-\Lambda_{\pi}^{2}}{p_{\pi}^{2}-\Lambda_{\pi}^{2}}. \tag{10}\] Here we note that near TS the off-shell effects of the intermediate states in the loop are small, so we do not need to consider the form factors for other particles. Furthermore, the possible problem of an artificial pole introduced by the form factor need not be worried about here, as discussed in Ref.[12]. The cutoff \(\Lambda_{\pi}\) can be determined through an empirical formula \(\Lambda_{\pi}=m_{\pi}+\alpha\Lambda_{\rm QCD}\)[55; 41; 56], where \(\alpha\) is a dimensionless free parameter and \(\Lambda_{\rm QCD}=0.22\) GeV is the scale parameter of QCD. The \(\alpha\) is usually taken to be about unity, and in this work we take \(\alpha=1\) in the calculations. For the quasi three-body decay process, i.e. ignoring the decay of the \(\Delta\), the invariant mass distribution of the \(\gamma\Delta\) system can be obtained through the following formula[44; 14] \[\frac{{\rm d}\Gamma}{{\rm d}M_{\gamma\Delta}}\!=\!\frac{4m_{N}m_{\Delta}}{(2\pi)^{5}2^{4}m_{\psi}^{2}}\frac{\left|\mathbf{p}_{\bar{p}}\right|\!\left|\mathbf{p}_{\gamma}^{*}\right|}{3}\!\int\!{\rm d}\Omega_{\bar{p}}{\rm d}\Omega_{\gamma}^{*}\sum_{\rm spin}\!\left|{\cal M}^{T}\right|^{2}, \tag{11}\] where the quantities with or without \(*\) represent that they are defined in the center of mass frame of the \(\gamma\Delta\) system or the \(J/\psi\) rest frame, respectively. To further consider the influences of the finite width effects of the \(\Delta\) due to the \(\Delta\) decay as shown in Fig.2, we follow the approach used in Ref.[29] by introducing a mass distribution function for the \(\Delta\) in Eq.(11). 
Then we obtain \[\frac{{\rm d}\Gamma}{{\rm d}M_{\gamma\pi N}} = \int\frac{4m_{N}M_{\pi N}}{(2\pi)^{5}2^{4}m_{\psi}^{2}}{\rm d}\Omega_{\bar{p}}{\rm d}\Omega_{\gamma}^{*}{\rm d}M_{\pi N}^{2}\frac{\left|\mathbf{p}_{\bar{p}}\right|\!\left|\mathbf{p}_{\gamma}^{*}\right|}{3\pi} \tag{12}\] \[\times\frac{m_{\Delta}\Gamma_{\Delta}\cdot\sum\limits_{\rm spin}\left|{\cal M}^{T}\right|^{2}}{(M_{\pi N}^{2}-m_{\Delta}^{2})^{2}+(m_{\Delta}\Gamma_{\Delta})^{2}},\] where \(M_{\pi N}\) stands for the invariant mass of the decay products \(\pi N\), or the varying mass of the \(\Delta\). In this work, we will also discuss the spin effects due to the triangle singularity as studied in Ref.[43]. Here we shall study the SDMEs of the \(\Delta\), which will be calculated in the quasi three-body decay process by taking the \(M_{\Delta}\) at some fixed values and using the formula presented above. We shall consider the helicity states of the \(\Delta\) in the c.m. frame of the \(\gamma\Delta\) system. The spin density matrix element \(\rho_{\lambda\lambda^{\prime}}\) of the \(\Delta\) as a function of the \(\gamma\Delta\) invariant mass in the \(\gamma\Delta\) rest frame is defined as: \[\rho_{\lambda\lambda^{\prime}}(m_{\gamma\Delta})=\frac{\int{\rm d}\Omega_{\bar{p}}{\rm d}\Omega_{\gamma}^{*}\sum\limits_{\rm spin}{}^{\prime}{\cal M}_{\lambda}^{T}{\cal M}_{\lambda^{\prime}}^{T*}}{\int{\rm d}\Omega_{\bar{p}}{\rm d}\Omega_{\gamma}^{*}\sum\limits_{\rm spin}\left|{\cal M}^{T}\right|^{2}}, \tag{13}\] where \(\sum\nolimits^{\prime}\) represents the sum over all the spins apart from the \(\Delta\)'s, and \(\lambda\) and \(\lambda^{\prime}\) are the helicities of the final \(\Delta\). 
In this work, we will concentrate on the observable \(P_{\Delta}\) defined as \[P_{\Delta}=\frac{\rho_{11}-\rho_{33}}{\rho_{11}+\rho_{33}}, \tag{14}\] where \(\rho_{11}\) and \(\rho_{33}\) are the diagonal SDMEs of the \(\Delta\), corresponding to the probabilities of finding the \(\Delta\) in the helicity \(\frac{1}{2}\) and \(\frac{3}{2}\) states, respectively. Therefore, the \(P_{\Delta}\) describes the asymmetry of the probabilities of the \(\Delta\) having the helicities \(\frac{1}{2}\) and \(\frac{3}{2}\). Here we want to study the \(M_{\gamma\Delta}\) dependence of the \(P_{\Delta}\), so the angular dependence has been integrated (see Eq.13). According to the definition, the value of the \(P_{\Delta}\) can vary from \(-1\) to 1. If the TS mechanism dominates this reaction, we expect the \(P_{\Delta}\) to approach 1 near TS. The \(\rho_{33}\) can be extracted from the angular distribution of the decay products of the \(\Delta\), i.e. \(\pi\) or \(N\), in its rest frame through[57] \[W(\cos\theta)=\frac{1}{4}\left[(1+4\rho_{33})+(3-12\rho_{33})\cos^{2}\theta\right], \tag{15}\] and the \(\rho_{11}\) can be deduced from the relation \(\rho_{11}+\rho_{33}=\frac{1}{2}\). ## III Results and Discussion In this section, we shall study the TS mechanism in the reaction \(J/\psi\rightarrow\gamma\bar{p}\Delta\) and discuss its effects on both the invariant mass spectra of the final particles and the \(P_{\Delta}\). Using the package LoopTools[58], the loop integral in Eq.(6) can be evaluated numerically. Through Eq.(11), we can obtain the distribution of the differential decay width versus the invariant mass \(M_{\gamma\Delta}\) by taking \(M_{\Delta}=1.182\), 1.232 and 1.282 GeV individually. The corresponding results are depicted in Fig.3. As can be seen in the figure, the position of the peak caused by triangle singularity depends on the adopted mass of the \(\Delta\). 
Therefore, by selecting the events in different regions of \(M_{\pi N}\), the peak position in the invariant mass spectrum will change if the TS mechanism indeed plays an important role here. As discussed in Ref.[45], the moving peak observed here is mainly attributed to the fact that the position of TS is determined by kinematic conditions and depends on the invariant mass of the external particles of the triangle loop. Following the method in Ref.[45], by adopting the value of \(M_{\Delta}\) from 1.081 to 1.286 GeV, the position of the TS in \(M_{\gamma\Delta}\) can vary from 1.721 to 2.159 GeV. In fact, there are two kinds of singularities which are relevant here[45]. One is the normal two-body threshold cusp (TBTC), and the other is the TS. In the case of \(M_{\Delta}=1.182\) GeV (the red dashed line in Fig.3), the small bump around 1.73 GeV is caused by the TBTC. In the other cases there is only one peak structure, since the TS and TBTC are close to each other and their effects overlap. Here it is also worth noting that the width of the structure is rather narrow (\(\sim\)20 MeV), which is mainly ascribed to the narrow width of the intermediate states in the loop. The feature of the moving peak and the rather narrow width of the peak structure caused by the TS mechanism therefore offer clues for identifying the TS mechanism in experiment. Since the \(\Delta\) is unstable and has a relatively large width, it is also necessary to further discuss the effects of its finite width on the invariant mass spectrum. Based on the differential mass distribution formula in Eq.(12), we present the mass distribution as a function of \(M_{\gamma\pi N}\) in Fig.4 with the finite width effect considered explicitly. It can be found that, with the width effects of the \(\Delta\) included, the peak structure becomes wider due to an average over the effects of the moving TS. 
Even in this case, the width of the structure is only about 30 MeV, which is significantly smaller than the width of the \(N^{*}\) or \(\Delta^{*}\) in this energy region and makes it distinguishable from ordinary resonance contributions. We can also calculate the decay branching ratio of \(J/\psi\rightarrow\gamma\bar{p}\Delta\) using Eq.(11) with \(m_{\Delta}=1.232\) GeV, and we obtain \[\mathrm{Br}\left(J/\psi\rightarrow\gamma\bar{p}\Delta\right)=1.506\times 10^{-6}. \tag{16}\] When further considering the finite width of the \(\Delta\) with \(\Gamma_{\Delta}=0.117\) GeV, the decay branching ratio can be obtained through Eq.(12), and we get \[\mathrm{Br}\left(J/\psi\rightarrow\gamma\bar{p}\Delta(\rightarrow\pi N)\right)=1.058\times 10^{-6}. \tag{17}\] The production rate of this decay is within the measurable range at BESIII and also suitable to be explored at the Super Tau-Charm Facility. Next, let us focus on the spin effects induced by the TS mechanism on the \(\Delta\). According to the Coleman-Norton theorem[2], TS occurs when the triangle loop process depicted in Fig.1 takes place in a classical manner. Specifically, in the rest frame of the \(\gamma\Delta\) system, if the internal \(\omega\), \(\pi\) and \(p\) are on-shell simultaneously, their three-momenta are collinear, and the \(\pi\) moves in the same direction as the proton and can catch up with it to fuse into the \(\Delta\), then the TS develops. Therefore, at TS the final \(\Delta\) is predominantly produced by the intermediate \(\pi\) and proton moving in the same direction as the final \(\Delta\) in the \(\gamma\Delta\) rest frame.

Figure 3: The distribution of the differential decay width versus the invariant mass \(M_{\gamma\Delta}\) in \(J/\psi\rightarrow\gamma\bar{p}\Delta\). The red dashed, black solid and blue dot-dashed lines denote the results with \(M_{\Delta}\) = 1.182, 1.232 and 1.282 GeV, respectively.
In such a special condition, the \(\Delta\) should be exclusively produced with helicity \(\pm\frac{1}{2}\). To understand this result, it is helpful to consider the \(\pi p\) elastic scattering in the \(s\)-channel in the center of mass frame. In this process, even if the spin of the initial nucleon is unpolarized, the spin of the intermediate resonance is necessarily aligned when the spin of the intermediate resonance is larger than \(\frac{1}{2}\)1. In the \(J/\psi\to\gamma\bar{p}\Delta\) decay, since helicity is invariant under a boost from the \(\Delta\) rest frame to the \(\gamma\Delta\) rest frame, the above arguments also hold in the \(\gamma\Delta\) rest frame. On the other hand, when the special kinematic conditions are not satisfied, i.e. departing from the position of the TS, the helicity of the \(\Delta\) will not necessarily be \(\pm\frac{1}{2}\) anymore. These expectations can be verified by a numerical calculation of the \(P_{\Delta}\) defined above. In Fig.5, we show the \(P_{\Delta}\) versus the \(M_{\gamma\Delta}\) by taking the mass of the \(\Delta\) as 1.182, 1.232 and 1.282 GeV, respectively. As can be seen from the figures, the \(P_{\Delta}\) peaks appear at the corresponding TS positions, in accordance with the expectations for the various \(\Delta\) masses. Here, we want to note that such an \(M_{\gamma\Delta}\) dependence is quite distinct from the expectation of a simple resonance model, since in a resonance model the \(M_{\gamma\Delta}\) dependence mainly comes from the denominator of the resonance propagator and should cancel in the ratio in Eq.(14). Therefore, the spin observable \(P_{\Delta}\) can be used to verify whether the structure in the invariant mass spectrum is caused by TS or a resonance. 
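In practice, \(\rho_{33}\) can be estimated from the \(\langle\cos^{2}\theta\rangle\) moment of Eq. (15): integrating gives \(\langle\cos^{2}\theta\rangle=(7-8\rho_{33})/15\), so \(\rho_{33}=(7-15\langle\cos^{2}\theta\rangle)/8\), and with \(\rho_{11}+\rho_{33}=\frac{1}{2}\) Eq. (14) reduces to \(P_{\Delta}=1-4\rho_{33}\). A toy Monte Carlo sketch (the sample size, random seed, and "true" \(\rho_{33}\) are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def W(c, rho33):
    """Normalized angular distribution of the Delta decay products, Eq. (15)."""
    return 0.25 * ((1 + 4 * rho33) + (3 - 12 * rho33) * c**2)

# Toy sample for an assumed true rho33; for rho33 < 1/4 the maximum
# of W lies at cos(theta) = +-1, so W(1) is a valid rejection envelope
rho33_true = 0.10
c = rng.uniform(-1, 1, 2_000_000)
keep = rng.uniform(0, 1, c.size) < W(c, rho33_true) / W(1.0, rho33_true)
cos_th = c[keep]

# Moment estimate: <cos^2 theta> = (7 - 8 rho33)/15 under Eq. (15),
# hence rho33 = (7 - 15 <cos^2 theta>)/8, and Eq. (14) gives P_Delta = 1 - 4 rho33
rho33_est = (7 - 15 * np.mean(cos_th**2)) / 8
P_Delta = 1 - 4 * rho33_est

print(f"rho33 estimate: {rho33_est:.3f}")  # close to the input 0.100
print(f"P_Delta:        {P_Delta:.3f}")    # close to 1 - 4 x 0.10 = 0.600
```

For a pure helicity-\(\frac{1}{2}\) sample (\(\rho_{33}=0\)) this estimator returns \(P_{\Delta}=1\), the TS expectation.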
It is also interesting to notice that in the \(M_{\Delta}=1.182\) GeV case (red dashed line in Fig.5) there is a small bump at about \(M_{\gamma\Delta}=1.72\) GeV, corresponding to the \(p\omega\) threshold, in the \(P_{\Delta}\) distribution. As explained in Ref.[43], at the \(p\omega\) threshold the production of the \(\Delta\) with the helicities \(\pm\frac{1}{2}\) is also enhanced due to the kinematic condition. For the other cases, there is no such structure due to the closeness of the \(p\omega\) threshold and the TS. When considering \(\Delta\) decay, we expect that the peak structure of the \(P_{\Delta}\) should still exist but with a larger width. However, by selecting the events in different \(M_{\pi N}\) regions the phenomena discussed above should be observed in experiments. Footnote 1: In the center of mass frame, if we take the z-axis along the direction of the momentum of the initial proton, the magnetic quantum number of the z component of orbital angular momentum has to be zero due to the fact that the momenta of the \(\pi\) and \(p\) are along the z-axis. Therefore, by taking the spin quantization axis along the z-axis, the spin projection along the z-axis of the intermediate state can only be \(\pm\frac{1}{2}\) due to angular momentum conservation along the z-axis. For resonances with spin larger than \(\frac{1}{2}\), this means that the spin is aligned. Finally, when taking into account the \(\Delta\) decay, the decay process \(J/\psi\to\gamma\bar{p}\Delta(\to\pi^{0}p/\pi^{+}n)\) through the TS mechanism involves the \(\pi^{0}p\to\pi^{0}p\) or \(\pi^{0}p\to\pi^{+}n\) scattering as a subprocess. According to the Schmid theorem[59], in the \(\pi^{0}p\to\pi^{0}p\) case the contribution of the triangle loop diagram may be negligible compared to the corresponding tree level diagram. However, Ref.[60] demonstrates that the Schmid theorem holds strictly only in the limit \(\Gamma_{\omega}\to 0\). 
Furthermore, making a cut on the invariant mass \(M_{\gamma\pi}\) of the final states can also reduce the contribution of the tree diagram[41]. In practice, the effects due to the Schmid theorem can also be avoided in this decay by choosing \(\pi^{+}n\) as the final state in experiment. Therefore, we expect the main features of the TS mechanism predicted in this work should still be observable after considering the Schmid theorem. ## IV Summary In this work, we investigate the triangle singularity developed in the \(J/\psi\to\gamma\bar{p}\Delta\) process, where \(\omega\), \(\pi\) and \(p\) compose the internal triangle loop. According to our results, the TS mechanism may induce a structure with a width of \(0.02\sim 0.03\) GeV in the \(\gamma\Delta\) invariant mass spectrum. We find the position of the TS is dependent on the \(M_{\Delta}\) or the invariant mass of the final \(\pi N\). By adopting the value of \(M_{\Delta}\) ranging from 1.081 to 1.286 GeV, the position of the TS in \(M_{\gamma\Delta}\) can vary from 1.721 to 2.159 GeV. Therefore, by performing a cut of the invariant mass of the final \(\pi N\) the TS and the corresponding peak in the \(M_{\gamma\Delta}\) distribution should be shifted accordingly. If the TS mechanism indeed plays an important role, we also expect that the spin observable \(P_{\Delta}\) should take a relatively large value and have a peak versus the invariant mass \(M_{\gamma\Delta}\) near the TS. The predicted decay branching ratio for this process is \(\mathrm{Br}\left(J/\psi\rightarrow\gamma\bar{p}\Delta(\to\pi N)\right)=1.058\times 10^{-6}\), which should be accessible at BESIII and the future Super Tau-Charm Facility. ###### Acknowledgements. We acknowledge the support from the National Natural Science Foundation of China under Grants No.U1832160, the Natural Science Foundation of Shaanxi Province under Grant No.2019JM-025, and the Fundamental Research Funds for the Central Universities.
2308.02831
Affective Visualization Design: Leveraging the Emotional Impact of Data
In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, so far, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, first, we conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., why emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects in three dimensions, including design fields (where), design tasks (what), and design methods (how), to explore the design space of affective visualization design.
Xingyu Lan, Yanqiu Wu, Nan Cao
2023-08-05T09:33:23Z
http://arxiv.org/abs/2308.02831v1
# Affective Visualization Design: Leveraging the Emotional Impact of Data ###### Abstract In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, so far, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, first, we conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., _why_ emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects in three dimensions, including design fields (_where_), design tasks (_what_), and design methods (_how_), to explore the design space of affective visualization design. Information Visualization, Affective Design, Visual Communication, User Experience, Storytelling ## 1 Introduction In the 1880s, philosopher Friedrich Nietzsche wrote down his eloquent argument, "thoughts are the shadows of our feelings" [60]. Although scientists are still investigating the role and mechanism of emotion, there is a general consensus that emotion is an indispensable part of human intelligence and has a crucial influence on perception, cognition, and behavior [52]. 
In the visualization community, although traditional criteria of evaluating data visualization design emphasize rational metrics such as accuracy and efficiency, an increasing amount of work has recognized the importance of emotion in visualization [24, 45, 90]. For example, empirically, emotion has been observed as a commonly pursued factor in fields such as artistic visualization [90, 41, 86] and data storytelling [78, 47, 79, 44], capable of influencing user engagement and the recall of data [38, 47]. Theoretically, researchers have introduced concepts such as _emotional design_ and _affective design_ from domains such as industrial design and human-computer interaction (HCI) to help examine the relationship between data visualization and emotion [44, 10, 45]. Meanwhile, many data visualization projects that intend to leverage the benefits of emotion have been proposed. For instance, researchers have explored affective data visualization as a promising approach to communicating data to a wide audience [8, 65]. In the wild, design projects such as the U.S. Gun Death [64] have sparked heated discussion on the role of visualization in evoking emotion and doing good for society [13]. However, the development of this thread of research still faces three main challenges. Firstly, there is a need for a clear definition and characterization of the research scope. For example, the concept, _affective visualization_, has been used inconsistently in different papers (e.g., [96, 10]), thus hindering effective communication between researchers. Secondly, there is still controversy surrounding the legitimacy of considering emotion in visualization design. For example, from the perspective of traditional rationalism-driven design norms, emotion seems to be at odds with reasoning. Therefore, a frequently raised doubt is whether affective design will hinder the objectivity of data [24, 51]. 
However, to our best knowledge, no work has been done to systematically review the relevant arguments that are now scattered in various publications and legitimize _why_ emotion is important for visualization design. Thirdly, we still lack a clear picture of how to operationalize affective design on visualization, such as _where_ to apply such design, _what_ tasks the design can undertake, and _how_ to perform the design. Only by answering such questions can we sort out existing knowledge and practices in this growing field and suggest concrete strategies and techniques for designers. To address the above challenges, first, we conducted a literature review and identified three research threads that examined both emotion and data visualization. We clarified the differences between these research threads and set one specific thread, _affective visualization design_ (data visualization designed to communicate and influence emotion), as the focus of this work (Sec. 3). Next, we coded 109 papers concerning affective visualization design based on how they argued for the legitimacy of considering emotion in visualization design (in response to the _why_ challenge mentioned above) and categorized the arguments into five perspectives, including the perspectives of application, usefulness, rhetoric, sociology, and humanism (Sec. 4). Based on the 109 papers, we also identified 61 affective visualization design projects whose authors have explicitly stated their intent to communicate and influence emotion. We coded these design projects in three dimensions (i.e., design fields, design tasks, design methods) corresponding to the _where_, _what_, and _how_ challenges to explore the landscape of affective visualization design (Sec. 5). Last, we discuss our limitations and the research opportunities arising from this work (Sec. 6). ## 2 Background In this section, we review background knowledge about affective design and the debate about emotion in visualization design. 
### Affective Design

Affect, as defined in the APA Dictionary of Psychology [1], represents any experience of feeling or emotion, and is a critical component of human intelligence (for simplicity, we follow prior work [66] and use _affect_ and _emotion_ interchangeably in the following text). Over the years, researchers and practitioners have explored the mechanism and benefits of emotion and applied the knowledge to various designs. For example, in the 1980s and 1990s, a significant amount of psychological research was done to investigate the relationship between color and emotion [83]. Today, color, along with other emotional elements such as shapes, fonts, and iconography, is widely used in graphic design to create emotive posters or web pages [88]. In the field of industrial design, researchers started exploring _emotional design_ [61] in the early 2000s, focusing on eliciting emotion by manipulating the appearances and functions of industrial products such as cars and furniture. In the field of HCI, designing for emotion gained popularity especially after the conceptualization of _affective computing_ [66]. Ideas such as _affective design_ and _affective user interface_ were proposed and embraced by HCI researchers to develop affective chatbots, computer games, and VR environments [67, 11, 35]. In the visualization community, it was not until recent years that researchers began to highlight the opportunities of incorporating affective design in data visualization [44, 45, 10]. Following this call, this work conducts a systematic review of existing research and practices on affective visualization design and outlines the latest advances in this emerging cross-disciplinary field.

### The Debate About Emotion in Visualization Design

Although affective design is a well-recognized design paradigm in many fields, it remains underexplored in the visualization community [45].
As mathematical and factual depictions of information, data does not seem directly related to emotion. The mainstream visualization design norms highlight that a good visualization should be accurate, concise, and clear [82]. Under such a rationalist and minimalist design paradigm, the legitimacy of including emotional elements in data visualization faces challenges. For example, in the debate about chart junk, the use of emotion-laden embellishments was criticized as redundant and distracting [82]. Another common concern is that including emotion may reduce the objectivity of data. According to historian Porter [69], the pursuit of objectivity is a core spirit of modern science, and quantification is viewed as "a technology of distance", where people should separate their subjective emotions from objective facts. On the other hand, different opinions constantly exist. As early as 1995, a panel discussion was held at the IEEE Visualization Conference to reflect on "Is visualization struggling under the myth of objectivity?" [36]. In this panel, researchers discussed the tension between viewing visualization as objective science and the inherent subjectivity of visual design. One researcher argued that "visualization is resistant to the systematic evaluation and assessment procedures common to science; it still remains difficult to ascertain if a particular instantiation has been 'successful'". In recent years, more and more arguments, reflections, and critiques have been proposed. For example, Dragga [25] responded to the chart junk debate by arguing that although pictographs are statistically redundant, they are not emotionally redundant. Kennedy et al. [39] argued that the binary view of emotion and reason frames emotion as irrational and has caused emotion to be intentionally undervalued in data science for a long time. In the wild, many practitioners have created emotion-laden visualizations and defended the necessity of communicating emotion with data [47, 13].
However, so far, such arguments and practices remain highly scattered and have not been systematically reviewed by the visualization community. For example, no systematic work has been done to clarify why it is important to consider emotion in visualization design. Additionally, no summarization has been done to clarify what industries or needs can be served by affective visualization design and how it should be designed. To fill these gaps, this work aims to synthesize relevant research work and design practices about affective visualization design to clarify the values of emotion in visualization and derive design implications.

## 3 Corpus Collection

This section reviews the literature on affective visualization design. Below we first introduce how we collected and refined the corpus, and then provide an overview of the corpus.

### Methodology

To start with, we went through papers published in three well-recognized leading venues of data visualization research (IEEE TVCG, ACM CHI, and EuroVis) in the recent three years (2020-2022) to identify an initial set of qualified papers. As prior literature has suggested that affective visualization design is close to art practice [41, 90], we also went through papers published in IEEE VIS Arts Programs in the recent three years. Specifically, we considered a paper qualified if: (i) it is about data visualization, and (ii) it explicitly mentions emotion or emotion-related words (e.g., surprise, joy) in its keywords, title, abstract, or major findings. As a result, we identified 24 qualified papers [44, 45, 47, 48, 50, 51, 53, 57, 7, 7, 74, 54, 7, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94]. For example, Samsel et al. [74] designed a color tool for creating affective scientific visualization. Lan et al. [47] investigated how visualization design facilitates the communication of negative emotions in serious data stories.
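The two-part qualification rule above can be read as a simple screening predicate. The sketch below is purely illustrative — the field names, the (partial) emotion-word list, and the `is_about_visualization` flag are hypothetical stand-ins, not the authors' actual screening procedure:

```python
# Illustrative sketch of the two screening criteria; the field names and
# the abbreviated emotion-word list are hypothetical, not the authors' tooling.
EMOTION_WORDS = {"emotion", "emotional", "affect", "affective", "surprise", "joy"}

def is_qualified(paper: dict) -> bool:
    """Criterion (i): the paper is about data visualization.
    Criterion (ii): it explicitly mentions emotion-related words in its
    keywords, title, abstract, or major findings."""
    if not paper["is_about_visualization"]:
        return False
    text = " ".join([paper["title"], paper["abstract"],
                     " ".join(paper["keywords"]), paper["findings"]]).lower()
    return any(word in text for word in EMOTION_WORDS)

paper = {"is_about_visualization": True,
         "title": "A hypothetical paper on affective charts",
         "abstract": "We study emotion in animated charts.",
         "keywords": ["animation", "storytelling"],
         "findings": ""}
print(is_qualified(paper))  # → True
```

In practice, keyword matching of this kind only pre-filters candidates; as the section describes, each hit was still judged manually against the two criteria.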
Based on the 24 papers, we used the backward snowballing method (i.e., going through the reference lists of existing papers to identify new papers to include) to identify more potentially qualified papers and used the aforementioned two criteria to judge whether a paper is truly qualified. For example, some papers about user experience were deemed potentially relevant during the snowballing process. However, a further search in their texts showed that emotion is not a focal point of their research. Therefore, these papers were not included in our corpus. If two papers have highly overlapping content (e.g., a short paper and a full paper written by the same authors that deal with the same research problem), we kept the paper whose research is more comprehensive. We iterated the above process until no more qualified papers were identified.

### Concept Clarification and Corpus Refinement

When constructing the corpus, a challenge that emerged immediately was the ambiguity of research boundaries and concept usage. Although some papers have indeed studied both visualization and emotion and were tagged as qualified in the first cycle of paper collection, their research goals are different. In particular, we noticed that _affective visualization_ has been used in different contexts to refer to different meanings. To address this challenge and also clarify the research scope of this work, we categorized the papers into three research threads to elucidate their differences and refine our corpus.

**Emotion as the object being visualized.** This thread of research performs visual analytics on data that reflects emotion, such as heart rates, facial expressions, and gestures. In other words, emotion is the object being visualized and analyzed. For example, Zhang et al. [96] developed a visual analytics system to help mine the emotion in music videos and called the visualization in the system _affective visualization_.
Recently, when studying virtual reality, a group of researchers used _affective visualization_ to describe the visualization of people's affective states in immersive environments [67]. Other similar work includes visualizing the emotion in presentations and TED talks [89, 95].

**Emotion as a precondition of viewing visualization.** This thread of research views emotion as a precondition that exists before viewing visualization. For example, Harrison et al. [31] studied the priming effect of emotion by first using pictures to evoke positive or negative emotions from the participants, then asking the participants to view charts, and examining whether the emotions would influence the accuracy of graphical perception. Thoresen et al. [80] studied whether people's anxiety levels would influence their modes of reading maps.

**Emotion as a result of visualization design.** This thread of research studies emotion as a result of viewing data visualization and highlights the need of considering viewers' subjective experiences in visualization design. For example, Bartram et al. [10] defined _affective visualization_ as a type of visualization that "uses visual features to evoke a mood, feeling or impression" and proposed a set of color palettes that can effectively convey emotion in chart design. Lan et al. [44] presented Kineticharts, an animation design scheme for creating animated charts that express five emotions commonly desired by data storytelling.

In summary, we identified multiple threads of research that stand at the intersection of visualization and emotion. In pursuit of a better conceptualization of these research threads and to facilitate clearer dialogues between researchers, we propose: (i) As the concept, _affective visualization_, has been used in a broad sense and has already involved diverse work about emotion, its definition can be more inclusive and encompassing.
By referring to the definition of _affective computing_ [66], we define _affective visualization_ as a data visualization that relates to, arises from, or influences emotion. (ii) What this work intends to focus on is the third research thread (i.e., emotion as a result of visualization design). As the core issue that distinguishes this research thread from the other two is its emphasis on the role of design, we use the term _affective visualization design_ in the following texts to refer to a class of data visualizations designed to communicate and influence emotion. Next, we refined our corpus and kept only papers about affective visualization design. Note that we also found four papers that addressed hybrid research problems. For example, Qin et al. [70] proposed a bio-feedback system called HeartBees to visualize the emotion of the crowd. Meanwhile, the authors also intentionally used metaphorical shapes and animation to create an affective visual representation. As they also contributed knowledge about affective visualization design, we included the four papers in our corpus. The final corpus contains 109 papers in total (see [https://affectivevis.github.io/](https://affectivevis.github.io/)).

### Corpus Overview

In general, the number of papers concerning affective visualization design has grown remarkably over the years. Before 2006, at most one paper was published every year, but in 2022 alone, 14 papers were published. We then analyzed which research areas the papers belong to. We searched the publication venues of these papers on the Web of Science (WoS) and identified the research areas tagged to these venues. For example, the research area of IEEE TVCG and the proceedings of ACM CHI was tagged as _Computer Science_ on WoS. For 17 papers that were not searchable on WoS, we searched the home pages of their venues manually and coded their research areas based on how these venues introduced and defined themselves using WoS's taxonomy [3].
Two authors first coded independently and then compared their codes (Cohen's Kappa = 0.82) and discussed until achieving 100% agreement. For venues that have multiple area tags, we used their primary areas to perform the following statistics. Firstly, we counted the number of papers published in each research area and identified five areas that have more than one paper concerning affective visualization design, including _Computer Science_ (75), _Geography_ (18), _Communication_ (7), _Art_ (3), and _Sociology_ (2). Four papers were published in other areas, such as _Chemistry_ (1) and _Thermodynamics_ (1). In other words, we found that affective visualization design has attracted the interest of researchers from multiple communities. In terms of temporal trends, as shown in Fig. 1 (a), the community of _Geography_ has shown earlier interest in this research topic. For example, in 1991, Gilmartin and Lloyd [29] conducted a user study to examine how different map projections influence people's emotional involvement with places. In the early 2010s, a group of geographers proposed ideas such as _affective cartography_[33] and _affective geovisualization_[4], which pushed this line of research to its peak. It is also in the 2010s that the papers published in the area of _Computer Science_ began to mushroom. To date, papers belonging to this area have become the main part of affective visualization design research. Next, we coded the research types of these papers following the six categories (i.e., _Theoretical & Empirical_, _Applications_, _Systems & Rendering_, _Representations & Interaction_, _Data Transformations_, _Analytics & Decisions_) defined by the IEEE VIS conference [2]. As a result, as shown in Fig. 1 (b), most papers belong to the _Theoretical & Empirical_ category (76). These papers either proposed theoretical frameworks, critiques, and observations or examined the relationship between visualization design and emotion by conducting user studies. 
The second most identified paper type is _Representations & Interaction_ (28), which explored or proposed new visualization approaches to influencing emotion. A small number of papers belong to the _Applications_ category (5), which built visualization systems or tools to help generate or create affective visualization design. Chronologically, _Theoretical & Empirical_ has also been the dominant research type. However, in recent years, we see more novel methods (_Representations & Interaction_) and design tools (_Applications_) being proposed to enrich the toolkit of affective visualization design and lower its authoring barrier.

## 4 Why Affective Visualization Design

As mentioned in Sec. 2.2, as a design paradigm that embraces subjective experiences, affective visualization design still faces controversy. To address the controversy, we coded the 109 papers in our corpus following the methodology of thematic analysis [14] with the goal of distilling existing arguments in support of affective visualization design. Two authors first read the papers one by one and marked any sentences that explain the authors' justifications for studying affective visualization design. Then, we went through these sentences and generated codes independently to summarize the rationales behind the arguments. To further identify different perspectives of argumentation, we also grouped similar codes into high-level themes. For example, Kosara [41] argued that emotion is important for the domain of data art, because artists usually value the sublime quality of design more than the pragmatic quality. Therefore, this argument was coded as "emotion is prioritized in some domains" and was assigned to a theme called "from the perspective of application". Then, we met to compare our codes, unify the wording of similar codes, and resolve disagreements through iterative discussion. Last, we tagged the papers again using the final coding scheme.
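Both coding passes rely on a compare-then-discuss procedure whose agreement is quantified with Cohen's Kappa (e.g., Kappa = 0.82 when coding research areas in Sec. 3.3). A minimal sketch of the statistic, using hypothetical labels from two coders rather than the study's actual data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders of the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected from each coder's label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical research-area codes assigned by two authors to six venues
coder_a = ["CS", "CS", "Geo", "Art", "CS", "Geo"]
coder_b = ["CS", "CS", "Geo", "CS", "CS", "Geo"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # → 0.7
```

Kappa discounts the agreement two coders would reach by chance alone, which is why it is preferred over raw percentage agreement for the kind of categorical coding described here.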
In total, we identified seven different arguments categorized into five perspectives.

### From the Perspective of Application

As data visualization is serving a wide range of domains and users, the design requirements of data visualization are becoming increasingly diverse. In our corpus, 76 papers argued for the legitimacy of affective visualization design from the perspective of realistic needs.

**Emotion is valued by certain domains.** Different domains have different needs, values, and priorities. For example, for data artists, emotion is a vital channel of self-expression and can spark imagination and creativity [19, 90]. For the media industry, communicating emotion through visualization design is also common, and multiple studies have observed the integration of affective visualization design in various visual stories published by news agencies, governments, or non-profits [45, 46]. For example, during the COVID-19 pandemic, many data stories were created to evoke shock, fear, or grief for deaths [47] (Fig. 2 P1). In addition, emotion is also valued by domains such as education and entertainment [16, 30]. For example, when investigating the popularity of Wordle (a web-based tool for visualizing texts), Viegas et al. [87] concluded that "emotional resonance seems to be a key reason for preference of wordles over tag clouds" because it enables the public to play with data and customize their own visualizations.

**New application scenarios are emerging.** Several researchers justified the necessity of considering emotion in data visualization design based on new technology trends and market needs. For example, in 2004, given the trend that "the contents of digital archives that permeate our daily lives become more emotionally charged", a set of systems that visualize personal data were developed [86].

Fig. 1: The distribution of the 109 papers in our corpus over time in terms of their (a) research areas and (b) paper types.

Around 2010, Cartwright et al.
[17] and van Lammeren et al. [85] proposed that the maturity of Web 2.0 technology and 3D modeling technology can expand the functions of maps. For example, people can now use maps to fulfill more affective intents, such as annotating places with their cherished stories or sharing the stories with beloved ones.

### From the Perspective of Usefulness

Secondly, 67 papers justified the role of emotion in visualization design by providing evidence from empirical studies or neuroscience to demonstrate that emotion is useful and can bring benefits.

**Emotion is not the enemy of rationality.** In response to the challenge that emotion may hinder rational thinking about data, Kennedy and Hill [38] cited various arguments from sociologists and philosophers to criticize the stereotype of the "reason/emotion binary". Aitken and Craine [21, 4] argued for the legitimacy of conveying emotion in cartography by citing work from neuroscience, which has proven that emotion is not separate from, but is a huge part of, rationality. When studying the affective design of infographics, Lan et al. [45] referred to the dual-process theory in psychology to demonstrate that emotion is heavily involved in thought processes and can co-exist with rationality. Similarly, Kostelnick [42] and D'ignazio et al. [24] agreed that emotion can complement rather than contradict the logical analysis of data. In several user studies [44, 47], researchers indeed observed that affective visualization design was not at odds with data comprehension. Anderson and Robinson [7] found that maps whose color palettes convey affective messages congruent with their content were perceived as more appropriate and less confusing than those that use incongruent colors, showing that emotion can work with or even amplify rationality (Fig. 2 P2).

**Emotion brings additional benefits.** First, emotion has been found useful for enhancing user engagement with data. For example, Lan et al.
[44] found that in data storytelling, emotion showed a significantly positive correlation with engagement-related metrics such as focused attention, enjoyment, interest, and likability. Qualitatively, Kennedy and Hill [38] observed that people were more likely to spend time with a data visualization when being emotionally aroused by its design or content. In addition, emotion has also been observed to enhance people's perceived connection with data [29, 44] and influence how they memorize data [47]. Emotion also influences people on a deeper level, such as changing attitudes, values, or behaviors. For example, Boy et al. [13] and Morais et al. [58] cited prior work from psychology and sociology to illustrate that emotions such as empathy can help people perceive others' misery and promote prosocial behavior. Lan et al. [47] used a set of serious data stories that convey negative messages about COVID-19 as stimuli and found that appropriate design methods helped evoke deep emotions and strengthened contemplative thoughts and self-reflection.

### From the Perspective of Rhetoric

Seventeen papers argue that data visualization design is a rhetorical device that always integrates subjective factors.

**Objective data visualization does not exist.** In response to the challenge that adding emotion to visualization may make data less objective, many researchers argued that no data visualization is truly objective, because data visualization design, as all design forms, is rhetorical [15, 24, 25, 39, 51, 59]. For example, D'ignazio and Klein [24] took two charts posted by the New York Times as examples (Fig. 2 P3); the two charts both look concise, "objective", and are high in the data-ink ratio, but they deliver different messages (the blue chart shows the unemployment rate in the US decreased during the Democrats' administration while the red chart shows the rate is constantly high from a Republican perspective).
D'ignazio and Klein discussed the rhetorical intentions behind these designs and argued that charts that create the "illusion of objectivity" can be even more deceptive than affective charts. Similarly, Kennedy et al. [39] thought the use of clean layouts and geometric shapes are common conventions that create "the aura of objectivity" in data visualization. Recently, Lee-Robins and Adar [51] also argued that data is not neutral.

Fig. 2: Examples of the five argumentative perspectives: (P1) affective data stories published by news media during the COVID-19 pandemic [47], (P2) emotional color palettes can facilitate the cognitive interpretation of maps [7], (P3) two objective-looking but rhetorical charts [24], (P4) a vintage pictograph that practiced Victorian values [42], (P5) a "cruel pie" about human fatalities [25] and the call for data humanism [54].

### From the Perspective of Sociology

The arguments of nine papers emphasize that the current tendency to avoid emotion in visualization design is not a golden rule, but is constructed by historical context, culture, and social values.

**The emphasis or suppression of emotion is a social phenomenon.** Emotion was once highlighted in history. For example, Kostelnick [42] found that emotion played an important role in chart design in the 1800s, when Romantic values (which emphasized individualism, emotion, and imagination) dominated all forms of design. At that age, visualization designers were also bold to use lavish color, pictorial elements, or epic narratives to enhance emotional appeal (e.g., Fig. 2 P4). However, after the Enlightenment, rationalism became the dominant value, breeding the tendency to undervalue emotion [38, 42].
In a study, van Koningsbruggen and Hornecker [84] found that participants described feeling that they were not allowed to emotionally connect to visualizations and that emotion was intentionally gotten rid of, and they argued that "a form of post-hoc rationalisation takes place, which obscures people's initial connections and affective responses to visualisations".

### From the Perspective of Humanism

Five papers argue from the angle of humanism, which claims that data visualization should be humane, ethical, and do good to society.

**Data should not be cold and cruel.** For example, Dragga [25] pointed out that statistical charts designed in a minimalistic style can be emotionless and ignore the real people behind the data. He took several charts about fatalities and disasters as examples, criticizing them as "cruel bars" and "cruel pies" (Fig. 2 P5, left). Likewise, McCleary [56] thought that a conventional map design would appear indifferent if it visualizes data about genocide. In recent years, some researchers [72, 6, 15] have recognized or applied the idea of _data humanism_ proposed by visualization designer Giorgia Lupi [54] (Fig. 2 P5, right). In Lupi's original essay, she called, "to envision ways to use data to feel more empathic, to connect with ourselves and others at a deeper level...we have to bring data to life - human life."

## 5 Exploring the Landscape of Affective Visualization Design: Where, What, and How

In this section, we analyze design projects that have practiced affective visualization design and explore the design space.

### Methodology

Based on the 109 papers we collected earlier, first, we went through all papers and kept those that are affective visualization design projects themselves (i.e., design studies). As a result, 36 out of 109 papers belong to this situation. For example, Concannon et al. [20] designed a data film to enhance the emotional closeness between people and data. Lan et al.
[44] proposed a design scheme called Kineticharts for creating animated charts that express emotions. Next, we also searched the reference lists of the 109 papers to identify more in-the-wild design projects that are potentially qualified. For example, U.S. Gun Death [64] is a design project cited by multiple papers [47, 51] as a representative example of affective visualization design. However, although such design projects were considered affective by researchers, we lacked first-hand justifications and explanations from their authors, thus preventing us from understanding their true design intents or tasks. Therefore, for each of the design projects, we searched its publication page, the author's personal website, or any interviews or talks that introduced the project to double-check the design intents. For example, we found that the designers of U.S. Gun Death have indeed recognized their affective intents by saying that "it's a massive collection of human emotion hidden within rows and rows of numbers" [63] and "we need people to react, we need people to sort of get fired up about things, get excited about things, and want to make a change in the world" [68]. Therefore, we viewed it as a qualified project of affective visualization design. After the double-checking process, we identified 25 in-the-wild design projects whose designers have explicitly stated their intent to communicate and influence emotion using data visualization. Combined with the 36 qualified projects identified earlier, we collected 61 projects in total as exemplars of affective visualization design. Then, based on the collected papers or materials where the designers explain their design intents, considerations, and procedures, we coded the 61 design projects in three dimensions: (i) design fields, (ii) design tasks, and (iii) design methods.
Specifically, (i) concerns _where_ a design problem is formed and originated, (ii) is about _what_ goals a design intends to achieve, and (iii) concerns _how_ to perform a design. For (iii), we followed the design analysis of narrative visualization [77] and coded both genres and techniques. Specifically, _genre_ refers to a high-level category of design characterized by a particular style or form, while _technique_ refers to low-level methods used to create that design. Similar to the coding methodology in Sec. 4, two authors first independently went through the design projects and generated codes and themes regarding the three dimensions. To ensure appropriate wording of the techniques, we familiarized ourselves with conventional concepts in design literature. For example, _anthropomorphism_ has been used by visualization researchers [34, 53, 13] to describe the technique of showing the concrete humans behind data. After this round of coding, we achieved consensus on 11 (61%) techniques, such as _color_, _typeface_, and _anthropomorphism_. For the remaining techniques, we discussed and refined their wording until achieving 100% consensus. For example, one code initially used the conventional concept, _animation_, to describe the technique that utilizes motion to convey emotion. However, given that not all the design projects in our corpus are displayed on a screen, _animation_ is not suitable to describe the kinetic effects designed for genres such as installations. To enhance the generalizability of this design technique, we finally refined its wording as _kinetic movement_.

### Design Space

After the coding process, we derived the design space in Tab. 1. The table contains 61 rows, each representing one affective visualization design project. The columns represent the three dimensions mentioned above and are divided into several sub-columns. The table uses categorical hues to represent different genres.
For other cells, the table uses a check mark colored in grey to indicate the presence of a particular factor or class. Examples of these design projects can be seen in Fig. 3.

#### 5.2.1 Where: Design Fields

Design fields concern where a design need originated, thereby indicating the problem that the design seeks to address and suggesting the scenarios for its application. Among the 61 design projects, 16 (26%) projects deal with issues about _environmental sciences & ecology_. For example, Fig. 3 A, M, N were motivated by global warming. Fig. 3 O and Q address the issue of pollution and Fig. 3 E presents the facts of water waste. 15 (25%) projects address _social issues_, such as public security, welfare, injustice, or inequality. For example, Fig. 3 C and L visualize the gun problem in the US. Fig. 3 B and I visualize the deaths in wars. Fig. 3 H reveals the problem of sexual harassment, and Fig. 3 K focuses on the care of disadvantaged groups. 11 (18%) projects are about _health & well-being_, such as illness and mental stress. As an example, Fig. 3 P visualizes suicide data to showcase the problem of depression. Other fields identified include _news & media_ (7%), _daily life_ (5%), _business_ (3%), _urban studies_ (3%), and _history_ (3%). Four projects (7%) address mixed topics and two projects (3%) have not specified which fields their design problems originated from.

#### 5.2.2 What: Design Tasks

interest, enticing likes or sharing) through eliciting emotion. For example, Iraq's bloody toll (Fig. 3 B) is an infographic that visualizes the number of soldiers who died in the Iraq war. It turns a red bar chart upside down as if blood is dripping down. As explained by the designer, the design choice was made deliberately to create a striking visual and draw readers into the graphic [75].

**Experiment** (N = 14). This task focuses on realizing the needs of designers themselves to test new design approaches, exercise novel ideas, or stretch the boundaries of affective design.
For example, given the new possibilities posed by VR technology, Ivanov et al. [34] created a data story about gun shootings (Fig. 3 C) to explore the elicitation of emotional responses through an immersive environment.

**Provoke** (N = 12). This task guides people to think, reflect, contemplate, and inspire new thoughts through emotion. For example, EdiPulse [27] (Fig. 3 D) is an edible visualization designed to "provoke critical discourse" about physical activities. The designers translated users' heart rate data into 3D-printed slogans, emojis, and flower-like graphs using chocolate to create a playful and fun experience.

**Advocate** (N = 11). This task recommends ideas, values, or proposals to viewers and regards eliciting emotion as a means to persuade participation and action. For example, The Water We Eat (Fig. 3 E) is an interactive website that advocates action in saving water. This web page visualizes the huge amount of water consumed every day and strongly urges viewers to change their living habits [32].

**Socialize** (N = 11). This task views emotion as an essential factor for social activities such as sharing, gift-giving, and opinion exchange. For example, in the Dear Data project [55], two designers drew visualizations on postcards based on their personal data collected every week and sent the postcards to each other (e.g., Fig. 3 F shows postcards drawn to depict "A week of envy"). The designers used these visualizations to collaborate and make friends [23].

**Heal** (N = 5). This task intends to increase the well-being of people through the elicitation of positive emotions and the reduction of stress or pain. For example, StressTree (Fig. 3 G) metaphorically maps people's stress levels to the colors of a tree, thus guiding them to regulate their breathing to make the tree look healthier. This visualization was designed to help people relax and reduce negative thoughts [93].

**Empower** (N = 4).
This task focuses on making people (especially vulnerable populations) stronger and more confident in expressing their feelings, controlling their life, and claiming their rights. For instance, The Academia is Tied in Knots (Fig. 3 H) surveyed women who have experienced sexual harassment and transformed the data into an interactive interface (the "knots" can be unfolded to show their testimonies) to help bring the sensitive and uneasy stories to the foreground [26]. **Commemorate** (N = 3). This task makes people honor and remember someone or something. For example, Blood Swept Lands and Seas of Red [22] (Fig. 3 I) installed 888,246 ceramic poppies (each representing a dead soldier) engulfing the Tower of London during the 100th anniversary of Britain's participation in World War I. Its designer said, "we couldn't think of a more appropriate and fitting tribute to the remembrance of those that fell during the World Wars... the Tower of London plays the perfect backdrop to this humbling tribute." [81]. **Archive** (N = 2). This task is about the documentation of personally meaningful experiences and feelings. For example, to archive her walking experience in a park, Gardener [28] captured data (e.g., colors, physical objects such as leaves and petals) that triggered her sensory experiences at different locations along the walking route, and then visualized the data as space-time cubes (Fig. 3 J) to facilitate subjective interpretation and recollection. #### 5.2.3 How: Design Methods This section reports the methods of affective visualization design, including high-level genres and low-level techniques. **Genres.** As illustrated in Fig. 4, we identified a total of six genres of affective visualization design. The first one is the _interactive interface_, where users can interact with (e.g., click, scroll, rotate) data visualization on web pages or visual systems. For example, Fig. 3 C, E, H, L all belong to this genre.
Figure 3: Examples of affective visualization design projects.

The second one is the _video_, which presents data visualization dynamically as a series of continuous frames. For example, Fig. 3 K is a data film produced to elicit empathy for young adults who leave the care system in England. The third genre is the _static image/painting_, such as infographics drawn in computer software and graphs drawn by hand (e.g., Fig. 3 B, F). The fourth genre, _installation_, refers to the data-driven display installed at places such as museums, streets, and galleries. For example, Fig. 3 A, I, Q were all installed outdoors and can be encountered by any passers-by. The fifth genre is called _artifact_, which denotes the physical object made by hands or devices (e.g., a 3D printer). Compared to installations, artifacts are often small objects that are casually used or placed, such as the data-driven scarf in Fig. 3 N and the ice cubes in Fig. 3 J. The last genre is called _event_, namely an activity intentionally designed for people to participate in and experience data. For example, Fig. 3 M is a "walking data visualization" which asked the participants to walk along the future coastline of a city. Many participants experienced a sense of urgency and concern because the streets they walked on will be flooded by sea water in the future. Overall, compared to the design genres identified in the previous decade [77], the genres of affective visualization design have significantly expanded and diversified. **Techniques.** We identified a total of 18 low-level design techniques used by the 61 affective visualization design projects to influence emotion. We categorized these techniques into four strategies (S1-S4). S1: Sensation. Appealing to various senses is ranked as the most applied strategy for affective visualization design.
In terms of specific design techniques, the use of _realistic/metaphorical imagery_ such as photographs, figurative data glyphs, and visual metaphors was mentioned extensively by designers. For instance, the installation in Fig. 3 A uses the imagery of ice to evoke viewers' intuitive perception of glaciers. Fig. 3 B presents a bar chart metaphorically to create an image of blood. Fig. 3 P uses figurative symbols to represent different types of suicide and deliberately composes them into a peaceful landscape. Fig. 3 I utilizes the symbolic meaning of poppies to mourn for deaths. Apart from this technique, the use of _color_, _kinetic movement_, _sound_, _shape_, and _typeface_ has also been mentioned by designers as approaches to influencing emotion. A few projects have also leveraged the emotional quality of _smell_ and _taste_. S2: Narrative. The second most applied strategy for affective visualization design is narrative, namely using storytelling methods to communicate data effectively. The most mentioned technique in this category is _personalized setting_. Given that people generally show stronger emotion toward things that relate to themselves, designers have explored various means of improving the sense of proximity. For example, Fig. 3 K lets viewers select their own location and then shows them a customized data film based on the chosen location. The second most identified technique in this category is _anthropomorphism_, namely showing the real people behind data [57]. For example, Fig. 3 H uses one glyph to show one testimony from a victim of sexual harassment, and Fig. 3 C allows viewers to "meet" the casualties of gun shootings and read their identities, thus transforming abstract numbers into people with flesh and blood. In addition, designers have adopted techniques such as _narrative structure_ and _wording/phrasing_ to manipulate the flow of data narrative and augment emotion through textual expressions. S3: Behavior.
Behavioral design is a design method that seeks to shape or influence human behavior. During our analysis, we identified three main techniques of behavior design. The first technique asks people to _manipulate/operate_ a data visualization. For example, the designers of Fig. 3 O prepared a set of river lanterns that encoded the data of river pollution in a community and invited the residents to release the lanterns into the river, thus engaging the residents emotionally with the data. The second technique asks people to _create/build_ a data visualization by themselves, such as the knitting of data-driven scarves in Fig. 3 N. The third technique makes people experience data during a _walk/wander_. For example, Fig. 3 M asked people to walk along the route of the future coastline to subjectively experience climate change, and Fig. 3 Q asked people to step into parks, streets, and neighborhoods to monitor air quality in an embodied manner. S4: Context. Context means the surrounding environment when people view or interact with a data visualization. For example, a viewer may read a data visualization from home or on the road, which can distance him/her from the content being portrayed in the visualization. To cope with such disconnection, several design projects put a visualization at a _situated place_ (i.e., somewhere related to the content being presented in a data visualization design). For example, apart from creating visualizations about river pollution, Fig. 3 O also brought local residents to the polluted river to heighten the emotional impact. Fig. 3 I chose the Tower of London as the place to install the poppies because it is a representation of national honor and is related to the theme of the data visualization, namely sacrificing for the country. We also found three projects that used VR technology or large screen projection technology to create an _immersive environment_, such as Fig. 3 C. 
#### 5.2.4 Design Space Observations By observing how the cells distribute in Tab. 1, we summarize implications from three main aspects. **Task distribution.** The most mentioned tasks of affective visualization design are _inform_ (mentioned by 57% of all projects) and _engage_ (44%), followed by _experiment_ (23%), _provoke_ (20%), _advocate_ (18%), _socialize_ (18%), _heal_ (8%), _empower_ (7%), _commemorate_ (5%), and _archive_ (3%). Most projects claimed hybrid tasks, and the average number of tasks undertaken per design project is 2.03 (SD = 1.06). Two projects claimed as many as five tasks. For example, the Dear Data [55] project (Tab. 1 _#35_) was initiated by the motivation to make friends (_socialize_) and _experiment_ with how to visualize personal data through hand-drawing postcards. During the drawing process, the two designers also had more reflections on their own lives (_provoke_) and used the drawing as a therapy to face themselves honestly (_heal_). Meanwhile, they also published the postcards to _engage_ more people with data visualization [23]. We also calculated how these design tasks interact with different design fields. For example, in the field of _environmental sciences & ecology_, 69% of projects reported the _inform_ task, 50% reported the _engage_ task, and 31% reported the _socialize_ task. In the field of _social issues_, 53% of projects reported the _inform_ task, 47% reported the _engage_ task, and 40% reported the _advocate_ task. When designing for _daily life_, 67% of projects intended to _archive_, _provoke_, and _socialize_. In general, although _inform_ and _engage_ are the dominant tasks in most cases, different fields still show varied priorities in design tasks. **Genre distribution.** The most popular affective visualization design genre in our corpus is the _interactive interface_ (adopted by 44% of all projects), followed by _installation_ (25%), _artifact_ (18%), _event_ (13%), _static image/painting_ (11%), and _video_ (10%).
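The distribution figures above amount to simple tallies over the coded corpus. As an illustration only (on a hypothetical mini-corpus with made-up project labels, not the paper's actual data), the computation looks like:

```python
from statistics import mean, stdev

# Toy re-computation (hypothetical projects, not the paper's corpus) of the
# kind of summary statistics reported above: tally how many projects mention
# each task, and compute tasks-per-project statistics.

projects = {
    "p1": ["inform", "engage"],
    "p2": ["inform"],
    "p3": ["socialize", "experiment", "provoke", "heal", "engage"],
    "p4": ["inform", "advocate"],
}

counts = {}
for tasks in projects.values():
    for t in tasks:
        counts[t] = counts.get(t, 0) + 1

# Fraction of projects mentioning each task (cf. the percentages above).
share = {t: c / len(projects) for t, c in counts.items()}
per_project = [len(ts) for ts in projects.values()]

print(share["inform"], mean(per_project), stdev(per_project))
```

With real data, `share` yields the per-task percentages and `mean`/`stdev` the "2.03 (SD = 1.06)" style figures.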
Although most design projects belong to a single genre, we identified 12 projects that belong to hybrid genres. For example, three projects (Tab. 1 #11-13) published two versions of their design, including an _interactive interface_ and a _video_. Three projects (Tab. 1 #44-46) created a data-driven _installation_ while organizing an outdoor _event_. Project #47 is the only design work that spans three genres. Its designers first set up a data-driven _installation_ (i.e., a cluster of balloon sensors that monitor air quality) at public sites, and then distributed the balloons to volunteers and asked them to wander in the city and play with the balloons (_event_). In addition, they also organized a workshop for people to create the balloons by themselves (_artifact_).

Fig. 4: Six genres of affective visualization design.

As for the interaction between genres and design fields, _interactive interface_ is the primary genre within most fields, such as _news & media_ (adopted by 75% of projects within this field), _social issues_ (67%), and _health & well-being_ (45%). However, for _environmental sciences & ecology_, compared to _interactive interface_ (19%), the dominant genre is _installation_ (63%). One reason may be that environmental issues are closely related to the physical world, so many designers decided to invite people to experience data in outdoor space rather than just showing data on screens. **Technique distribution.** Overall, sensation is the most applied strategy to elicit emotion, and among all the techniques within this strategy, the most adopted techniques are _realistic/metaphorical imagery_ (adopted by 52% of projects), _color_ (28%), _kinetic movement_ (26%), and _sound_ (21%). The primary sense being designed for is the visual sense, followed by the auditory sense. The strategy of narrative has also been practiced by many projects, with _personalized setting_ (26%) and _anthropomorphism_ (16%) being the most frequently applied techniques.
The two techniques both serve to enhance the individuality of data, either by making it relevant to viewers themselves or by showing other people behind data. It has also been observed that some techniques are preferred by specific genres. For example, genres such as _interactive interface_, _video_, and _static image/painting_ often evoke emotion through sensation and narrative. By contrast, genres such as _event_ and _artifact_ tend to appeal more to sensation and behavior. This distinction may be attributed to the nature of these genres, as the former three genres are more digital, whereas the latter two genres are more physical. However, we did not find an absolute boundary between these design genres in terms of using design techniques. For example, digital genres can use VR technology to create a sense of physical context (e.g., Tab. 1 #6) or employ interactive features to manipulate user behavior online (e.g., Tab. 1 #16). Likewise, physical artifacts can also integrate digital elements (e.g., sensors, 3D printing) to dynamically generate personal narratives (e.g., Tab. 1 #56, 57). These design practices suggest that the fusion of online and offline experiences and the exploration of cross-media design are taking place in affective visualization design. ## 6 Discussion and Future Work Below we discuss our limitations and future research opportunities. ### Limitations In the corpus collection stage, we used the snowball search method to collect as much qualified literature as possible. However, the corpus may not be exhaustive as snowball searching relies on the references cited in the initial set of papers, potentially overlooking less-cited sources that are not referenced in the initial set. Secondly, the 61 design projects used for characterizing the design space of affective visualization design are either research papers themselves or in-the-wild projects cited by research papers.
Therefore, although they are typical examples of affective visualization design in the eyes of researchers, they are by no means exhaustive nor representative of the full landscape of affective visualization design practice. We believe that more case studies and interviews with practitioners can be conducted to help identify and analyze affective visualization design practice in the wild. Also, alternative analysis perspectives or taxonomies can be proposed to further advance our understanding of such design practice. ### Opportunities For Future Research We identify nine specific research opportunities (marked as O1-O9) categorized into three main avenues. **Scientific evaluation of affective design techniques.** The design space in Sec. 5 indicates a set of design techniques that designers thought could influence emotion. However, as shown in prior work [47], sometimes there will be a gap between designers' expectations and people's real experiences. Among the identified techniques, although some have been validated by controlled user studies as being effective in influencing emotion, such as color and animation [44, 10, 46], a considerable number of techniques still lack evaluation. Therefore, more future work needs to be done to (O1) assess the relationship between visualization design techniques and emotion. Second, controversy still exists in terms of the effectiveness of certain techniques. For example, anthropomorphism was found insignificant in augmenting empathy by Boy et al. [13], while it showed a small effect in another larger-scale study [58]. In one study of map design, people got significantly more emotionally involved with locations close to themselves [29], but in another study, such a personalized narrative showed no significant effect [20]. To ensure the reliability of research findings, future studies can (O2) replicate existing results in diverse settings and cross-validate conflicting outcomes.
Third, as more and more state-of-the-art technology is incorporated into affective visualization design, the design space of affective visualization design will go on to expand. It is important to study (O3) how novel techniques such as AR/VR/MR, 3D printing, and multi-modal interaction (e.g., sound, smell) influence emotion and how they interact with other design elements. Fourth, more efforts can be put into enhancing the rigor of research methods. Researchers can (O4) conduct studies with more diverse stimuli and larger sample sizes, and try alternative measures (e.g., neuro-physiological measures) to understand emotion more in depth. **Embracing more application scenarios and users.** Through this work, we are excited to see affective visualization design being applied to various domains and being used as a novel approach to solving domain problems and serving realistic needs. We see journalists, non-profits, and feminists use data visualization to raise awareness of social injustice. We see environmentalists and geographers use data visualization to call for green lifestyles. We see artists create data visualization to touch the tender part of human hearts. We see scientists transform complex, cold data into vivid, affective representations. In the future, we believe that (O5) the idea of affective visualization design can be introduced to a wider range of disciplines and industries, leading to more interesting applications. Another important lesson we have learned from this work is that, rather than seeing people as passive information receivers, many design projects have treated people as active agents or collaborators. Such a paradigm has been framed by some researchers and designers as _participatory action research_, _citizen science_, or _grass-roots movements_ [43, 65, 8]. Under this paradigm, designers in fact relinquish some design rights to people and allow them to participate in or even dominate the process of data collection and visualization.
In the future, we may further (O6) explore the approaches to stimulating people's sense of agency in data visualization design and bringing data democratization to a higher level. Relating to this issue, (O7) more authoring tools can be developed to empower ordinary people to create or customize affective visualization design. Currently, this type of tool is surprisingly rare [18, 72, 91]. **Evoking emotion appropriately and ethically.** Although this work focuses on presenting the arguments and design projects that support evoking emotion with data visualization design, we do not claim that eliciting emotion is always good or that objectivity and neutrality should be tossed away. Emotion is a double-edged sword, and an important future research direction is to (O8) examine the pitfalls or side effects of eliciting emotion with data visualization. For example, emotion may make people impulsive and prejudiced. Psychologist Bloom [12] argued for putting empathy aside in public decision-making because "empathy is biased; we are more prone to feel empathy for attractive people and for those who look like us or share our ethnic or national background...and empathy is narrow; it connects us to particular individuals, real or imagined, but is insensitive to numerical differences and statistical data." Another viewpoint is that embedding a pre-defined emotion in a visualization may hinder viewers' freedom to interpret the design emotionally on their own terms. For example, cartographer Kent [40] thought that "while the emotional association with a specific place would perhaps be affected by the amount of detail apparent in its portrayal...the absence of detail inherent to cartographic symbolization allows a free play of the imagination necessary for the development of emotions associated with that sense of place". In addition, the power of emotion may be exploited by people with malevolent intents.
For example, social science has a long history of examining how politicians and businessmen incite emotion to gain profit or deceive people with low literacy, as well as how emotional contagion aids the spread of violent or destructive behavior [49]. Therefore, it is crucial to (O9) study the ethical issues surrounding affective visualization design and develop guidelines that promote the responsible use of such design. ## 7 Conclusion In this work, we have identified three distinct research threads that study visualization and emotion and defined affective visualization design as data visualizations designed to communicate and influence emotion. We have reviewed 109 papers concerning affective visualization design to characterize this growing field and outline the necessity of considering emotion in visualization design. We have also analyzed 61 affective visualization design projects to explore the design space of such practices, including where to apply the design, what tasks the design can undertake, and how to perform the design. Affective visualization design is an ongoing research field that is full of creativity and innovation. It responds to various emerging needs in the real world and also deepens our understanding of data visualization, such as what constitutes a good data visualization, how a visualization should be evaluated, and what visualization can do for individuals and society. We believe that more investigation and discussion of affective visualization design will help the visualization community embrace more ideas, values, application scenarios, and wider users. ## Acknowledgments This work was supported by NSFC 62072338 and NSF Shanghai 20ZR1461500. We would like to thank all the reviewers for their valuable feedback.
2301.04793
PINN for Dynamical Partial Differential Equations is Not Training Deeper Networks Rather Learning Advection and Time Variance
The concepts and techniques of physics-informed neural networks (PINNs) are studied and limitations are identified to make the technique efficient at approximating dynamical equations. Potential research directions are explored for increasing the robustness of this technique for the solvability of partial differential equations. It is identified that PINNs potentially fail under stronger advection and longer time durations. Also, the optimization function and constraint posing need to be smarter. Even a shallow network is good for a lot of problems, while a powerful deeper network fails. A reservoir-computing-based recurrent neural network architecture is recommended for solving dynamical problems.
Siddharth Rout
2023-01-12T03:29:20Z
http://arxiv.org/abs/2301.04793v1
PINN for Dynamical Partial Differential Equations is Not Training Deeper Networks Rather Learning Advection and Time Variance ###### Abstract The concepts and techniques of physics-informed neural networks (PINNs) are studied and limitations are identified to make the technique efficient at approximating dynamical equations. Potential research directions are explored for increasing the robustness of this technique for the solvability of partial differential equations. It is identified that PINNs potentially fail under stronger advection and longer time durations. Also, the optimization function and constraint posing need to be smarter. Even a shallow network is good for a lot of problems, while a powerful deeper network fails. A reservoir-computing-based recurrent neural network architecture is recommended for solving dynamical problems. **Keywords:** Dynamical Systems, Chaos, Physics Informed Neural Networks, PINN, Kuramoto-Sivashinsky Equation, ODE solution, PDE solution, Differential Equations, Recurrent Neural Network, Reservoir Computing ## 1 Introduction The solution of differential equations has been an important topic in almost every field of the world, be it finance, mechanics, meteorology, etc. Starting from the days of Newton and Leibniz, solving differential equations has been core to developments in this world. Not all differential equations are solvable by hand, and limitations arise, especially when multiple independent variables come into the equation or build a system of equations. However, solving these equations is the need of the hour, and hence the focus has shifted from exact solutions to approximate solutions, as function operations and transformations are limiting in concept. There arose the generation of methods that solve equations by using the basic definitions or first principles of limits, discretization and numerical analysis[1]. Popular such techniques are finite element methods, finite volume methods, finite difference approximations, etc.
These methods are generalizable, and that is their benefit. Using these techniques we can solve almost any equation for any geometry. But with increased complexity come a couple of problems, namely how good the approximation is and the computational expense. Discretization gives us a long list of simplified approximate equations to solve. Though we know how to solve them, doing so requires computers to carry out those hectic mathematical calculations. For a stable and accurate solution, a lot of time and energy are consumed in the process[1]. Though we can calculate a lot of things in this world, we lack time, resources and money, and the problems could be endless. One type of differential equation, the dynamical system, is tough to solve after a certain range in time, and there are reasons for it. Dynamical systems are tough to solve as the solution may bifurcate, and that can make the system chaotic. In a chaotic system, a minute change in the initial condition or equation coefficients will have drastically different outcomes. This is sometimes referred to as 'The Butterfly Effect'. The aim of this project is to develop a function approximation method that can potentially replace computationally expensive solvers for dynamical systems. The one-dimensional Kuramoto-Sivashinsky equation is solved for trials and research[2, 3]. Function approximations or analytical solutions are better known for being lightweight[4]. These techniques can get rid of the three primary types of errors that are evident in full-order discretized approximation, namely instability, inaccuracy and shift[5]. A dynamical system is mainly susceptible to inaccuracy due to its extreme sensitivity to initial conditions. A benefit of function approximation is the ability of correction and reproduction. Also, an added benefit is extrapolability. Among function approximators, neural networks have been excellent candidates as universal approximators[6, 7].
In the past decade, deep neural networks, which are basically multiple layers of neural networks, have been used for various complex regression problems due to their ability to capture high-dimensional, strong non-linearity. These models are fitted to the data directly as an input-to-output mapping[7]. Neural networks can also be trained on differential equations by optimizing the residuals after fitting to random points in the input domain. Such models are called physics-informed neural networks, or popularly PINNs[4, 5, 8, 9]. Solving partial differential equations using PINNs is widely accepted by the scientific community. There are plenty of advantages of these methods over conventional methods. The major ones are the ability to solve a wide category of problems that were tough to solve otherwise. Moreover, these methods do not require meshing and discretization, which is sometimes a tough task. Another advantage is that, unlike other analytical models, they do not require data from full-order solutions to set the parameters. However, being newly developed, these techniques are not robust enough for solving complex equations like hyperbolic equations, strongly non-linear equations, strongly advective equations[5], chaotic dynamical systems, coupled systems of equations, shock wave equations, etc. ## 2 Dynamical Partial Differential Equations Activities in the world are mostly in four dimensions, three in space and one in time. Each new dimension adds a layer of complexity. Dynamical systems generally mean functions that describe the dependence of the state of a system on time. Henri Poincare was the first to identify the special behaviour of dynamical systems. The theory of dynamical systems is highly relevant in studying the behaviour of complex dynamics, usually in the form of differential equations, which makes them continuous dynamical systems.
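The sensitivity to initial conditions characteristic of chaotic dynamical systems can be made concrete with a small sketch (our construction, not from the paper): two trajectories of the Lorenz system started only 1e-8 apart, integrated with classical RK4, end up far from each other.

```python
import math

# Illustration of sensitive dependence on initial conditions: two nearly
# identical Lorenz trajectories diverge after a modest integration time.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    def shift(u, k, f):
        return tuple(a + f * b for a, b in zip(u, k))
    k1 = lorenz(s)
    k2 = lorenz(shift(s, k1, dt / 2))
    k3 = lorenz(shift(s, k2, dt / 2))
    k4 = lorenz(shift(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
dt = 0.01
for _ in range(3000):          # integrate to t = 30
    a, b = rk4_step(a, dt), rk4_step(b, dt)
print(dist(a, b))              # the tiny initial gap has grown by many orders
```

The classical parameter values (sigma = 10, rho = 28, beta = 8/3) are assumed here; any chaotic regime shows the same qualitative divergence.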
The major points of focus in this domain are the attractors, chaos, fractals and bifurcations that explain the long-term behaviour of states qualitatively. This helps in understanding the evolution of dynamical events like turbulence, storms, mixing fluids, environmental change, economic changes, planetary motions and many more. The necessary applications of dynamical systems theory are to find structural stability, Lyapunov time, bifurcation points, position tracking and quantitative approximations, which one way or the other determine the predictability of the state at a particular time. Predictability of dynamical systems is a tough job. Before the advent of computing machines, prediction required sophisticated mathematical techniques that were specific to particular classes of dynamical systems. These are sometimes among the toughest differential equations to solve. Also considering the other factors mentioned above, accurate prediction is a great deal for these kinds of systems. ## 3 Case Selection The cases below point out two major difficulties in solving differential equations clearly. The concepts are explained with reference to the terms and framework of the equation mentioned. The two equations shall be good examples for analysing the theory of PINNs. ### 1D Steady Advection-Diffusion Equation The differential equation below is the governing equation for steady one-dimensional flow of combined advection and diffusion phenomena. \[\alpha u_{x}=u_{xx}\] If we notice, \(\alpha\) is the weight of the advection term in the equation. That means the larger \(\alpha\) is, the more dominant is the advection effect, which introduces directional character and hence makes discrete approximation tougher. Figure 1 shows the difference in the solution with advection dominance. The higher the Peclet number, the more dominant the advection. The figure compares the solutions for Pe 1 and 50.
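The effect of advection dominance can also be seen directly from the closed-form solution. A small sketch (the boundary conditions u(0) = 0, u(1) = 1 are our assumption; the text does not state them): for alpha*u_x = u_xx the exact solution is u(x) = (exp(alpha*x) - 1)/(exp(alpha) - 1), and larger alpha squeezes the profile into a thin boundary layer near x = 1.

```python
import math

# Exact solution of alpha*u_x = u_xx on [0, 1] with assumed boundary
# conditions u(0) = 0, u(1) = 1.  expm1 keeps exp(x) - 1 accurate.

def u_exact(x, alpha):
    return math.expm1(alpha * x) / math.expm1(alpha)

for pe in (1, 50):
    # Sample a few points: the Pe = 50 profile stays near zero until x ~ 1.
    print(pe, [round(u_exact(i / 10, pe), 6) for i in (0, 5, 9, 10)])
```

The hard-to-resolve boundary layer at high Peclet number is exactly what conventional discretizations, and as discussed later PINNs as well, struggle with.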
Hence the numerical integration sees rapidly growing error that makes the solution unstable and inaccurate[10]. Hence, a major class of higher-order methods has been developed to tackle this particular issue. ### 1D Kuramoto-Sivashinsky Equation The equation below is the one-dimensional Kuramoto-Sivashinsky equation \[u_{t}+\alpha uu_{x}+\beta u_{xx}+\gamma u_{xxxx}=0\] The linear form of it is as below: \[u_{t}+\alpha u_{x}+\beta u_{xx}+\gamma u_{xxxx}=0\]

Figure 1: Solution of steady-state advection-diffusion

Figure 2: Solution of the one-dimensional Kuramoto-Sivashinsky equation

This equation has the advection, diffusion and dissipation effects. It is one of the equations whose solution is extremely sensitive to the initial condition[2, 3]. The higher-order terms in the expansion of the difference equation are very much relevant and hence sensitive for error propagation in time. Figure 2 shows the solution of a case of the one-dimensional KS equation from Julia code developed by Mahatab Lak et al. from the University of New Hampshire. ## 4 Neural Networks as Universal Function Approximator George Cybenko was the first to prove the arbitrary-width case using neural networks with sigmoid activation in 1989[6]. Later in the same year, Hornik et al. proved that multi-layer feed-forward networks are universal approximators[7]. Multi-layer artificial neural networks are composites of weighted sums of inputs passed through non-linear (activation) functions like \(\tanh()\), sigmoid(), etc. This yields an extremely potent, highly non-linear function with a large number of trainable parameters (weights and biases). This makes it a universal approximator. ## 5 Physics-informed Neural Networks Informing the physics to a neural network is a concept brought up by Lagaris et al.[4] in the late 1990s, using a neural network as a trial function to solve differential equations by reduction of the residuals, computed via automatic differentiation, at various points in the domain.
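A minimal sketch of this trial-function idea (our toy construction, not the original implementation): for u'(x) = -u(x) with u(0) = 1, the trial function u_trial(x) = 1 + x*N(x) satisfies the initial condition by construction, and the parameters of a one-neuron "network" N(x) = a*tanh(b*x + c) are tuned to minimize the squared residual at collocation points. A crude greedy random search stands in for the gradient-based optimization used in practice.

```python
import math, random

# Trial-function approach for u'(x) = -u(x), u(0) = 1 on [0, 1].
# u_trial(0) = 1 holds for any parameters, so only the residual is minimized.

def u_trial(x, p):
    a, b, c = p
    return 1.0 + x * a * math.tanh(b * x + c)

def du_trial(x, p):
    a, b, c = p
    t = math.tanh(b * x + c)
    return a * t + x * a * b * (1.0 - t * t)   # analytic derivative

xs = [i / 20 for i in range(21)]               # collocation points

def residual_loss(p):
    # Mean squared residual of u' + u = 0 at the collocation points.
    return sum((du_trial(x, p) + u_trial(x, p)) ** 2 for x in xs) / len(xs)

random.seed(0)
best = [0.0, 0.0, 0.0]
best_loss = residual_loss(best)                # equals 1.0 at initialization
for _ in range(20000):
    cand = [v + random.gauss(0.0, 0.1) for v in best]
    loss = residual_loss(cand)
    if loss < best_loss:
        best, best_loss = cand, loss

print(best_loss)                               # far below the initial 1.0
print(abs(u_trial(0.5, best) - math.exp(-0.5)))
```

Even this single-neuron trial function gets close to the exact solution exp(-x); the real method uses wider networks and proper gradient descent.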
The boundary constraints are forced into the neural network function by modifying it manually. In 2017, Raissi et al. proceeded by using more accurate automatic differentiation and deeper networks to approximate tougher problems[8, 9]. The novelty in their work comes from the way they pose the loss function to reduce the residual. They did not manually force the constraints by modifying the trial function but rather let the trial function fit the boundary and initial constraints by adding the mean squared errors from the data points satisfying the conditions as summed constraints to the mean squared residuals. This makes the technique very much generalizable. Almost all differential equations can be posed to be solved using this technique, which they named PINNs. ### Advancements in PINNs The unknown solution \(\mathbf{u}(t,\mathbf{x})\) is represented by a deep neural network \(\mathbf{u_{\theta}}(t,\mathbf{x})\), where \(\mathbf{\theta}\) denotes all tunable parameters of the network (e.g., weights and biases). The physics-informed model can be trained by minimizing the following loss function. \[\mathcal{L}(\mathbf{\theta})=\lambda_{ic}\mathcal{L}_{ic}(\mathbf{\theta})+\lambda_{bc}\mathcal{L}_{bc}(\mathbf{\theta})+\lambda_{r}\mathcal{L}_{r}(\mathbf{\theta}),\] where \[\mathcal{L}_{ic}(\mathbf{\theta})=\frac{1}{N_{ic}}\sum_{i=1}^{N_{ic}}\left|\mathbf{u_{\theta}}(0,\mathbf{x}_{ic}^{i})-\mathbf{g}(\mathbf{x}_{ic}^{i})\right|^{2},\qquad\mathcal{L}_{bc}(\mathbf{\theta})=\frac{1}{N_{bc}}\sum_{i=1}^{N_{bc}}\left|\mathcal{B}[\mathbf{u_{\theta}}](t_{bc}^{i},\mathbf{x}_{bc}^{i})\right|^{2},\] \[\mathcal{L}_{r}(\mathbf{\theta})=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left|\frac{\partial\mathbf{u_{\theta}}}{\partial t}(t_{r}^{i},\mathbf{x}_{r}^{i})+\mathcal{N}[\mathbf{u_{\theta}}](t_{r}^{i},\mathbf{x}_{r}^{i})\right|^{2},\] with \(\mathbf{g}\) the initial condition, \(\mathcal{B}\) the boundary operator and \(\mathcal{N}\) the differential operator of the PDE. Here \(\left\{\mathbf{x}_{ic}^{i}\right\}_{i=1}^{N_{ic}},\left\{t_{bc}^{i},\mathbf{x}_{bc}^{i}\right\}_{i=1}^{N_{bc}}\) and \(\left\{t_{r}^{i},\mathbf{x}_{r}^{i}\right\}_{i=1}^{N_{r}}\) can be the vertices of a fixed mesh or points that are randomly sampled at each iteration of a gradient descent algorithm. The hyper-parameters \(\left\{\lambda_{ic},\lambda_{bc},\lambda_{r}\right\}\) allow the flexibility of assigning a different learning rate to each individual loss term in order to balance their interplay during model training[12, 13]. These weights may be user-specified or tuned automatically during training.
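The composite loss can be made concrete with a small sketch (our construction, with finite differences standing in for automatic differentiation): for the heat equation u_t = u_xx on (0, pi) with u(t, 0) = u(t, pi) = 0 and u(0, x) = sin(x), the exact solution exp(-t)*sin(x) drives all three terms toward zero, while a wrong candidate is caught by the residual term.

```python
import math

# Evaluating the weighted PINN loss for the heat equation u_t = u_xx.
# Finite differences replace automatic differentiation for brevity.

H = 1e-4  # finite-difference step

def residual(u, t, x):
    u_t = (u(t + H, x) - u(t - H, x)) / (2 * H)
    u_xx = (u(t, x + H) - 2 * u(t, x) + u(t, x - H)) / H ** 2
    return u_t - u_xx

def pinn_loss(u, ic_pts, bc_pts, col_pts, lam_ic=1.0, lam_bc=1.0, lam_r=1.0):
    l_ic = sum((u(0.0, x) - math.sin(x)) ** 2 for x in ic_pts) / len(ic_pts)
    l_bc = sum(u(t, x) ** 2 for t, x in bc_pts) / len(bc_pts)  # u = 0 at ends
    l_r = sum(residual(u, t, x) ** 2 for t, x in col_pts) / len(col_pts)
    return lam_ic * l_ic + lam_bc * l_bc + lam_r * l_r

ic = [i * math.pi / 10 for i in range(11)]
bc = [(t / 10, x) for t in range(11) for x in (0.0, math.pi)]
col = [(t / 10, k * math.pi / 10) for t in range(1, 11) for k in range(1, 10)]

exact = lambda t, x: math.exp(-t) * math.sin(x)
wrong = lambda t, x: math.exp(-2.0 * t) * math.sin(x)   # decays too fast

print(pinn_loss(exact, ic, bc, col))   # near zero
print(pinn_loss(wrong, ic, bc, col))   # clearly positive
```

Note that the wrong candidate satisfies the initial and boundary conditions exactly; only the residual term exposes it, which is the role of the collocation points.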
### Advantages of PINNs over Other Neural Networks

The major advantage of this technique is that it does not require physical data to train the analytical model. Moreover, the technique is generalizable in the sense that various equations can be solved with exactly the same concept[4, 5, 8, 9, 12, 13]. Previous models required altering the learning function, depending on the number of coupled equations, the boundary conditions, etc., in order to force the constraints. Being a strong approximating function, the three major kinds of error, namely instability, inaccuracy, and shifting errors, can be addressed simultaneously; finite numerical techniques must handle them individually[5]. This kind of technique is therefore an excellent candidate for robust higher-order methods. A neural network is not just an approximator but a smart approximator: depending on the local physical properties, it can act differently, with switching-like behavior. In the case of dynamical systems in particular, the integrated error grows too rapidly as time progresses. Since PINN training is a regression-style optimization, fitting to measured physical data points through additional loss terms or regularization can be used to correct the approximating function.

## 6 Experiments with the selected cases

The defining qualitative property of dynamical problems is strong translational variance. The two major causes of PINNs performing poorly are advection dominance and time variance, as demonstrated below.

### PINNs for 1-D Steady Advection-Diffusion

The Peclet number is a good non-dimensional parameter for quantifying the dominance of advection over diffusion in an equation. It is the ratio of the advective transport rate to the diffusive transport rate. In our problem we can quantify it as the coefficient of the advective term times the length of the domain, divided by the coefficient of the diffusive term, which is \(\alpha\) in our particular case.
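For a standard model problem of this type (we assume \(\mathrm{Pe}\,u^{\prime}=u^{\prime\prime}\) on \([0,1]\) with \(u(0)=0\), \(u(1)=1\); the exact setup in [5] may differ), the role of the Peclet number can be made concrete through the closed-form solution:

```python
import math

# Exact solution of the model problem  pe * u'(x) = u''(x)  on [0, 1]
# with u(0) = 0 and u(1) = 1; pe plays the role of the Peclet number.
def u_exact(x, pe):
    return (math.exp(pe * x) - 1.0) / (math.exp(pe) - 1.0)

# As pe grows, the profile stays near zero over most of the domain and
# rises inside a thin boundary layer near x = 1 -- the steep feature that
# makes advection-dominated problems hard to resolve.
profile_at_09 = [u_exact(0.9, pe) for pe in (1, 8, 50)]
```

The value at \(x=0.9\) drops as Pe increases, i.e. the solution is pushed into an ever thinner layer at the outflow boundary, which is precisely the regime where both mesh-based schemes and PINNs struggle.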
A PINN can solve this problem, but there is a limit set by the advection character of the differential equation. No matter how deep or sophisticated we make the neural network, it is not possible to solve problems with a Peclet number above roughly 8. Figure 3 shows the results. For lower-Peclet-number problems, it is observed that deeper and wider layers are not necessary. On the positive side, conventional numerical techniques fail for problems with this value above 2, so these schemes can be used as shape functions that allow larger grids at similar accuracy. The work and figures in this section are sourced from the 2019 thesis titled "Numerical Approximation in CFD Problems Using Physics Informed Machine Learning"[5].

A parametric analysis is carried out to better understand performance and effectiveness, studying the impact of the non-linearity (activation) function as well as the loss optimization algorithm. Figure 4 records the loss values in tabular form. Among the various non-linearity functions, \(\tanh()\) and \(\tan()\) perform consistently better than sigmoid(), with \(\tanh()\) the clear winner. Figure 5 shows the trend of the loss value with iteration for various optimizers. The neural network is trained using several optimizers; L-BFGS-B and SLSQP perform better than the other specialised optimization techniques, with BFGS-type methods doing remarkably better than the rest. The prime reason could be that these optimizers are second order and perform better when optimizing multi-objective functions. It can also be noted that first-order techniques like Adam perform decently

Figure 4: Minimized losses with various activation functions while solving one-dimensional steady advection-diffusion, where C is the coefficient of the advective term.
Figure 5: Loss trend while solving one-dimensional steady advection-diffusion with various optimizers. Figure 3: Optimized architecture for solving one-dimensional steady advection-diffusion.

in the sense that they do not fail. An important observation is that the collocated PDE residuals and the fitting losses for the constraints are clearly not on a similar scale, which is atypical for regular data-driven neural network learning. The gradient pathology is also not smooth, and hence hard for other optimizers; guided optimization such as hill climbing helps in a few cases.

### PINNs for 1-D Kuramoto-Sivashinsky

Recent publications demonstrate how PINNs can solve the one-dimensional Kuramoto-Sivashinsky equation, which is among the standard dynamical equations that can turn chaotic. However, this works only for a small intersecting set of coefficients for the advection, diffusion, and dissipation terms. As demonstrated in the previous case, the dominance of advection hardens the optimization of the PINN. Here the problem is not only dynamical but also non-linear, and it contains spatial derivatives of several orders, which makes it difficult especially when the system becomes chaotic. CausalPINN by Wang et al. is considered the state-of-the-art PINN[13]. They correctly identified the multi-objective optimization problem caused by the difference in scales between residuals and constraints, as stated by Rout et al.[5], and devised weights for each loss term by normalizing with the cumulative loss terms. They were the first to solve the 1D KS equation. We can validate their model using the open-source code they provide. Figure 6 shows how CausalPINN performs over time for the case given in the figure. It can be seen that PINNs can now solve complex dynamical problems: the initial sine curls smoothly as expected while the constraints are obeyed, i.e. the initial curve is a clean sine function and the boundaries stay continuously at zero.
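The effect of such per-term weight normalization can be sketched in a few lines (a simplified illustration of the idea of rescaling loss terms to a common magnitude, not the exact CausalPINN weighting rule):

```python
import numpy as np

def balanced_weights(loss_terms, eps=1e-12):
    """Weight each loss term by the inverse of its magnitude (normalised by
    the mean magnitude) so that every term contributes comparably to the
    total. A simplified stand-in for adaptive loss weighting."""
    mags = np.array([abs(t) + eps for t in loss_terms])
    return mags.sum() / (len(mags) * mags)

# Residual vs. constraint loss on very different scales, as observed above:
terms = [1808.5, 0.0016]
w = balanced_weights(terms)
weighted = w * np.array(terms)
# After reweighting, both terms sit on the same scale.
```

Without such rebalancing, whichever term happens to be orders of magnitude larger dominates the gradient, and the other objective is effectively ignored during training.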
However, the typical form of the equation, with all coefficients set to 1, is the one usually studied. The state-of-the-art PINN fails to optimize it even after an effort equivalent to one day of run-time. The net loss is recorded as 33.263, of which the constraint loss was 0.0016. This suggests difficulty in fitting the PDE, whereas the PINN could manage to obey the constraints. The residual loss is noted to be 1808.506, whose weight in the loss function vanishes from the scale. This clearly exposes the multi-objective optimization issue.

## 7 Observations and Conclusion

Based on the experiments and analysis, a few points can be made. Simple addition, or mean-weighted addition, of the squared loss terms of residuals at collocation points and of the direct fitting terms for constraints mixes quantities of different scales and orders of magnitude. This is a major issue, as it can lead to non-Pareto-optimal solutions. It is also noticed that the loss function sometimes gets stuck in local optima, and the gradient pathologies are rough and uneven in the parameter hyperspace[5, 12]. Hence, stochastic first-order optimizers work; otherwise, higher-order optimizers suitable for constrained optimization, like SQP and BFGS, work while others fail[5]. A better representation of the loss function, such as appropriately or adaptively weighted losses, can help. Alternatively, the constraints can be forcefully enforced through a modified architecture or trial function, as explained by Isaac Lagaris et al.[4] Specifically, in the context of dynamical systems, recurrent neural networks (RNNs) could prove better candidates than plain deep networks[14]. Concrete reasoning has been provided by Eldad Haber et al., who show that RNNs can be written in the form of differential equations and hence fit into a theory for learning dynamical differential equations better[15, 16]. For chaotic systems in particular, reservoir computing has been shown to perform better[11].
Reservoir computers are a class of RNNs in which the intermediate nodes are randomly arranged and connected[11, 14]. They have random recurrent connections: the intermediate nodes are jumbled and entangled, yet they connect to the output layer linearly. The entangled architecture makes backpropagation hard, so for convenience only the final layer of weights is trainable. The trainable output layer makes the effectively non-linear network linear with respect to its trainable parameters, conserving strong non-linearity while keeping training easy. RNNs can also be given a PINN-style loss definition to solve chaotic problems such as turbulence and extreme-event prediction[17].

Figure 6: Snapshots of velocity (U) along the x-axis, overlapped through time, for the one-dimensional Kuramoto-Sivashinsky equation.

Ultimately, we can explain the errors and specify the right path for solving a dynamical system of partial differential equations by identifying the two prime causes of poor performance: advection dominance and time variance, as identified from the case studies. We can conclude that deep and bulky layers are not always required in the architecture; the criticality lies in how the problem is posed for optimization and in its optimizability, and gradient pathology must be taken care of. Deeper layers give the potential to capture extremely strong non-linearity in high-dimensional and strongly coupled systems of equations. Also, specifically for the time-variance characteristic, we should use recurrent neural networks, especially reservoir networks, which are in fact lightweight yet perform better. "Physics-Informed Recurrent Neural Networks (PIRNN) are the right path for solving dynamical and chaotic problems."
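A minimal echo-state-network sketch makes the "train only the readout" point concrete (our own toy example; the reservoir size, spectral radius, and the sine next-step task are arbitrary choices, not taken from [11, 14]):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: only the linear readout W_out is trained (by ridge
# regression), which is why reservoir computers stay cheap to train despite
# their strong internal non-linearity.
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius < 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Teach the readout to predict the next sample of a sine wave.
t = np.arange(400)
u = np.sin(0.2 * t)
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
err = np.mean((pred[100:] - y[100:]) ** 2)      # skip the initial transient
```

The recurrent weights `W` and input weights `W_in` are never updated; all the learning sits in the single linear solve for `W_out`, so the model is linear in its trainable parameters exactly as described above.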
2310.03965
Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models
Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason \textit{from scratch}. To address these issues, we propose \textbf{\textit{Thought Propagation} (TP)}, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs. These analogous problems are related to the input one, with reusable solutions and problem-solving strategies. Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch. TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12\% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13\% improvement of human preference in Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent Planning.
Junchi Yu, Ran He, Rex Ying
2023-10-06T01:40:09Z
http://arxiv.org/abs/2310.03965v3
# Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models

###### Abstract

Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason _from scratch_. To address these issues, we propose _Thought Propagation (TP)_, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs. These analogous problems are related to the input one, with reusable solutions and problem-solving strategies. Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch. TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13% improvement of human preference in Creative Writing, and 15% enhancement in the task completion rate of LLM-Agent Planning.

Figure 1: Reasoning from scratch cannot reuse the insight of solving similar problems and suffers from accumulated errors in intermediate reasoning stages. Thought Propagation explores analogous problems that are related to the input one and derives insights from solutions to analogous problems.
## 1 Introduction

Scaled-up Large Language Models (LLMs) (OpenAI, 2023) have achieved notable success in logical (Khot et al., 2022), arithmetic (Wei et al., 2022), and commonsense (Liu et al., 2021) reasoning with the development of prompting methods (Qiao et al., 2022). The early works employ few-shot input and output exemplars to prompt the LLM to perform simple reasoning (Brown et al., 2020). Recent methods decompose the complex reasoning process into intermediate reasoning steps to enable LLMs with multi-step reasoning abilities (Wei et al., 2022; Wang et al., 2022). Although many efforts have been made to improve complex reasoning with LLMs through crafted prompt design (Zhang et al., 2022), delicate decomposition (Khot et al., 2022; Zhou et al., 2022), and advanced searching schemes (Yao et al., 2023), these methods prompt the LLM to reason _from scratch_. This scheme is problematic for complex reasoning for two reasons. First, reasoning from scratch cannot reuse the insights of solving similar problems (Hall, 1989). Using such insights as prior knowledge can ease the difficulty of solving complex problems and help develop new solutions (Carbonell, 1985). One can further assess new solutions against rough ones obtained from scratch and yield refined solutions. Second, reasoning from scratch is sensitive to the errors made in intermediate stages when facing tasks involving multi-step reasoning. As shown in Figure 1, these errors accumulate as they misguide the subsequent searching and planning, which eventually leads to invalid reasoning outcomes (Dziri et al., 2023; Yao et al., 2022). These two challenges motivate us to develop an alternative framework for LLM reasoning that amends the limitations of reasoning from scratch. The study of human cognition presents a promising way to amend the limitations of reasoning from scratch, primarily through the application of analogy (Bartha, 2013).
The analogy highlights the occurrence of entities' relationship in the form of "A is to B what C is to D". By discerning and applying such analogical reasoning, humans can stand on the shoulder of existing knowledge to pioneer new concepts in novel domains. A compelling historical example is the discovery of Coulomb's Law, which can be traced back to the analogy drawn between gravitational forces governing celestial bodies and electrical forces acting upon charged particles (Priestley, 1775). Such a framework has been recently proven to be efficient in relational reasoning between entities on knowledge graphs (Zhang et al., 2022; Yuan et al., 2023). However, a general framework of harnessing analogies among problems to facilitate LLM reasoning, to the best of our knowledge, is yet to be explored. Hence, we advance the traditional analogical reasoning and propose a novel _Thought Propagation (TP)_ framework to amend existing reasoning-from-scratch methods and enhance the complex reasoning ability of LLMs. Given an input problem, TP first prompts LLMs to propose a set of analogous problems that are related to the input one. Then, the input problem with its analogous counterpart is solved by existing prompting approaches such as Chain-of-Thought (CoT) Wei et al. (2022). An aggregation module further aggregates the solutions from these analogous problems, facilitating input problem-solving through two distinct avenues. First, this module reuses the solutions derived from analogous problems to generate a new solution to the input problem. The aggregation module assesses the new solution produced by the analogical approach with the initial one obtained from scratch to output a refined result for the input problem. Second, this module compares the input problem with its analogous counterparts and devises high-level plans based on the results of analogous problems. 
These plans are then executed by the LLM to rectify its intermediate reasoning steps when addressing the input problem. TP is compatible with existing approaches, allowing plug-and-play generalization and enhancement to various tasks ranging from optimization problems to autonomous agent planning. Thus, it reduces intensive labor in task-specific prompt engineering. We test the proposed method on three tasks, including Shortest-path Reasoning, Creative Writing, and LLM-Agent Planning. These tasks require searching over graph-structure data, open-ended writing, and long-trial planning, which challenge existing methods for LLM reasoning. Experimental results show that Thought Propagation can generalize to a wide range of different reasoning tasks and enjoys superior performances on all of them. ## 2 Related Work **Graph Neural Network**. Graph neural networks (GNNs) are expressive network backbones of deep graph learning due to their inductive bias on graph-structured data (Hamilton et al., 2017; Kipf and Welling, 2017). GNNs aggregate the neighborhood node messages to obtain node embeddings (Gilmer et al., 2017). Recent works incorporate parameterized GNNs with Large Language Models (LLMs) for graph-related tasks, such as graph explainability (He et al., 2023), classification (Chen et al., 2023b; Qian et al., 2023), and question answering (Jiang et al., 2023). Differently, our work aims to improve the general reasoning ability of LLMs using problem analogy without fine-tuning. **Analogical Reasoning**. The analogical reasoning has been applied to visual reasoning (Malkinski and Mandziuk, 2022), natural language reasoning (Chen et al., 2022; Sultan and Shahaf, 2022), and knowledge graph reasoning (Zhang et al., 2022a). These methods train parameterized neural networks to perform relational reasoning between entities. Early attempts have shown LLMs can do analogical reasoning just like humans by case study (Webb et al., 2023). 
Recent works explore analogy generation and analogy reasoning with knowledge graphs on LLMs (Yuan et al., 2023; Bhavya et al., 2022; 2023). However, they focus on different applications instead of general reasoning problems. Moreover, they rely on large-scale external knowledge bases to store entity relationships to perform analogical reasoning, which is expensive for general reasoning tasks. Thus, a general analogical approach to LLM complex reasoning on general tasks is still lacking. **Prompt-based Large Language Model Reasoning**. Originally designed for text generation, the Large Language Models (LLMs) have succeeded in many applications with prompting methods (Liu et al., 2023b; Zhao et al., 2023). Early methods employ input and output (IO) prompting that appends pairs of problem and solution exemplars on top of the input problem (Brown et al., 2020). Recent methods decompose the complex reasoning process into intermediate reasoning steps. They use multi-step prompting (Wei et al., 2022; Wang et al., 2022a; Zhang et al., 2022b) or recursive problem decomposition (Zhou et al., 2022; Khot et al., 2022) to enable multi-step LLM reasoning. Others design searching schemes for LLMs (Yao et al., 2023; Besta et al., 2023). However, they solve each problem from scratch. Thus, they cannot reuse the insights of solving similar problems. Moreover, they suffer from accumulated errors in intermediate reasoning steps. **LLM as Autonomous Agents**. LLM-Agents can interface with tools (Cai et al., 2023; Schick et al., 2023; Chen et al., 2023a; Liu et al., 2022), other LLMs (Wu et al., 2023; Li et al., 2023; Chan et al., 2023), and humans (Wang et al., 2023c; Liu et al., 2023a) to autonomously make decisions and formulate planning to solve tasks with feedback. The key component of LLM-Agents is the LLM-based planning module, which processes the environmental feedback and makes plans (Wang et al., 2023b; Zhu et al., 2023a).
When deployed in long-trial decision and planning scenarios, LLM-Agents must make multiple rounds of action and planning. As LLMs are likely to hallucinate, LLM-Agents accumulate errors and finally fail to complete planning tasks (Yao et al., 2022). Recent work amends this issue by reflecting on previous failures in completing the same task when LLM-Agents start one more try at task completion (Shinn et al., 2023). Differently, we apply the proposed method to help LLM-Agents summarize their successful experience in completing other similar tasks and formulate a plan to improve task completion. **More Discussion with Retrieval-augmented LLMs**. The retrieval-augmented LLM is proposed to alleviate the hallucination phenomenon and improve the output quality of LLMs (Asai et al., 2023; Mialon et al., 2023; Shi et al., 2023). For an input question, the retrieval-augmented LLM first queries an external database with billion-level tokens (Borgeaud et al., 2022; Lan et al., 2023; Zhu et al., 2023b) to retrieve a subset of the text corpus to construct the output answer. The retrieval-augmented LLM achieves improved question-answering quality with fewer parameters than a standard LLM (Mialon et al., 2023) and has been applied to many downstream tasks such as text/multi-modal generation (Lan et al., 2023; Yasunaga et al., 2023), question answering (Borgeaud et al., 2022; Izacard et al., 2022), and biomedical applications (Wang et al., 2022b). The retrieval-augmented LLM has been widely applied to many tasks (Lan et al., 2023; Yasunaga et al., 2023; Izacard et al., 2022; Wang et al., 2022b).

Figure 2: An example of existing methods on the shortest-path reasoning task. Although the graph in (a) is quite simple, these methods only prompt the LLM to find sub-optimal solutions (b, c), and even repeatedly visit intermediate nodes (d), due to reasoning from scratch.
Differently, the proposed method requires no external database to query from but aggregates the knowledge of solving analogous problems to hint at reasoning. ## 3 Preliminaries Denote the reasoning problem and the solution as \(\mathbf{p}\in\mathcal{P}\) and \(\mathbf{s}\in\mathcal{S}\). \(\mathcal{P}\) and \(\mathcal{S}\) are the problem and solution space. Let the LLM be \(f_{\theta}\) where \(\theta\) denotes model weights. **IO Prompting**. IO prompting (Brown et al., 2020) uses task descriptions and few-shot pairs of Input and Output (IO) prompting demonstrations to assist LLMs in reasoning to solve the input problem \(\mathbf{p}\) by \(\mathbf{s}=f_{\theta}(\mathbf{p})\). Thus, we denote the reasoning path of IO prompting as \(\mathbf{p}\rightarrow\mathbf{s}\). As shown in Figure 3 (a), the reasoning path of IO prompting is one-step. One-step reasoning is insufficient to solve complex problems which involve multi-step reasoning. **Chain-of-Thought Prompting**. Chain-of-Thought (CoT) Prompting (Wei et al., 2022) enables complex reasoning ability with LLMs by decomposing the reasoning path of these problems into multi-step: \(\mathbf{p}\rightarrow\mathbf{z}_{1}\cdots\mathbf{z}_{k}\rightarrow\mathbf{s}\). Here \(\mathbf{z}_{1}\cdots\mathbf{z}_{k}\) are sub-solutions in intermediate reasoning steps, a.k.a 'thoughts'. CoT uses few-shot prompts to prompt LLM to output reasoning results together with its intermediate reasoning steps: \(\{\mathbf{z}_{1},\cdots\mathbf{z}_{k},\mathbf{s}\}=f_{\theta}(\mathbf{p})\). Notice this framework can be done sequentially by \(\mathbf{z}_{i}=f_{\theta}(\mathbf{p};\{\mathbf{z}_{j}|j<i\})\) until reaches the final solution \(\mathbf{s}\)(Zhou et al., 2022). **Tree-of-Thought Prompting**. Tree-of-Thought (ToT) Prompting formulates LLM reasoning as searching in the solution space with heuristics, such as breadth-first searching (BFS) and depth-first searching (DFS) Yao et al. (2023). 
When it reaches the sub-solution \(\mathbf{z}_{i}\) at the \(i\)-th step, ToT employs an LLM to propose sub-solution candidates \(\{\mathbf{z}_{i+1}^{n}|n=1\cdots N\}=f_{\theta}(\mathbf{p};\{\mathbf{z}_{j}|j \leq i\})\). Then, it leverages an LLM to evaluate \(\{\mathbf{z}_{i+1}^{n}|n=1\cdots N\}\) for the best one and chooses it as the next sub-solution \(\mathbf{z}_{i+1}\). The above searching process is repeated until ToT obtains a satisfactory solution. Although these methods improve the complex reasoning ability of LLMs with different prompting, they all prompt the LLM to reason from scratch. This reasoning scheme cannot reuse the prior knowledge in problem-solving and suffers from accumulated errors during multi-step reasoning. Thus, they fall short in tasks involving optimization and multi-step searching. As shown in Figure 2, IO, CoT, and ToT prompting all fail to find the optimal shortest path \(0\to 3\to 4\) from Node 0 to Node 4 in a very simple graph, which can be easily solved by humans. When using multi-step searching for this task with ToT, it even falsely searches backward and visits Node 3 twice. The result of ToT on a more complex graph is shown in Figure 4 (b.2).

Figure 3: The illustrative comparison between Thought Propagation (TP) and other representative methods. For an input problem \(\mathbf{p}\), IO, CoT, and ToT reason from scratch to yield the solution \(\mathbf{s}\). Differently, TP explores analogous problems to improve solving the input problem.

## 4 Methodology

When humans encounter a new problem, they often compare it to familiar ones with similar characteristics, a process known as analogical reasoning Carbonell (1985). Thus, we aim to leverage insights from exploring problems that are analogous to the input problem, i.e. analogous problems, to facilitate input problem-solving. We introduce Thought Propagation (TP), a versatile analogical framework for LLM reasoning.
As shown in Figure 3 (d), TP actively generates analogous problems related to the input problem during the reasoning process, all without relying on external knowledge bases. It then combines the solutions from these proposed analogous problems to facilitate solving the input problem by creating an updated solution or formulating a high-level plan. We introduce the three modules of TP: LLM Propose, LLM Solve, and LLM Aggregate. Then, we give a general setup of the proposed framework and leave the implementation for each task in Section 5. **LLM Propose**. Given an input problem, LLM Propose generates a set of analogous problems. Solving these analogous problems should provide distinctive insights to help solve the input one. Thus, LLM Propose generates analogous problems in two perspectives. First, the solutions from analogous problems can be transferred to yield new solutions to the input problem. In this manner, TP can reuse the solutions from analogous problems to develop new solutions to the input problem in an analogical approach instead of reasoning from scratch. Second, solving analogous problems can produce high-level plans for the input problem-solving. In this way, TP can rectify the errors during planning and improve the multi-step reasoning of the input problem. Both ways to generate analogous problems are instantiated using few-shot problem exemplars or zero-shot prompting. **LLM Solve**. LLM Solve serves a dual purpose: solving the input problem to produce an initial solution and solving the analogous problems proposed by LLM Propose. This module can be instantiated using existing prompting approaches such as CoT (Wei et al., 2022). Although the solutions to analogous problems are not expert-level, the following aggregation module can assess these solutions and use the most promising one to instantiate analogical reasoning. Moreover, we introduce a multi-layer implementation of TP to improve solutions to analogous problems further. **LLM Aggregate**. 
LLM Aggregate aggregates solutions from analogous problems to enhance solving the input problem. This module is coupled to the LLM Propose module, since it depends on the relationship between the input problem and its analogous counterparts generated by LLM Propose. Thus, LLM Aggregate utilizes the solutions from analogous problems in two ways. First, it prompts the LLM to develop new solutions to the input problem based on the results of analogous problems. An example of this manner is shown in Figure 4 (c): if we have already obtained the shortest paths to the neighboring nodes of the target node, only one-step reasoning is required to yield a new path to the target node. Notice this manner is different from recursive problem decomposition Khot et al. (2022), since it only requires one-step reasoning to develop a new solution to the input problem. Second, this module prompts the LLM to derive high-level plans to solve the input problem using the solutions from analogous problems. The plan is knowledge-intensive, so the LLM can carry this plan into every round of decision-making when solving long-trial planning tasks. After generating new solutions or plans using the results of analogous problems, the LLM evaluates these outputs and chooses the best one to improve input problem-solving.

Figure 4: An example of TP and ToT for the Shortest-path Task. ToT (b) fails to solve the problem in (a) due to the accumulated errors in intermediate reasoning steps. Building upon solutions from analogous problems, TP (c) refines the initial sub-optimal solution and finally finds the optimal one.

**Multi-layer Implementation.** As shown in Figure 3 (d), we can stack \(K\) layers of TP to leverage the solutions from \(K\)-hop analogous problems to improve the solution to the input problem.
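Putting the three modules together, the layered scheme can be sketched as a short recursive loop; here `llm` is a stand-in for the prompted model calls, and all function names and the toy stub are our own illustration, not the paper's implementation:

```python
def thought_propagation(problem, llm, k=1):
    """Toy skeleton of TP. `llm(task, **ctx)` stands in for a prompted LLM
    call; "solve", "propose", and "aggregate" name the three modules."""
    initial = llm("solve", problem=problem)          # reason from scratch
    if k == 0:                                       # base case: plain IO/CoT/ToT
        return initial
    analogous = llm("propose", problem=problem)      # LLM Propose
    hints = [thought_propagation(p, llm, k - 1)      # solve (k-1)-layer
             for p in analogous]                     # analogous problems
    return llm("aggregate", problem=problem,         # LLM Aggregate: refine the
               initial=initial, hints=hints)         # initial solution via hints

# A stub "LLM" that solves by echoing, proposes one analogous problem,
# and aggregates by keeping the longest candidate string:
def stub(task, **ctx):
    if task == "solve":
        return "sol:" + ctx["problem"]
    if task == "propose":
        return [ctx["problem"] + "'"]
    return max([ctx["initial"]] + ctx["hints"], key=len)

out = thought_propagation("p", stub, k=2)
```

The recursion makes the layering explicit: each layer proposes analogous problems for the layer below, and refined solutions propagate back up toward the input problem.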
Thus, existing methods, such as IO, CoT, ToT, etc., can be viewed as special cases of TP with \(K=0\), since they solve each problem from scratch and do not instantiate analogical reasoning. By setting \(K=1\), TP aggregates the solutions from 1-hop analogous problems to refine the solution to the input one. TP further allows hierarchical refinement by setting \(K\geq 2\). In this case, the problems in the \(i\)-th layer are the analogous problems of those in the \((i-1)\)-th layer (\(i\geq 1\)). Thus, we can use the solutions from the \(i\)-th layer's analogous problems to refine the solutions of the \((i-1)\)-th layer's analogous problems. This hierarchical refinement continues until the solution to the input problem is refined. **General Setup and Recipe.** TP allows plug-and-play generalization and enhancement across different tasks, since we can use existing prompting methods for LLM reasoning to instantiate LLM Solve. Using IO prompting and CoT is sufficient for most tasks in our experiments. For more complex problems involving autonomous planning and exploration, prompting methods that synergize thinking and action, such as ReAct (Yao et al., 2022), are needed. Although TP builds upon existing prompting methods, it aids their reasoning-from-scratch manner with hints from solving analogous problems, leading to significant performance gains. **Complexity Analysis.** The complexity of Thought Propagation mainly comes from two perspectives. Firstly, the complexity increases exponentially with the layer number \(K\). However, the \(K\)-hop analogous problems are intuitively less related to the input problem, and considering such long-range analogous problems only leads to marginal performance gain. Thus, we only consider implementing up to 2 layers of TP to trade off performance against complexity. We find the performance gain of 2-layer TP is marginal when compared with 1-layer TP, but 2-layer TP leads to more token expenses.
1-layer TP achieves very competitive performance against the baselines with no significant increase in token expenses. For example, 1-layer TP outperforms ToT by a large margin on different LLM backends while sharing similar token expenses. Second, instantiating LLM Solve under the 5-shot setting is more expensive than under the 0-shot setting due to the longer prompting exemplars. We provide a detailed quantitative complexity analysis in Section 5.1.

## 5 Experiments

We employ three challenging tasks, namely Shortest-path Reasoning, Creative Writing, and LLM-Agent Planning, to evaluate the proposed method (TP). We generate 100 shortest-path problems with non-trivial solutions for the Shortest-path Reasoning task. We employ the dataset proposed by Yao et al. (2023) with 100 writing problems for the Creative Writing task. We use the ALFWorld (Shridhar et al., 2021) game suite to instantiate the LLM-Agent Planning task with 134 environments. TP finds the most optimal shortest paths, generates the most coherent messages, and achieves the highest task completion rate across the three tasks.

\begin{table} \begin{tabular}{c|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{LLM-Backend} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{0-shot} & \multicolumn{3}{c|}{1-shot} & \multicolumn{3}{c}{5-shot} \\ \cline{3-11} & & OR\(\uparrow\) & FR\(\uparrow\) & OLR\(\downarrow\) & OR\(\uparrow\) & FR\(\uparrow\) & OLR\(\downarrow\) & OR\(\uparrow\) & FR\(\uparrow\) & OLR\(\downarrow\) \\ \hline \multirow{4}{*}{PaLM-2} & IO & 0.14 & 0.37 & 0.62 & 0.28 & 0.48 & 0.43 & 0.26 & 0.41 & **0.35** \\ & CoT & 0.24 & 0.52 & 0.40 & 0.33 & 0.45 & 0.41 & 0.29 & 0.56 & 0.39 \\ & BaG & 0.23 & 0.47 & 0.44 & 0.28 & 0.52 & 0.45 & 0.26 & 0.52 & 0.51 \\ & ToT & - & - & - & - & - & - & - & - & - \\ & **TP** & **0.36** & **0.57** & **0.37** & **0.38** & **0.59** & **0.36** & **0.37** & **0.62** & 0.36 \\ \hline \multirow{4}{*}{GPT-3.5} & IO & 0.33 & 0.50 & 0.17 & 0.62 & 0.86 & 0.15 & 0.61 & 0.9 & 0.27 \\ & CoT & 0.26 & 0.35 & 0.13 & 0.58 & 0.85 & 0.16 & 0.52 & 0.85 & 0.32 \\ & BaG & 0.25 & 0.32 & 0.13 & 0.61 & 0.87 & 0.14 & 0.64 & 0.86 & 0.13 \\ & ToT & 0.22 & 0.42 & 0.82 & 0.38 & 0.79 & 0.72 & 0.58 & 0.93 & 0.32 \\ & **TP** & **0.65** & **0.89** & **0.12** & **0.74** & **0.89** & **0.07** & **0.73** & **0.91** & **0.10** \\ \hline \multirow{4}{*}{GPT-4} & IO & 0.78 & 1.00 & 0.10 & 0.80 & 0.99 & 0.08 & 0.81 & 1.00 & 0.08 \\ & CoT & 0.76 & 1.00 & 0.10 & 0.75 & 1.00 & 0.11 & 0.78 & 1.00 & 0.08 \\ & BaG & 0.77 & 0.98 & 0.09 & 0.80 & 0.99 & 0.09 & 0.78 & 1.00 & 0.11 \\ & ToT & 0.46 & 0.84 & 0.52 & 0.46 & 0.73 & 0.40 & 0.77 & 1.00 & 0.07 \\ & **TP** & **0.88** & **1.00** & **0.05** & **0.88** & **1.00** & **0.04** & **0.86** & **1.00** & **0.05** \\ \hline \hline \end{tabular} \end{table} Table 1: The performance of TP and other baselines on the Shortest-path Reasoning Task.

### Shortest-path Reasoning

The Shortest-path Reasoning task is to find the shortest path from the source node to the target node in a weighted undirected graph. This task is challenging for LLMs since 1. the graph structure does not conform to the sequential corpora used to train LLMs, and 2. this discrete optimization problem requires searching an explosively large space. **Task Setup**. For an input graph, the LLM is required to find the shortest path from the source node to the target node using the baselines and TP. For a graph with \(N\) nodes, the source node is set to Node \(0\) and the target node is set to Node \((N-1)\). We filter out the trivial cases where the shortest path contains only one edge. The detailed task setup is in the Appendix. **Baselines and LLM Backends**. We use standard (IO) prompting (Brown et al., 2020), Chain-of-Thought (CoT) (Wei et al., 2022), Build-a-Graph (BaG) (Wang et al., 2023a), and Tree-of-Thought (ToT) (Yao et al., 2023) as the baseline methods. The implementation and prompting exemplars of all the baselines are shown in the Appendix.
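For reference, the ground-truth shortest paths and the one-edge triviality filter from the task setup can be obtained with a textbook Dijkstra routine; the adjacency-list encoding here is an illustrative assumption, not the paper's exact data format:

```python
import heapq

# Dijkstra over an undirected weighted graph given as {u: [(v, w), ...]},
# used to compute the reference shortest path from Node 0 to Node N-1 and
# to drop trivial instances whose optimum is a single edge.

def dijkstra(adj, src, dst):
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

def is_trivial(adj, src, dst):
    path, _ = dijkstra(adj, src, dst)
    return len(path) == 2  # optimum is a single edge

adj = {0: [(1, 1), (3, 4)], 1: [(0, 1), (4, 1)],
       3: [(0, 4), (4, 1)], 4: [(1, 1), (3, 1)]}
print(dijkstra(adj, 0, 4))  # ([0, 1, 4], 2)
```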
We evaluate all the methods under 0-shot, 1-shot, and 5-shot prompting settings. We conduct experiments on three LLM backends: PaLM 2 (Bison) (Anil et al., 2023), GPT-3.5 (OpenAI, 2022), and GPT-4 (OpenAI, 2023). **Thought Propagation Setup**. Suppose the input problem is finding the shortest path from Node \(0\) to Node \((N-1)\) in a graph \(G_{i}\) with \(N\) nodes. LLM Propose prompts the LLM to propose the analogous problems of finding the shortest paths from Node \(0\) to the neighborhood nodes of Node \((N-1)\). LLM Solve is implemented with IO prompting under the 0-shot/1-shot/5-shot settings. This module outputs the initial solutions to the input problem and the analogous problems. Afterward, LLM Aggregate uses the results of the analogous problems to develop a new path for the input problem. It then compares the new path with the initial path and outputs the better one. The implementation and prompts are shown in the Appendix. **Evaluation Metrics**. Denote the length of the shortest path in graph \(G_{i}\) as \(L_{i}^{*}\), and let the length of the valid path output by the LLM be \(L_{i}\). \(N\) is the total number of graphs, and \(N_{optimal}\) and \(N_{feasible}\) are the numbers of optimal and valid paths output by the LLMs. We propose three metrics to evaluate performance on Shortest-path Reasoning. **Optimal Rate (OR) \(=N_{optimal}/N\)** measures the percentage of paths generated by LLMs that are optimal paths; higher is better. **Feasible Rate (FR) \(=N_{feasible}/N\)** measures the percentage of paths generated by LLMs that are valid paths; higher is better. **Over-Length Rate (OLR) \(=\sum_{i=1}^{N_{feasible}}{(L_{i}-L_{i}^{*})/L_{i}^{*}}\)** measures the over-length of the valid paths generated by LLMs relative to the optimal ones; lower is better. **Results**. The quantitative results of TP and the baselines are shown in Table 1.
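These metric definitions translate directly into code; encoding each model output as a (predicted length or None-if-invalid, optimal length) pair is an assumption made for illustration:

```python
# OR, FR, OLR as defined above. `results` holds one (L_i or None, L_i_star)
# pair per graph, where None marks an invalid (infeasible) output path.
# OLR is the plain sum over feasible paths, mirroring the formula above.

def shortest_path_metrics(results):
    n = len(results)
    feasible = [(l, opt) for l, opt in results if l is not None]
    n_optimal = sum(1 for l, opt in feasible if l == opt)
    olr = sum((l - opt) / opt for l, opt in feasible)
    return {"OR": n_optimal / n, "FR": len(feasible) / n, "OLR": olr}

stats = shortest_path_metrics([(2, 2), (6, 4), (None, 3), (5, 5)])
print(stats)  # {'OR': 0.5, 'FR': 0.75, 'OLR': 0.5}
```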
TP achieves a significant performance gain over the baselines, generating the most optimal and valid shortest paths across three LLM backends with different model capacities. Moreover, the valid paths generated by TP are the closest to the optimal paths among all methods, as reflected by the lowest Over-Length Rate (OLR). On the PaLM-2 backend, ToT fails to find valid paths from source nodes to target nodes. On the GPT-3.5 and GPT-4 backends, ToT underperforms IO prompting. We find that ToT sometimes searches backward or even fails to find a valid path due to the accumulated error shown in Figure 4. CoT only outperforms IO on PaLM-2, where IO performs badly; we observe no significant performance gain of CoT over IO on the other LLM backends.

Figure 5: Study on the complexity and performance of TP under different configurations.

Although the 1-shot setting leads to performance gains over 0-shot for most prompting methods, the gains of the 5-shot setting over 1-shot are unexpectedly marginal, and 5-shot is sometimes even worse than 1-shot. There are two reasons for this phenomenon. First, the 5-shot setting feeds long prompting exemplars to the LLM, which potentially contain more redundant information. Second, the 5-shot setting sometimes leads to output cutoff due to the maximal token limit of LLMs. We leave an in-depth exploration of this phenomenon to future work. **Impact of Layers on Performance**. We further study the influence of the number of TP layers on complexity and performance in the Shortest-path Task. As shown in Figure 5, 1-layer TP has similar token costs to ToT in different settings, yet already achieves very competitive performance in finding the optimal shortest path. The performance gain of 1-layer TP over 0-layer TP is also significant. Although 2-layer TP achieves the best performance, as shown in Table 1, its gain over 1-layer TP is less significant. Moreover, Figure 5 (a)
indicates the increased token cost of TP with 2 layers. Thus, we aim to harness multi-hop analogous problems at lower expense in future work. More results are shown in the Appendix.

### Creative Writing

We proceed to evaluate Thought Propagation on the Creative Writing task (Yao et al., 2023). Given 4 randomly sampled sentences, the goal of this task is to generate 4 paragraphs ending with these sentences, respectively, that together form a coherent message. This task challenges LLM reasoning with highly creative thinking and planning. **Task Setup**. We follow the task setup proposed by Yao et al. (2023), which consists of 100 test instances. We use the coherent score (a 1-10 scalar score generated by GPT-4) and a user study to evaluate the coherence of the generated messages. The details of the evaluation are in the Appendix. **Baselines and LLM Backends**. We consider three baselines: IO prompting (Brown et al., 2020), CoT (Wei et al., 2022), and ToT (Yao et al., 2023). All these methods use zero-shot prompts due to the creative nature of writing (Yao et al., 2023). The baseline setup and prompting exemplars are shown in the Appendix. We instantiate each method using the GPT-3.5 and GPT-4 backends. **Thought Propagation Setup**. We build Thought Propagation with one layer for this task to maintain a fair comparison with the baselines. Every module of Thought Propagation is implemented with zero-shot prompts. LLM Propose rephrases the four input sentences using the simple prompt "Rephrase the input sentences but do not change their meanings or orders.", producing an analogous problem: generate a writing plan for a message built from the rephrased sentences. This module generates 5 analogous problems to ensure a fair comparison with the baselines. LLM Solve uses CoT prompting to generate writing plans for four paragraphs ending with the four given sentences. This module is applied to the input problem and the proposed analogous problems, leading to 6 plans.
Since the rephrased sentences share similar contextual information with the input sentences, their writing plans potentially apply to the input problem. Thus, LLM Aggregate evaluates all 6 plans output by LLM Solve and selects the most promising plan for the input problem. Finally, the LLM is asked to write the whole message in four paragraphs using this plan. The prompting exemplars of TP in the Creative Writing task are shown in the Appendix. **Results**. Table 2 shows the performance of TP and the baselines with GPT-3.5 and GPT-4. Thought Propagation outperforms the baselines with the highest coherent scores on both GPT-3.5 and GPT-4 backends. Moreover, TP achieves the highest human preference in the user study. We additionally find that all methods perform better on GPT-4 due to its improved model capability.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline Metric & \multicolumn{2}{c|}{Coherent Score} & \multicolumn{2}{c}{User Study} \\ \hline LLM-Backend & GPT-3.5 & GPT-4 & GPT-3.5 & GPT-4 \\ \hline IO & \(6.087\pm 2.229\) & \(6.193\pm 1.953\) & 14\% & 7\% \\ CoT & \(6.654\pm 2.201\) & \(6.927\pm 1.508\) & 21\% & 15\% \\ ToT & \(6.856\pm 1.975\) & \(7.684\pm 1.141\) & 26\% & 33\% \\ **TP** & \(\mathbf{7.000\pm.1783}\) & \(\mathbf{7.989\pm.1453}\) & \(\mathbf{39\%}\) & \(\mathbf{45\%}\) \\ \hline \end{tabular} \end{table} Table 2: The performance of Thought Propagation (TP) and baselines on the Creative Writing Task.

### LLM-Agent Planning

LLM-Agents use LLMs as the core component to interact with environments and autonomously make plans and decisions. We study the capability of TP to formulate high-level, knowledge-intensive plans for LLM-Agents in an analogical way to improve the task completion rate. **Task Setup**. ALFWorld (Shridhar et al., 2021) is a text-based game suite with various interactive housework environments, aligned with ALFRED and TextWorld (Cote et al., 2019; Shridhar et al., 2020).
It contains six types of tasks with 134 unseen environments for evaluation (Yao et al., 2022; Shinn et al., 2023). **Baselines and LLM Backends**. BUTLER is a trainable parameterized method based on reinforcement learning (Shridhar et al., 2021). ReAct (Yao et al., 2022) builds LLM-Agents with a synergy between reasoning traces and action trials. Act (Yao et al., 2022) removes the reasoning trace of ReAct. Reflexion improves ReAct with verbal reflections on previous failures in the same task to refine the planning of new trials (Shinn et al., 2023). We run Reflexion for 6 trials since its performance is stable after 4 trials. We use GPT-3 for the LLM-Agents, following Shinn et al. (2023). **Thought Propagation Setup**. Unlike Reflexion, which reflects upon previous failures in the **same task** to help task completion in the next planning trial, Thought Propagation aggregates useful information from successful trials in **similar but different tasks** to improve task completion. Thus, LLM Propose uses a zero-shot prompt to assess a similarity score between the original task and each of the remaining tasks with successful planning trials. The tasks with the top two similarity scores are treated as two analogous problems. LLM Solve employs ReAct to instantiate LLM-Agent planning in the original task, following Reflexion (Shinn et al., 2023). LLM Aggregate uses a zero-shot prompt to formulate two plans for completing the original problem, based on the successful trials of the analogous problems and the planning trial of the original problem. It then evaluates the two plans and outputs the better one to guide the LLM-Agent to complete the task. We run Thought Propagation for 6 trials to maintain consistency with Reflexion. The prompt exemplars are shown in the Appendix. **Variant Models**. We introduce two strategies for plan evaluation in LLM Aggregate: 1. Self-Evaluation (SE): the LLM evaluates the two plans with a zero-shot prompt and outputs the better one; 2.
Simulation (SM): the LLM-Agent executes new planning trials in the task environment using the two plans and outputs the better one. We additionally add a Self-Reflection (SR) module that lets the LLM-Agent reflect on its own failures, just like Reflexion. These implementations lead to four variant models of Thought Propagation: 1) TP-SR-SE: Thought Propagation with Self-Reflection and Self-Evaluation; 2) TP-SE: Thought Propagation with Self-Evaluation; 3) TP-SR-SM: Thought Propagation with Self-Reflection and Simulation; 4) TP-SM: Thought Propagation with Simulation.

\begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline Method & Pick & Clean & Heat & Cool & Look & Pick 2 & All \\ \hline BUTLER & 33 & 26 & 70 & 76 & 17 & 12 & 22 \\ BUTLER\_G & 46 & 39 & 74 & **100** & 22 & 24 & 37 \\ Act (best of 6) & 88 & 42 & 74 & 67 & 72 & 41 & 45 \\ ReAct (avg) & 65 & 39 & 83 & 76 & 55 & 24 & 57 \\ ReAct (best of 6) & 92 & 58 & **96** & 86 & 78 & 41 & 71 \\ Reflexion & **100.00** & 74.19 & 73.91 & 85.71 & 66.67 & 70.59 & 79.1 \\ \hline **TP-SR-SE** & **100.00** & 77.42 & 65.22 & 95.24 & **94.44** & 82.35 & 85.82 \\ **TP-SE** & 91.67 & 83.87 & 69.56 & **100.00** & 83.3 & 70.59 & 83.68 \\ **TP-SR-SM** & 95.83 & **96.77** & 78.26 & **100.00** & **94.44** & **88.24** & 92.54 \\ **TP-SM** & **100.00** & 93.55 & 86.96 & **100.00** & **94.44** & **88.24** & **94.78** \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of different variant models of Thought Propagation (TP) and baselines on LLM-Agent planning in the ALFWorld dataset (Shridhar et al., 2021). We reproduce the result of Reflexion (Shinn et al., 2023). Other baseline results are quoted from ReAct (Yao et al., 2022).

Figure 6: The success rate of task completion in different trials.

**Results**. Table 3 shows the strong performance of Thought Propagation over the learnable parameterized method and the other LLM-Agent baselines.
Thought Propagation achieves large performance gains even without a memory module that stores its previous failures (TP-SE/TP-SM). This shows the benefit of reflecting on successful planning from similar completed tasks. Moreover, Thought Propagation also works well with an additional memory module that stores previous failures (TP-SR-SE/TP-SR-SM). Figure 6 shows that the different variant models of Thought Propagation achieve consistent performance improvements by iteratively reflecting on successful planning in similar tasks. In the Appendix, we show how TP formulates a constructive plan from solving the **"examine the alarmclock with the desklamp"** task to successfully complete the **"examine the book with the desklamp"** task, where ReAct and Reflexion are trapped in a loop.

## 6 Conclusions

Existing prompting approaches for LLM reasoning cannot leverage the insights from solving similar problems and suffer from accumulated errors in multi-step reasoning, because they reason from scratch. To address these issues, we propose Thought Propagation (TP), which explores analogous problems to yield a refined solution or a knowledge-intensive plan in an analogical fashion, facilitating new problem-solving. TP is compatible with existing prompting methods, offering plug-and-play generalization and enhancement across a wide range of tasks such as Shortest-path Reasoning, Creative Writing, and LLM-Agent Planning. Future work will further enhance the performance and efficiency of the proposed framework.
2303.12788
Open-source Frame Semantic Parsing
While the state-of-the-art for frame semantic parsing has progressed dramatically in recent years, it is still difficult for end-users to apply state-of-the-art models in practice. To address this, we present Frame Semantic Transformer, an open-source Python library which achieves near state-of-the-art performance on FrameNet 1.7, while focusing on ease-of-use. We use a T5 model fine-tuned on Propbank and FrameNet exemplars as a base, and improve performance by using FrameNet lexical units to provide hints to T5 at inference time. We enhance robustness to real-world data by using textual data augmentations during training.
David Chanin
2023-03-22T17:57:47Z
http://arxiv.org/abs/2303.12788v1
# Open-source Frame Semantic Parsing

###### Abstract

While the state-of-the-art for frame semantic parsing has progressed dramatically in recent years, it is still difficult for end-users to apply state-of-the-art models in practice. To address this, we present Frame Semantic Transformer, an open-source Python library which achieves near state-of-the-art performance on FrameNet 1.7, while focusing on ease-of-use. We use a T5 model fine-tuned on PropBank and FrameNet exemplars as a base, and improve performance by using FrameNet lexical units to provide hints to T5 at inference time. We enhance robustness to real-world data by using textual data augmentations during training.

## 1 Introduction

Frame semantic parsing (Gildea and Jurafsky, 2002) is a natural language understanding (NLU) task involving finding structured semantic frames and their arguments in natural language text, as formalized by the FrameNet project (Baker et al., 1998). Frame semantics has proved useful in understanding user intent from text, finding use in modern voice assistants (Chen et al., 2019), dialog systems (Chen et al., 2013), and even text analysis (Zhao et al., 2023). A semantic frame in FrameNet describes an event, relation, or situation and its participants. When a frame occurs in a sentence, there is typically a "trigger" word in the sentence which is said to evoke the frame. In addition, a frame contains a list of arguments known as frame elements, which describe the semantic roles that pertain to the frame. A sample sentence parsed for a frame and frame elements is shown in Figure 1. FrameNet provides a list of lexical units (LUs) for each frame, which are word senses which may evoke the frame when they occur in a sentence. For instance, the frame "Attack" has lexical units "ambush.n", "ambush.v", "assault.v", "attack.v", "attack.n", "bomb.v", and many others.
These lexical units are not exhaustive, however: a frame trigger need not be one of the lexical units listed in the frame, but the lexical units provide a strong hint that the frame may be present. In this paper we treat frame semantic parsing as a sequence-to-sequence text generation task, and fine-tune a T5 transformer model (Raffel et al., 2020) as the base model. We increase performance by pretraining on related datasets, providing in-context prompt hints to T5 based on FrameNet data, and using textual data augmentations (Ma, 2019) to increase training data. More details on our implementation are given in Section 2.

Figure 1: The sentence "Jaclyn gave the box to Mark" annotated with the frame trigger and frame elements for the "Giving" frame.

We evaluate the performance of Frame Semantic Transformer using the same dataset splits as Open Sesame (Swayamdipta et al., 2017), the previous state-of-the-art open-source parser. Frame Semantic Transformer exceeds the performance of Open Sesame, and achieves near state-of-the-art performance compared with modern frame semantic parsers that do not publish models (Kalyanpur et al., 2020; Zheng et al., 2022). The performance of Frame Semantic Transformer does not, however, come at the cost of usability. The library can be installed via PyPI with the following command:

pip install frame-semantic-transformer

Performing frame semantic parsing on a sentence can be achieved with a few lines of code. We leverage the Huggingface (Wolf et al., 2020) model hub and NLTK (Bird et al., 2009) corpora so that all required models and datasets are automatically downloaded when Frame Semantic Transformer is first run, requiring no further action by the user aside from installing the library from PyPI. Basic usage is shown in Figure 2. Pretrained models are provided based on T5-base and T5-small, with T5-base being the default. The code for Frame Semantic Transformer is available on Github 1.
Footnote 1: [https://github.com/chanind/frame-semantic-transformer](https://github.com/chanind/frame-semantic-transformer)

## 2 Method

Typically, frame semantic parsing approaches treat the task as a set of 3 subtasks which happen in serial (Kalyanpur et al., 2020). First, in the trigger identification subtask, all trigger locations where a frame occurs are identified in the text. Second, in the frame classification subtask, each identified trigger location is classified with a FrameNet frame. Finally, in the arguments extraction subtask, frame elements and their arguments are identified in the text. We treat each of the 3 subtasks as sequence-to-sequence tasks performed in series by a fine-tuned T5 model. Each of these tasks follows the format "<task name> <task-specific hints> : <text>".

### Trigger identification

Given a sentence, the trigger identification task identifies locations in the sentence text which could be frame triggers. This task is conceptually the simplest of the three: it has no task-specific hints, and the goal of the task is to insert markers in the text to indicate frame triggers. For this task, we use the asterisk character * to indicate a frame trigger. This is shown in Figure 3.

### Frame classification

For each trigger identified in the trigger identification step, a frame classification task is created to classify the frame that the trigger refers to. To make this task easier for the model, we use the LUs from each frame to build a list of possible frames the trigger could refer to. We normalize trigger words and frame LUs using a similar process. First, we lowercase the word, then stem and lemmatize it using multiple stemmers and lemmatizers. Each stemmer and lemmatizer may treat different English words slightly differently, so multiple are used to increase the chance that the normalized trigger word will match a normalized LU from FrameNet.
Specifically, we use an English Snowball stemmer (Porter, 1980), a Lancaster stemmer (Paice, 1990), a Porter stemmer (Porter, 1980), and a lemmatizer based on WordNet (Miller, 1995), all from NLTK, to generate a set of up to 4 possible normalized versions of the trigger word. For LUs, we also remove the part of speech (POS) tag. T5 is a powerful transformer model and likely does not need to be provided with POS info, although this is something that could be explored in future work.

Figure 3: Trigger identification input and expected output for the text "It was no use trying the lift.". Trigger locations are indicated by * in the output.

Figure 2: Performing frame semantic parsing requires only a few lines of code using Frame Semantic Transformer. All needed pretrained models and datasets are downloaded automatically.

In addition, for the trigger word, we also generate bigrams from the trigger and the words on either side of it, and normalize the bigrams in the same way. Some LUs contain multiple words, so generating bigrams increases the chance that, after this normalization process, the matching frame is found and can be added as a hint. For instance, the trigger word "trying" from Figure 3 has the bigrams "use_trying" and "trying_to", and the monogram "trying". After normalization, these become the lookup set: { us_tri, us_try, us_trying, use_tri, use_try, use_trying, tri, try, trying, tri_to, try_to, trying_to }. This lookup set overlaps with the normalized LUs for the following frames: { Attempt, Attempt_means, Operational_testing, Tasting, Trial, Try_defendant, Trying_out }. Finally, these overlapping frames are provided as part of the prompt for the frame classification task as shown below: input: "FRAME Attempt Attempt_means Operational_testing Tasting Trial Try_defendant Trying_out: It was no use *trying the lift."
output: "Attempt_means" Likewise, a frame classification task is generated for the trigger "lift" as well, as shown below: input: "FRAME Body_movement Building_subparts Cause_motion Cause_to_end Connecting_architecture Theft: It was no use trying the *lift." output: "Connecting_architecture"

### Argument extraction

After a frame is identified, the next task is to identify the frame elements and arguments for that frame in the text. An argument extraction task is generated for every frame classified. We include all available frame element names from FrameNet for the frame in question as part of the prompt input to make the argument extraction task easier for T5. The output is of the form "<element 1>="<arguments 1>" | <element 2>="<arguments 2>" |...". For instance, the argument extraction task for the Attempt_means frame from above is shown below: input: "ARGS Attempt_means | Agent Means Goal Circumstances Degree Depictive Domain Duration Frequency Manner Outcome Particular_iteration Place Purpose Time: It was no use *trying the lift." output: "Means="the lift"" Likewise, the argument extraction task for the Connecting_architecture frame is shown below: input: "ARGS Connecting_architecture | Part Connected_locations Creator Descriptor Direction Goal Material Orientation Source Whole: It was no use trying the *lift." output: "Part="the lift""

### Pretraining

The training data for FrameNet 1.7 is relatively small, with under 6,000 fully annotated sentences in total, so it is common to leverage FrameNet exemplar data as well to increase the amount of training data available. This exemplar data includes around 100,000 sentences. Exemplar sentences annotate only a single frame per sentence, so they are not suitable for generating trigger identification tasks, but they are still a rich source of data to improve performance on frame classification and argument extraction tasks.
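As a side note on the argument-extraction format above, the `<element>="<arguments>" | ...` outputs are straightforward to decode; this small parser is an illustrative sketch, not the library's actual decoding code:

```python
import re

# Parse an argument-extraction output like: Means="the lift" | Agent="he"
# into a {element: arguments} dict. Illustrative only; the real decoder
# in the library may handle edge cases differently.

def parse_args_output(output):
    return dict(re.findall(r'(\w+)="([^"]*)"', output))

print(parse_args_output('Means="the lift" | Agent="he"'))
# {'Means': 'the lift', 'Agent': 'he'}
```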
The distribution of exemplar sentences is different from the distribution of training data for FrameNet 1.7 (Kshirsagar et al., 2015), so rather than train on exemplar data directly, we instead use it for pretraining. Another rich source of additional training data is PropBank (Kingsbury and Palmer, 2002). PropBank is a frame parsing dataset similar to FrameNet, although PropBank tends to focus more on verbs than FrameNet and has simpler arguments. Still, the tasks are similar enough that pretraining on PropBank can help the model score higher on FrameNet 1.7. Specifically, we use the PropBank training data from OntoNotes (Weischedel et al., 2013) and the English Web Treebank (Bies et al., 2012). During training, we begin with a pretrained T5 model from Huggingface (Wolf et al., 2020), and then go through two additional iterations of pretraining: first we pretrain on PropBank data, then on FrameNet 1.7 exemplars, before finally training on the FrameNet 1.7 training set.

### Data augmentation

The FrameNet 1.7 training data is well formatted and grammatically correct, but in reality a lot of the text that needs to be semantically parsed is not well formatted and may contain errors and typos. To help make our model more robust to real-world data, we also use data augmentation to expose the model to misspellings, synonyms, and other differently formatted sentences. The textual augmentations used are the following, leveraging the nlpaug Python library (Ma, 2019):

* Synonyms: swaps out a word for a synonym from WordNet (Miller, 1995).
* Quotations: replaces LaTeX-style quotes with standard double quotes and vice-versa.
* Random misspelling: replaces characters in words with different characters at random.
* Keyboard misspelling: replaces characters with likely typos based on key locations on keyboards.
* Uppercase and lowercase: fully uppercases or lowercases the sentence.
* Delete punctuation: randomly deletes punctuation characters in the sentence.
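The index bookkeeping these augmentations require can be sketched for the punctuation-deletion case; this simplified stand-in for the nlpaug pipeline deletes all punctuation rather than a random subset:

```python
import string

# Simplified "delete punctuation" augmentation that also remaps a
# character index (e.g. a trigger location) into the augmented text.
# The nlpaug-based pipeline must do equivalent bookkeeping for every
# augmentation so trigger and frame element indices stay valid.

def delete_punct_with_index(text, index):
    out, new_index = [], index
    for i, ch in enumerate(text):
        if ch in string.punctuation:
            if i < index:
                new_index -= 1  # deletion before the trigger shifts it left
            continue
        out.append(ch)
    return "".join(out), new_index

aug, idx = delete_punct_with_index("It was no use, trying the lift.", 15)
print(aug[idx:idx + 6])  # trying
```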
When augmenting text during training, we make sure to adjust the indices of triggers and frame elements to match their new locations after the augmentation is applied.

### Task balancing

There is a mismatch between the 3 subtasks in terms of how much training data each task has. Frame classification and argument extraction have multiple examples per training sentence, since a task is generated for every frame in a sentence. Furthermore, these tasks also benefit from pretraining with FrameNet exemplar data. Trigger identification, however, has only 1 training example per sentence and cannot learn from exemplar data, so there is a large mismatch between the number of trigger identification samples available and the number of frame classification and argument extraction samples. To help address this, we sample trigger identification tasks at a 3x higher rate than frame classification and argument extraction tasks during training, to help ensure that trigger identification performance does not trail behind that of the other tasks per training epoch. We also increase the data augmentation rate for trigger identification tasks to help increase the number of training samples available.

## 3 Evaluation

FrameNet 1.7 does not include an official train / test / dev split, so we follow the split and evaluation used by Open Sesame (Swayamdipta et al., 2017), as this is the most popular open-source frame semantic parser on Github and was also the previous state-of-the-art. Other parsers also use the same split for this reason (Kalyanpur et al., 2020; Zheng et al., 2022). We calculate the F1 score for each of the subtasks against the dev and test sets from Open Sesame. For trigger identification, each trigger location that is identified correctly is considered a true positive, each location that is missed is a false negative, and each location that is incorrectly marked is a false positive.
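The trigger identification counting just described reduces to set operations over predicted and gold trigger locations; a minimal sketch:

```python
# F1 for trigger identification as described: predicted and gold trigger
# locations are compared as sets of offsets.

def trigger_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)   # correctly identified locations
    fp = len(predicted - gold)   # incorrectly marked locations
    fn = len(gold - predicted)   # missed locations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

print(trigger_f1({3, 17, 29}, {3, 17, 41}))  # 2/3
```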
For frame classification, an incorrectly classified frame is considered both a false positive and a false negative. For the argument extraction task, each frame element that is correctly identified and labeled is a true positive. If a frame element is missed, it is a false negative. If a frame element is marked incorrectly, it is a false positive. If a frame element is labeled, but the element is incorrectly classified or the arguments are not labeled entirely correctly, this is considered both a false positive and a false negative. We also include an ablation study showing the effects of pretraining and data augmentation on model performance in Figure 5. Data augmentation actually slightly hurts performance in argument extraction, but we still think it is worthwhile to give the model more robustness to the messy examples that may appear in real-world data. Lack of pretraining appears to slightly harm performance on all tasks. We did not do a statistical significance test on this data.

## 4 Related work

Recent work on frame semantic parsing has focused on incorporating more information from FrameNet into the parsing process. Zheng et al. (2022) encode the frame relation graph into an embedding space during inference to improve performance. Su et al. (2021) encode the full text of frame and element descriptions to aid in classification. Our work also follows in this vein by using lexical unit data to provide hints to our model during frame classification. However, neither Zheng et al. (2022) nor Su et al. (2021) provide pretrained, open-source models. Most similar to our work, and largely a point of inspiration, is Kalyanpur et al. (2020). In this work, a T5 model (Raffel et al., 2020) is fine-tuned on the frame classification and argument extraction tasks. In a variant of their work, the T5 decoder is replaced with a classification head for frame classification.
However, this work does not deal with trigger identification, and does not use lexical unit hints during frame classification. Furthermore, no code or models are open-sourced as part of this work, making it difficult for end-users to easily make use of the model. Previous open-source frame parsers include Open Sesame (Swayamdipta et al., 2017) and SEMAFOR (Das et al., 2010). However, both of these projects predate the rise of the transformer architecture, and their performance lags behind transformer-based solutions, especially in argument extraction. ## 5 Conclusion Frame Semantic Transformer approaches or matches state-of-the-art performance on frame semantic parsing tasks while also being easy for end-users to work with. We improve performance by pretraining both on FrameNet exemplars and PropBank data. We also incorporate frame knowledge from FrameNet via lexical units and available frame elements and pass that knowledge to T5 in-context as part of task prompts. In addition, we add NLP data augmentations to help the model generalize to real-world data, which will likely be formatted differently than the FrameNet 1.7 training set. At present, Frame Semantic Transformer only provides pretrained models for English FrameNet, but we hope to support other languages and PropBank in the future as well.
2306.00552
Unleash the Potential of 3D Point Cloud Modeling with A Calibrated Local Geometry-driven Distance Metric
Quantifying the dissimilarity between two unstructured 3D point clouds is a challenging task, with existing metrics often relying on measuring the distance between corresponding points that can be either inefficient or ineffective. In this paper, we propose a novel distance metric called Calibrated Local Geometry Distance (CLGD), which computes the difference between the underlying 3D surfaces calibrated and induced by a set of reference points. By associating each reference point with two given point clouds through computing its directional distances to them, the difference in directional distances of an identical reference point characterizes the geometric difference between a typical local region of the two point clouds. Finally, CLGD is obtained by averaging the directional distance differences of all reference points. We evaluate CLGD on various optimization and unsupervised learning-based tasks, including shape reconstruction, rigid registration, scene flow estimation, and feature representation. Extensive experiments show that CLGD achieves significantly higher accuracy under all tasks in a memory and computationally efficient manner, compared with existing metrics. As a generic metric, CLGD has the potential to advance 3D point cloud modeling. The source code is publicly available at https://github.com/rsy6318/CLGD.
Siyu Ren, Junhui Hou
2023-06-01T11:16:20Z
http://arxiv.org/abs/2306.00552v1
Unleash the Potential of 3D Point Cloud Modeling with A Calibrated Local Geometry-driven Distance Metric ###### Abstract Quantifying the dissimilarity between two unstructured 3D point clouds is a challenging task, with existing metrics often relying on measuring the distance between corresponding points that can be either inefficient or ineffective. In this paper, we propose a novel distance metric called Calibrated Local Geometry Distance (CLGD), which computes the difference between the underlying 3D surfaces calibrated and induced by a set of reference points. By associating each reference point with two given point clouds through computing its directional distances to them, the difference in directional distances of an identical reference point characterizes the geometric difference between a typical local region of the two point clouds. Finally, CLGD is obtained by averaging the directional distance differences of all reference points. We evaluate CLGD on various optimization and unsupervised learning-based tasks, including shape reconstruction, rigid registration, scene flow estimation, and feature representation. Extensive experiments show that CLGD achieves significantly higher accuracy under all tasks in a memory and computationally _efficient_ manner, compared with existing metrics. As a generic metric, CLGD has the potential to advance 3D point cloud modeling. The source code is publicly available at [https://github.com/rsy6318/CLGD](https://github.com/rsy6318/CLGD). ## 1 Introduction 3D point cloud data, which is a set of points defined by 3D coordinates to represent the geometric shape of an object or a scene, has been used in various fields, such as computer vision, 3D modeling, and robotics. Measuring the difference between 3D point clouds is critical in many tasks, e.g., reconstruction, rigid registration, etc. 
Different from 2D images, where pixel values are _structured_ with regular 2D coordinates, allowing us to directly compute the difference between two images pixel-by-pixel, 3D point clouds are _unstructured_, i.e., there is no point-wise correspondence naturally available between two point clouds, posing a great challenge. The two widely used distance metrics, namely Earth Mover's Distance (EMD) [21] and Chamfer Distance (CD) [2] illustrated in Figs. 1(a) and 1(b), first build point-wise correspondence between two point clouds and then compute the distance between corresponding points. However, EMD is both memory- and time-consuming as it involves solving a linear programming problem for the optimal bijection. Figure 1: Visual illustration of different distance metrics for 3D point cloud data. The two colored point sets are the point clouds under evaluation, the curves are their underlying surfaces, and the lines with arrows indicate the correspondence. In contrast to existing metrics measuring the difference between corresponding points, our metric computes the difference between the surfaces underlying 3D point clouds. For each point in one point cloud, CD seeks its nearest point in the other point cloud to establish the correspondence, and it could easily reach a local minimum. Although some improved distance metrics [25; 17; 8; 22] have been proposed, they are still either inefficient or ineffective. Existing distance metrics for 3D point cloud data generally concentrate on the point cloud itself, aligning points to measure the point-wise difference. However, these metrics overlook the fact that different point clouds obtained by different sampling could represent an identical 3D surface. In this paper, we propose an efficient yet effective distance metric named Calibrated Local Geometry Distance (CLGD). _Unlike_ previous metrics, CLGD computes the difference between the underlying 3D surfaces of two point clouds, as depicted in Fig. 1(c). 
Specifically, we first sample a set of 3D points called reference points, which we use to induce and calibrate the local geometry of the surfaces underlying point clouds, i.e., computing the _directional distances_ of each reference point to the two point clouds that approximately represent the underlying surfaces in implicit fields. Finally, we define CLGD as the average of the directional distance differences of all reference points. We conduct extensive experiments on various tasks, including shape reconstruction, rigid registration, scene flow estimation, and feature representation, demonstrating its significant superiority over existing metrics. In summary, the main contributions of this paper are: 1. an efficient, effective, and generic distance metric for 3D point cloud data; 2. state-of-the-art benchmarks of rigid registration and scene flow estimation; and 3. potentially advancing the field of 3D point cloud processing and analysis. ## 2 Related Work Distance Metrics for 3D Point Clouds.Most of the existing distance metrics for 3D point clouds concentrate on the points in the point cloud. That is, they calculate the distance value based on the point-to-point distances from different point clouds, such as CD [2] and EMD [21]. Specifically, EMD builds a global bi-directional mapping between the source and target point clouds. Then, the sum or mean of the distances between corresponding points is regarded as the distance between the point clouds. However, the computation of bijection is too expensive, especially when the number of points is large. Differently, CD builds a local mapping between point clouds by finding the nearest point in the other point cloud, making it more efficient than EMD. However, such local correspondence between point clouds may result in local minima or sub-optimal results. Hausdorff Distance (HD) [12] is modified from CD but focuses more on the outliers. 
Thus, it struggles to handle the details of point clouds and usually serves as an evaluation metric. Considering the distribution of point clouds, Wu _et al._[25] proposed density-aware CD (DCD) by introducing density information as weights into CD, achieving a higher tolerance to the outliers. Nguyen _et al_. [17] proposed Sliced Wasserstein Distance (SWD) to measure the distance between point clouds, making it more efficient and effective than EMD and CD in the shape representation task. PointNetLK [1] and FMR [11] convert point clouds to feature vectors through PointNet [18] and utilize the distance of features to measure the difference between point clouds. However, the global feature cannot represent the details of the point clouds, and such a kind of distance metric relies heavily on the encoder. The above-mentioned distance metrics concentrate on points and ignore the geometric nature of a point cloud. In reality, the point clouds are sampled from the surface, and differently sampled point clouds could represent the same surface. Therefore, we should measure the difference between the underlying surfaces as the distance of the point clouds. ARL [8] randomly samples lines and calculates their intersections on the two point clouds' underlying surfaces approximately. It then calculates the difference between each line's intersections on the two point clouds to measure the dissimilarity of the two point clouds. However, the calculation of the intersection is time-consuming, and the randomness of the lines could also make the measurement unstable. To leverage the underlying surfaces of point clouds, DPDist [22] trains a network to regress the point-to-plane distance between the two point clouds. However, the trained network's accuracy in regressing the point-to-plane distance would decrease if the distribution of point cloud changes, limiting its generalization. 
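For concreteness, the nearest-neighbor matching that CD performs can be written in a few lines of NumPy; this is a generic symmetric formulation given for illustration, not the exact variant used by any one of the works cited above.

```python
import numpy as np

def chamfer_distance(P1, P2):
    """Symmetric Chamfer Distance between point clouds P1 (N1,3) and P2 (N2,3).

    For each point, find its nearest neighbor in the other cloud and
    average the squared distances in both directions.
    """
    # Pairwise squared distances, shape (N1, N2).
    d2 = ((P1[:, None, :] - P2[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because each point is matched only to its local nearest neighbor, the loss surface of this quantity can exhibit the local minima discussed above.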
Distance Metric-driven 3D Point Cloud Processing.A distance metric is necessary for various 3D point cloud data processing and analysis tasks. One of the most common examples is the rigid registration of point clouds. The traditional registration methods, such as ICP [3] and its variants [5; 30; 38], utilize the distance metrics between point clouds as the objective function to optimize. Some recent learning-based registration methods, such as DCP [24], FMR [11], and RPM-Net [32], could become unsupervised with the distance metrics as the alignment item in their loss functions during training. Besides, recent learning-based methods for scene flow estimation, such as PointPWC-Net [26], NSFP [15], and SCOOP [14], adopt distance metrics as the alignment item in the loss functions to train the network without using ground-truth scene flow as supervision, and they have achieved remarkable accuracy. The distance metrics are also critical in some point cloud generation tasks, e.g., point cloud reconstruction [31; 9], upsampling [34; 33; 19; 20], completion [36; 35; 37; 28; 29], etc., where the difference between the generated and ground-truth point clouds is calculated as the main loss to train the networks. ## 3 Proposed Method ### Problem Statement and Overview Given any two unstructured 3D point clouds \(\mathbf{P}_{1}\in\mathbb{R}^{N_{1}\times 3}\) and \(\mathbf{P}_{2}\in\mathbb{R}^{N_{2}\times 3}\) with \(N_{1}\) and \(N_{2}\) points respectively, we aim to construct a _differentiable_ distance metric, which can quantify the difference between them effectively and efficiently to drive downstream tasks. As mentioned in Section 1, the problem is fundamentally challenging due to the lack of correspondence information between \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). 
Existing metrics generally focus on establishing the point-wise correspondence between \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) to compute the point-to-point difference, making them either ineffective or inefficient. In contrast to existing metrics, we address this problem by measuring the difference between the underlying surfaces of \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). Generally, with a set of reference points generated, we associate each reference point with \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) to model their local surface geometry. Then we calculate the difference in local surface geometry reference point by reference point and average the differences of all reference points as the final distance between \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). Such a process is memory-saving, computationally efficient, and effective. In what follows, we will introduce the technical details of the proposed distance metric. ### Generation of Reference Points We generate a set of 3D points \(\mathbf{Q}=\{\mathbf{q}_{m}\in\mathbb{R}^{3}\}_{m=1}^{M}\) named reference points, which will be used to _indirectly_ establish the correspondence between \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) and induce the local geometry of the surfaces underlying \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). Technically, after selecting either one of \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) (see Footnote 2), we add Gaussian noise to each point, where the standard deviation is \(T\) times the distance to its nearest point in the point cloud, and repeat the noise addition process \(R\) times randomly to generate reference points that are distributed _near_ the underlying surface. Additionally, to ensure the local surface geometry induced by \(\mathbf{Q}\) covers each point of \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\), we also include the non-selected point cloud into \(\mathbf{Q}\). See Table 7 for the ablative studies on this process. 
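A minimal sketch of the generation procedure just described, assuming the noise is applied independently per point and that each of the \(R\) repetitions perturbs the whole selected cloud (the released code may organize this differently):

```python
import numpy as np

def generate_reference_points(P_sel, P_other, R=10, T=3, rng=None):
    """Sample reference points near the surface underlying P_sel (N,3).

    Each point is perturbed by Gaussian noise whose std is T times the
    distance to its nearest neighbor in P_sel; the process is repeated R
    times, and the non-selected cloud P_other is appended so the induced
    local geometry covers every point of both clouds.
    """
    rng = np.random.default_rng(rng)
    # Pairwise squared distances within the selected cloud, self excluded.
    d2 = ((P_sel[:, None, :] - P_sel[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    sigma = T * np.sqrt(d2.min(axis=1))        # (N,) per-point noise scale
    noisy = [P_sel + rng.normal(size=P_sel.shape) * sigma[:, None]
             for _ in range(R)]
    return np.concatenate(noisy + [P_other], axis=0)
```

With `R = 10` and clouds of 4 and 5 points, this yields a reference set of 45 points distributed near the selected cloud's underlying surface plus the other cloud's points.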
Footnote 2: For point cloud reconstruction-based tasks, where one point cloud is used to supervise a network to generate the other point cloud, the one used as supervision will be selected to generate \(\mathbf{Q}\) because the reconstructed one is messy, lacking sufficient geometric meaning. ### Calibrated Local Surface Geometry Let \(\mathbf{P}\in\{\mathbf{P}_{1},\mathbf{P}_{2}\}\), \(\mathbf{q}\in\mathbf{Q}\), and \(\Omega(\mathbf{q},\mathbf{P}):=\{\mathbf{p}_{k}\}_{k=1}^{K}\) be the set of \(\mathbf{q}\)'s \(K\)-NN (\(K\)-Nearest Neighbor) points in \(\mathbf{P}\). Note that we sort all points in \(\Omega(\mathbf{q},\mathbf{P})\) according to their distances to \(\mathbf{q}\), i.e., \(\|\mathbf{q}-\mathbf{p}_{1}\|_{2}\leq\|\mathbf{q}-\mathbf{p}_{2}\|_{2}\leq... \leq\|\mathbf{q}-\mathbf{p}_{K}\|_{2}\), where \(\|\cdot\|_{2}\) is the \(\ell_{2}\) norm of a vector. Let \(\mathcal{S}\) be the underlying surface of \(\mathbf{P}\), and \(\mathbf{p}^{\star}\in\mathcal{S}\) be the closest point to \(\mathbf{q}\). As illustrated in Fig. 2(a), we could simply approximate the distance from \(\mathbf{q}\) to \(\mathcal{S}\), i.e., the distance between \(\mathbf{q}\) and \(\mathbf{p}^{\star}\), through the weighted averaging of \(\{\|\mathbf{q}-\mathbf{p}_{k}\|_{2}\}_{k=1}^{K}\): \[f(\mathbf{q},\mathbf{P}):=\|\mathbf{q}-\mathbf{p}^{\star}\|_{2}\approx\frac{ \sum_{k=1}^{K}w(\mathbf{q},\mathbf{p}_{k})\cdot\|\mathbf{q}-\mathbf{p}_{k}\| _{2}}{\sum_{k=1}^{K}w(\mathbf{q},\mathbf{p}_{k})},\ \mathrm{where}\ w(\mathbf{q},\mathbf{p}_{k})=\frac{1}{\| \mathbf{q}-\mathbf{p}_{k}\|_{2}^{2}}. \tag{1}\] Figure 2: Illustration of the reference point-induced local surface geometry of a 3D point cloud. (a) \(f(\mathbf{q},\mathbf{P})\). (b) \(\mathbf{v}(\mathbf{q},\mathbf{P})\). Similarly, as shown in Fig. 
2(b), we could also approximate the vector from \(\mathbf{p}^{*}\) to \(\mathbf{q}\) with the weighted averaging of \(\{\mathbf{q}-\mathbf{p}_{k}\}_{k=1}^{K}\): \[\mathbf{v}(\mathbf{q},\mathbf{P}):=\mathbf{q}-\mathbf{p}^{*}\approx\frac{\sum_ {k=1}^{K}w(\mathbf{q},\mathbf{p}_{k})\cdot(\mathbf{q}-\mathbf{p}_{k})}{\sum_{k= 1}^{K}w(\mathbf{q},\mathbf{p}_{k})}. \tag{2}\] Then, we concatenate \(f(\mathbf{q},\mathbf{P})\) and \(\mathbf{v}(\mathbf{q},\mathbf{P})\) to form a 4D vector \(\mathbf{g}(\mathbf{q},\mathbf{P})=[f(\mathbf{q},\mathbf{P})||\mathbf{v}( \mathbf{q},\mathbf{P})]\in\mathbb{R}^{4}\) named _directional distance_, which characterizes the local geometry of the surface underlying \(\mathbf{P}\) induced by \(\mathbf{q}\) in implicit fields. ### Local Surface Geometry-driven Distance Metric Calibrated by the reference point \(\mathbf{q}_{m}\in\mathbf{Q}\), the difference between its directional distances to \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) can reflect the difference in their local surface geometry, i.e., the difference between \(\Omega(\mathbf{q}_{m},\ \mathbf{P}_{1})\) and \(\Omega(\mathbf{q}_{m},\ \mathbf{P}_{2})\). Specifically, we calculate the difference between the directional distances of \(\mathbf{q}_{m}\) as \[d(\mathbf{q}_{m},\mathbf{P}_{1},\mathbf{P}_{2})=\|\mathbf{g}(\mathbf{q}_{m}, \mathbf{P}_{1})-\mathbf{g}(\mathbf{q}_{m},\mathbf{P}_{2})\|_{1}, \tag{3}\] where \(\|\cdot\|_{1}\) computes the \(\ell_{1}\) norm of a vector. Note that \(\mathbf{g}(\mathbf{q}_{m},\mathbf{P}_{1})\) and \(\mathbf{g}(\mathbf{q}_{m},\mathbf{P}_{2})\) share _identical_ weights \(\{w\}_{k=1}^{K}\), i.e., the weights of the point cloud selected to generate reference points are also applied to the other point cloud. 
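Eqs. (1)–(3) translate almost directly into NumPy. For simplicity, this sketch recomputes the inverse-squared-distance weights per cloud, whereas the method shares the selected cloud's weights across both clouds as noted above; a small epsilon guards against a reference point coinciding with a cloud point.

```python
import numpy as np

def directional_distance(q, P, K=5):
    """Eqs. (1)-(2): the 4D directional distance g(q, P) = [f || v].

    f approximates the distance from q to the underlying surface, and v
    the vector from the closest surface point to q, both as inverse-
    squared-distance weighted averages over q's K nearest neighbors in P.
    """
    diff = q - P                                    # (N, 3)
    dist = np.linalg.norm(diff, axis=1)             # (N,)
    idx = np.argsort(dist)[:K]                      # K-NN indices
    w = 1.0 / (dist[idx] ** 2 + 1e-12)              # Eq. (1) weights
    f = (w * dist[idx]).sum() / w.sum()             # Eq. (1)
    v = (w[:, None] * diff[idx]).sum(0) / w.sum()   # Eq. (2)
    return np.concatenate(([f], v))                 # g(q, P) in R^4

def directional_difference(q, P1, P2, K=5):
    """Eq. (3): l1 difference of q's directional distances to P1 and P2."""
    return np.abs(directional_distance(q, P1, K)
                  - directional_distance(q, P2, K)).sum()
```

Averaging `directional_difference` over all reference points (with the confidence weights defined next) then yields the final metric.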
The proposed distance metric for 3D point clouds is finally defined as the weighted sum of \(d(\mathbf{q}_{m},\mathbf{P}_{1},\mathbf{P}_{2})\): \[\mathcal{D}_{\mathrm{CLGD}}(\mathbf{P}_{1},\mathbf{P}_{2})=\frac{1}{M}\sum_{ \mathbf{q}_{m}\in\mathbf{Q}}s(\mathbf{q}_{m})\cdot d(\mathbf{q}_{m},\mathbf{P }_{1},\mathbf{P}_{2}), \tag{4}\] where \(s(\mathbf{q}_{m})=\texttt{Exp}(-\beta\cdot d(\mathbf{q}_{m},\mathbf{P}_{1}, \mathbf{P}_{2}))\) is the confidence score of \(d(\mathbf{q}_{m},\ \mathbf{P}_{1},\ \mathbf{P}_{2})\) with \(\beta\geq 0\) being a hyperparameter. \(s(\mathbf{q}_{m})\) is introduced to cope with the case where \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) are partially overlapped. ## 4 Experiments To demonstrate the effectiveness and superiority of the proposed CLGD, we applied it to a wide range of downstream tasks, including shape reconstruction, rigid point cloud registration, scene flow estimation, and feature representation, and compared it with EMD [21], CD [2], and ARL [8]3. In all experiments, we set \(R=10\), \(T=3\), and \(K=5\). We set \(\beta=3\) in rigid registration due to the partially overlapping point clouds, and \(\beta=0\) in the other three tasks. We conducted all experiments on an NVIDIA RTX 3090 with Intel(R) Xeon(R) CPU. Footnote 3: ARL was primarily designed for rigid point cloud registration, and thus, we compared our CLGD with ARL only in the registration task. ### 3D Shape Reconstruction We consider a learning-based point cloud shape reconstruction task. Technically, we followed FoldingNet [31] to construct a reconstruction network, where regular 2D grids distributed on a square area \([-\delta,\ \delta]^{2}\) are fed into MLPs to regress 3D point clouds. The network was trained by minimizing the distance between the reconstructed point cloud and the given point cloud in an overfitting manner, i.e., each shape has its individual network parameters. 
Additionally, based on the mechanism of the reconstruction network and differential geometry, we could obtain the normal vectors of the reconstructed point cloud through the backpropagation of the network. Finally, we used SPSR [13] to recover the mesh from the resulting point cloud and its normal vectors. We refer the readers to the _Supplementary Material_ for more details. Implementation Details.We utilized three categories of the ShapeNet dataset [6], namely chair, sofa, and table, each containing 200 randomly selected shapes. For each shape, we normalized it within a unit cube and sampled 4096 points from its surface uniformly using PDS [4] to get the point cloud. As for the input regular 2D grids, we set \(\delta=0.3\) and the size to be \(64\times 64\). We used the ADAM optimizer to optimize the network for \(10^{4}\) iterations with a learning rate of \(10^{-3}\). Comparisons.To quantitatively compare different distance metrics, we employed CD, Hausdorff distance (HD), and the point-to-surface distance (P2F) to evaluate the accuracy of reconstructed point clouds. As for the reconstructed triangle meshes, we used Normal Consistency (NC) and F-Score with thresholds of 0.5% and 1%, denoted as F-0.5% and F-1%, as the evaluation metrics. Table 1 and Fig. 3 show the numerical and visual results, respectively, where both quantitative accuracy and visual quality of reconstructed shapes by the network trained with our CLGD are much better. Besides, due to the difficulty in establishing the optimal bijection between a relatively large number of points, i.e., 4096 points in our experiments, the network trained with EMD produces even worse shapes than that with CD. 
\begin{table} \begin{tabular}{l|l|c c c|c c c} \hline \hline \multirow{2}{*}{Shape} & \multirow{2}{*}{Loss} & \multicolumn{3}{c|}{Point Cloud} & \multicolumn{3}{c}{Triangle Mesh} \\ \cline{3-8} & & CD (\(\times 10^{-2}\)) \(\downarrow\) & HD (\(\times 10^{-2}\)) \(\downarrow\) & P2F (\(\times 10^{-3}\)) \(\downarrow\) & NC \(\uparrow\) & F-0.5\% \(\uparrow\) & F-1\% \(\uparrow\) \\ \hline \multirow{3}{*}{Chair} & EMD & 2.935 & 12.628 & 9.777 & 0.781 & 0.277 & 0.524 \\ & CD & 2.221 & 9.430 & 4.543 & 0.839 & 0.465 & 0.721 \\ & Ours & 1.898 & 6.787 & 2.591 & 0.908 & 0.709 & 0.913 \\ \hline \multirow{3}{*}{Sofa} & EMD & 2.534 & 7.355 & 7.433 & 0.879 & 0.459 & 0.717 \\ & CD & 1.972 & 5.323 & 3.392 & 0.920 & 0.668 & 0.887 \\ & Ours & 1.770 & 4.191 & 2.040 & 0.940 & 0.806 & 0.949 \\ \hline \multirow{3}{*}{Table} & EMD & 2.996 & 11.374 & 8.921 & 0.768 & 0.243 & 0.473 \\ & CD & 2.272 & 8.881 & 4.353 & 0.824 & 0.403 & 0.658 \\ & Ours & 1.974 & 6.302 & 2.480 & 0.900 & 0.643 & 0.873 \\ \hline \multirow{3}{*}{**Average**} & EMD & 2.821 & 10.452 & 8.720 & 0.809 & 0.326 & 0.571 \\ & CD & 2.155 & 7.878 & 4.096 & 0.861 & 0.521 & 0.755 \\ \cline{1-1} & Ours & **1.880** & **5.760** & **2.370** & **0.916** & **0.719** & **0.911** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparisons of reconstructed point clouds and surfaces under different loss functions. The best results are highlighted in **bold**. \(\downarrow\) (resp. \(\uparrow\)) indicates the smaller (resp. the larger), the better. Figure 4: Efficiency comparison under 3D shape reconstruction with various numbers of points. Figure 3: Visual comparisons of reconstructed shapes in the form of point clouds and surfaces under different distance metrics. From top to bottom: EMD, CD, Ours, and GT. Efficiency Analysis.We compared the running time in Fig. 
4(a), where it can be seen that as the number of points increases, the reconstruction driven by EMD requires much more time to optimize than that by CD and our CLGD. Fig. 4(b) compares the GPU memory consumption, showing that these three distance metrics are comparable when the number of points is relatively small, but when dealing with a large number of points, EMD requires more GPU memory. ### Rigid Registration Given source and target point clouds, denoted as \(\mathbf{P}_{\mathrm{src}}\in\mathbb{R}^{N_{\mathrm{src}}\times 3}\) and \(\mathbf{P}_{\mathrm{tgt}}\in\mathbb{R}^{N_{\mathrm{tgt}}\times 3}\), respectively, rigid registration is to seek a spatial transformation \([\mathbf{R},\mathbf{t}]\) to align \(\mathbf{P}_{\mathrm{src}}\) with \(\mathbf{P}_{\mathrm{tgt}}\), where \(\mathbf{R}\in\mathbb{R}^{3\times 3}\) is the rotation matrix and \(\mathbf{t}\in\mathbb{R}^{3}\) is the translation vector. The optimization-based registration directly solves the following problem: \[\{\hat{\mathbf{R}},\hat{\mathbf{t}}\}=\operatorname*{arg\,min}_{\mathbf{R},\mathbf{t}}\mathcal{D}\left(\mathbf{R}\mathbf{P}_{\mathrm{src}}+\mathbf{t},\mathbf{P}_{\mathrm{tgt}}\right), \tag{5}\] where \(\mathcal{D}(\cdot,\ \cdot)\) is the distance metric that can be EMD, CD, ARL, or our CLGD. We also consider unsupervised learning-based rigid registration. Specifically, following [8], we modify RPM-Net [32], a supervised learning-based registration method, to be unsupervised by using a distance metric to drive the learning of the network. See the _Supplementary Material_ for more details. We also selected two common rigid registration methods as baselines, i.e., ICP [3] and FGR [38]. Implementation Details.We used the Human dataset provided in [8], containing 5000 pairs of source and target point clouds for training and 500 pairs for testing. The source point clouds contain 1024 points, while the target ones contain 2048 points, i.e., they are partially overlapped. 
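As a point of reference for Eq. (5), the classic ICP baseline alternates nearest-neighbor matching with a closed-form Kabsch/SVD update; the following is a minimal NumPy sketch, not the Open3D implementation used in the experiments.

```python
import numpy as np

def icp(P_src, P_tgt, iters=50):
    """Minimal point-to-point ICP solving Eq. (5) with CD-style matching.

    Alternates (i) nearest-neighbor correspondence and (ii) the closed-form
    Kabsch/SVD solution for the best rigid transform on those matches.
    """
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        X = P_src @ R.T + t
        # Correspondence: nearest target point for each transformed source point.
        nn = ((X[:, None] - P_tgt[None]) ** 2).sum(-1).argmin(1)
        Y = P_tgt[nn]
        mx, my = X.mean(0), Y.mean(0)
        # Kabsch: best rotation mapping centered X onto centered Y.
        U, _, Vt = np.linalg.svd((X - mx).T @ (Y - my))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:          # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        # Compose the incremental transform with the current estimate.
        R, t = dR @ R, dR @ t + (my - dR @ mx)
    return R, t
```

Because the nearest-neighbor step is purely local, this scheme can get stuck when the initial misalignment is large, which is the failure mode of the point-based metrics discussed in the comparison.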
We used Open3D [39] to implement ICP and FGR. For the optimization-based methods, we utilized the Lie algebra to represent the transformation and optimized it with the ADAM optimizer for 1000 iterations with a learning rate of 0.02. For the unsupervised learning-based methods, we kept the same training settings as [8]. We refer the reader to _Supplemental Material_ for more details. Quantitative Comparison.Following previous registration work [7], we adopted _Rotation Error (RE)_ and _Translation Error (TE)_ as the evaluation metrics. As shown in Table 2 and Fig. 5, our distance metric outperforms the baseline methods in both optimization-based and unsupervised learning methods. ICP, FGR, as well as EMD and CD, can easily get stuck in locally optimal solutions since they struggle to properly handle the outliers in the non-overlapping regions. ARL [8] and our CLGD do not have such a disadvantage, but the randomly sampled lines in ARL decrease the registration accuracy. \begin{table} \begin{tabular}{l|l|c c} \hline \hline & Method & RE (\(^{\circ}\)) \(\downarrow\) & TE (m) \(\downarrow\) \\ \hline \multirow{4}{*}{Optimization-based} & ICP [3] & 1.276 & 0.068 \\ & FGR [38] & 19.684 & 0.384 \\ \cline{2-4} & EMD [21] & 7.237 & 0.642 \\ & CD [2] & 10.692 & 0.421 \\ & ARL [8] & 1.245 & 0.085 \\ \cline{2-4} & Ours & **1.040** & **0.040** \\ \hline \multirow{4}{*}{Unsupervised learning (RPM-Net [32])} & EMD [21] & 7.667 & 0.638 \\ & CD [2] & 2.197 & 0.287 \\ \cline{1-1} & ARL [8] & 1.090 & 0.075 \\ \cline{1-1} & Ours & **0.793** & **0.053** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparisons of rigid registration on the Human dataset [8]. Figure 5: Visual comparisons of rigid registration results. The blue and green points represent the source and target point clouds, respectively. Zoom in to see details. 
Efficiency Analysis.Table 3 lists the running time and GPU memory costs of the optimization-based registration driven by different metrics (see Footnote 4). Since the number of points is relatively small, EMD has comparable running time and GPU memory consumption to CD and our CLGD. In contrast, ARL [8] is much less efficient because of the calculation of the intersections. Footnote 4: Note that for the unsupervised learning-based methods, different distance metrics are only used to train the network, and the inference time of a trained network with different metrics is equal. ### Scene Flow Estimation This task aims to predict point-wise offsets \(\mathbf{F}\in\mathbb{R}^{N_{\mathrm{src}}\times 3}\), which can align source point cloud \(\mathbf{P}_{\mathrm{src}}\) to target point cloud \(\mathbf{P}_{\mathrm{tgt}}\). The optimization-based methods directly solve \[\hat{\mathbf{F}}=\operatorname*{arg\,min}_{\mathbf{F}}\mathcal{D}(\mathbf{P}_{\mathrm{src}}+\mathbf{F},\mathbf{P}_{\mathrm{tgt}})+\alpha\mathcal{R}_{\mathrm{smooth}}(\mathbf{F}), \tag{6}\] where \(\mathcal{R}_{\mathrm{smooth}}(\cdot)\) is the spatial smoothness regularization term and the hyperparameter \(\alpha>0\) balances the two items. Besides, we also evaluated the proposed CLGD by incorporating it into unsupervised learning-based frameworks, i.e., replacing the distance metrics of existing unsupervised learning-based frameworks with our CLGD to train the network. We adopted two SOTA unsupervised scene flow estimation methods named NSFP [15] and SCOOP [14]. Implementation Details.We used the Flyingthings3D dataset [16] preprocessed by [10], where \(N_{\mathrm{src}}=N_{\mathrm{tgt}}=8192\). For the optimization-based methods, we used \(\ell_{2}\)-smooth regularization, set \(\alpha=50\), and optimized the scene flow directly with the ADAM optimizer for 500 iterations with a learning rate of 0.01. For the unsupervised learning-based methods, we adopted the same training settings as their original papers. 
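The objective in Eq. (6) can be written down directly. This sketch uses symmetric CD as a stand-in for \(\mathcal{D}\) and an \(\ell_{2}\) penalty over each point's K nearest neighbors as one plausible reading of \(\mathcal{R}_{\mathrm{smooth}}\); the authors' exact regularizer may differ.

```python
import numpy as np

def chamfer(P1, P2):
    """Symmetric Chamfer Distance, used here as the alignment term D."""
    d2 = ((P1[:, None] - P2[None]) ** 2).sum(-1)
    return d2.min(1).mean() + d2.min(0).mean()

def flow_objective(F, P_src, P_tgt, alpha=50.0, K=4):
    """Eq. (6): alignment term plus l2 smoothness of the flow field F (N,3).

    The smoothness term penalizes the squared difference between each
    point's flow and the flows of its K nearest neighbors in P_src.
    """
    align = chamfer(P_src + F, P_tgt)
    d2 = ((P_src[:, None] - P_src[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :K]              # (N, K) neighbor indices
    smooth = ((F[:, None] - F[nn]) ** 2).sum(-1).mean()
    return align + alpha * smooth
```

In the optimization-based setting, `F` would be the decision variable updated by ADAM against this objective (with an automatic-differentiation framework supplying the gradients); a spatially constant flow that exactly maps the source onto the target drives both terms to zero.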
Comparison.Following [15, 14], we employed _End Point Error (EPE)_, _Flow Estimation Accuracy (Acc)_ with thresholds 0.05 and 0.1 (denoted as _Acc-0.05_ and _Acc-0.1_), and _Outliers_ as the evaluation metrics. From the results shown in Table 4 and Fig. 6, it can be seen that our CLGD drives much more accurate scene flows than EMD and CD under the optimization-based framework, and our CLGD further boosts the accuracy of SOTA unsupervised learning-based methods to a significant extent, demonstrating its superiority and the importance of the distance metric in 3D point cloud modeling. Efficiency Analysis.We also compared the running time and GPU memory cost of different metrics in the optimization-based framework. As shown in Table 5, EMD consumes much more time and GPU memory than CD and our CLGD. ### Feature Representation In this experiment, we trained an auto-encoder with different distance metrics used as the reconstruction loss to evaluate their abilities. Technically, an input point cloud is encoded into a global feature \begin{table} \begin{tabular}{l|l|c c c c} \hline \hline & Method & EPE3D (m) \(\downarrow\) & Acc-0.05 \(\uparrow\) & Acc-0.1 \(\uparrow\) & Outliers \(\downarrow\) \\ \hline \multirow{3}{*}{Optimization-based} & EMD [21] & 0.3681 & 0.1894 & 0.4226 & 0.7838 \\ & CD [2] & 0.1557 & 0.3489 & 0.6581 & 0.6799 \\ & Ours & **0.0843** & **0.5899** & **0.8722** & **0.4707** \\ \hline \multirow{4}{*}{Unsupervised Learning} & NSFP [15] & 0.0899 & 0.6095 & 0.8496 & 0.4472 \\ & NSFP [15] + Ours & **0.0662** & **0.7346** & **0.9107** & **0.3426** \\ \cline{2-6} & SCOOP [14] & 0.0839 & 0.5698 & 0.8516 & 0.4834 \\ \cline{2-6} & SCOOP [14] + Ours & **0.0742** & **0.6134** & **0.8858** & **0.4497** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative comparisons of scene flow estimation on the Flyingthings3D dataset. 
\begin{table} \begin{tabular}{l|c c} \hline \hline & Time & GPU Memory \\ \hline EMD [21] & 11 & 1.680 \\ CD [2] & 11 & 1.809 \\ ARL [8] & 281 & 7.202 \\ Ours & 13 & 1.813 \\ \hline \hline \end{tabular} \end{table} Table 3: Running time (s) and GPU memory (GB) costs of different distance metrics under the optimization-based rigid registration task. \begin{table} \begin{tabular}{l|c c} \hline \hline & Time & GPU Memory \\ \hline EMD & 1011 & 2.204 \\ CD & 7 & 1.704 \\ Ours & 14 & 1.725 \\ \hline \hline \end{tabular} \end{table} Table 5: Running time (s) and GPU memory (GB) costs of different distance metrics under the optimization-based scene flow estimation task. through the encoder, which is further decoded to reconstruct the input point cloud through a decoder. After training, we used the encoder to represent point clouds as features for classification by an SVM classifier. Implementation Details.We built an auto-encoder with MLPs and used the ShapeNet [6] and ModelNet40 [27] datasets for training and testing, respectively. We trained the network for 300 epochs using the ADAM optimizer with a learning rate of \(10^{-3}\). We refer the readers to the _Supplemental Material_ for more details. Comparison.As listed in Table 6, the higher classification accuracy by our CLGD demonstrates that the auto-encoder trained with our CLGD can learn more discriminative features. Besides, we also used T-SNE [23] to visualize the 2D embeddings of the global features of 1000 shapes from 10 categories in Fig. 7 to show the advantages of our method more intuitively. ### Ablation Study We carried out comprehensive ablation studies on shape reconstruction and rigid registration to verify the rationality of the design in our CLGD. Directional Distance.We removed \(f\) or \(\mathbf{v}\) from the directional distance \(\mathbf{g}\) when modeling the local surface geometry in Section 3.3. 
The accuracy of reconstructed 3D mesh shapes is listed in Table 7, where it can be seen that removing either one of \(f\) and \(\mathbf{v}\) decreases the reconstruction accuracy, especially \(\mathbf{v}\), because using \(f\) or \(\mathbf{v}\) alone can hardly characterize the underlying surfaces of point clouds. Size of \(\Omega(\mathbf{q},\mathbf{P})\). We changed the size of \(\Omega(\mathbf{q},\mathbf{P})\) by varying the value of \(K\) used in Eqs. (1) and (2). As shown in Table 7, a small or large value of \(K\) decreases the performance of CLGD: a small \(K\) covers only a tiny region on the point cloud, so \(\mathbf{g}(\mathbf{q},\mathbf{P})\) cannot represent the local surface geometry induced by \(\mathbf{q}\), while a large \(K\) includes too many points, making \(\mathbf{g}(\mathbf{q},\mathbf{P})\) overly smooth and causing it to ignore the details of the local surface geometry induced by \(\mathbf{q}\). \begin{table} \begin{tabular}{c|c c c} \hline Loss Function & EMD & CD & Ours \\ \hline Accuracy (\%) & 78.12 & 78.89 & 81.28 \\ \hline \end{tabular} \end{table} Table 6: Classification accuracy by SVM on ModelNet40 [27]. Figure 6: Visual comparisons of scene flow estimation. The blue and green points represent the source and target point clouds, respectively, and the red points are the warped source point cloud with estimated scene flows. Zoom in to see details. Figure 7: The T-SNE clustering visualization of the features obtained from the auto-encoder trained with different distance metrics.
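As a concrete reference for this ablation, here is a minimal sketch of the neighborhood selection that \(K\) controls, under the assumption (ours, since Eqs. (1) and (2) are not reproduced here) that \(\Omega(\mathbf{q},\mathbf{P})\) is the set of the \(K\) Euclidean nearest neighbors of \(\mathbf{q}\) in \(\mathbf{P}\):

```python
import numpy as np

def knn_region(q, P, K):
    """Indices of the K points of P (shape (N, 3)) nearest to query q (shape (3,))."""
    d2 = np.sum((P - q) ** 2, axis=1)      # squared Euclidean distances
    return np.argpartition(d2, K - 1)[:K]  # K smallest, unordered
```

A small `K` restricts \(\mathbf{g}(\mathbf{q},\mathbf{P})\) to a tiny patch, while a large `K` averages over a wide region and smooths out local detail, which matches the trend reported in Table 7.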
2305.06473
Securing Distributed SGD against Gradient Leakage Threats
This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD). First, we analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise. We analyze the inherent limitations of these approaches and their underlying impact on privacy guarantee, model accuracy, and attack resilience. Next, we present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool. Unlike conventional methods with the per-client federated noise injection and fixed noise parameter strategy, our approach keeps track of the trend of per-example gradient updates. It makes adaptive noise injection closely aligned throughout the federated model training. Finally, we provide an empirical privacy analysis on the privacy guarantee, model utility, and attack resilience of the proposed approach. Extensive evaluation using five benchmark datasets demonstrates that our gradient leakage resilient approach can outperform the state-of-the-art methods with competitive accuracy performance, strong differential privacy guarantee, and high resilience against gradient leakage attacks. The code associated with this paper can be found: https://github.com/git-disl/Fed-alphaCDP.
Wenqi Wei, Ling Liu, Jingya Zhou, Ka-Ho Chow, Yanzhao Wu
2023-05-10T21:39:27Z
http://arxiv.org/abs/2305.06473v1
# Securing Distributed SGD against Gradient Leakage Threats ###### Abstract This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD). _First_, we analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise. We analyze the inherent limitations of these approaches and their underlying impact on privacy guarantee, model accuracy, and attack resilience. _Next_, we present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool. Unlike conventional methods with the per-client federated noise injection and fixed noise parameter strategy, our approach keeps track of the trend of per-example gradient updates. It makes adaptive noise injection closely aligned throughout the federated model training. _Finally_, we provide an empirical privacy analysis on the privacy guarantee, model utility, and attack resilience of the proposed approach. Extensive evaluation using five benchmark datasets demonstrates that our gradient leakage resilient approach can outperform the state-of-the-art methods with competitive accuracy performance, strong differential privacy guarantee, and high resilience against gradient leakage attacks. The code associated with this paper can be found: [https://github.com/git-disl/Fed-alphaCDP](https://github.com/git-disl/Fed-alphaCDP). federated learning, distributed system, gradient leakage attack, privacy analysis ## 1 Introduction Federated learning is an emerging distributed machine learning paradigm with default privacy. It enables collaborative model training over a large corpus of decentralized data residing on a distributed population of autonomous clients, and all clients can keep their sensitive data private. 
Clients perform local training with Stochastic Gradient Descent (SGD) or SGD-alike learning and only share local model updates via encrypted communication with the federated server for server aggregation. To deal with potentially unpredictable client availability, only a small subset \(K_{t}\) of the total population of \(N\) clients (\(K_{t}\leq N\)) is required to contribute to each round \(t\) (\(1\leq t\leq T\)) in federated learning [1]. **Privacy Concerns in Federated Learning.** Despite the default privacy by keeping client data local, recent studies reveal that gradient leakage attacks in federated learning may compromise the privacy of client training data [2, 3, 4, 5, 6]. For example, an unauthorized read on the gradients generated from local SGD can lead to exploitation of per-example gradient updates using the reconstruction-based gradient leakage attack method, resulting in unauthorized disclosure of client private training data. Similarly, peeping at the shared gradient for global aggregation or global-level SGD at the server may give away sensitive information about the training data from a particular client. Consequently, three fundamental questions are raised in this reality of federated learning: (1) Can a model trained with privacy-enhanced federated learning be scalable for minimizing unauthorized disclosure of sensitive local training data on its clients? (2) What level of resilience can a privacy-preserving federated learning solution provide to protect the privacy of its clients against gradient leakage threats? (3) Can a federated learning model trained with differential privacy guarantee provide high model accuracy and strong gradient leakage resilience at the same time? In this paper, we attempt to answer these questions and argue that an effective differentially private federated learning algorithm can be robust and effective against gradient leakages with high accuracy performance. 
In this paper, we first characterize three types of gradient leakage attacks based on where gradients are leaked and the threat model. Then we study gradient compression and gradient perturbation techniques and analyze the inherent limitations of these approaches on gradient leakage resilience. Motivated by the analysis, we present a resilience-enhanced approach to federated learning, using differential privacy controlled noise as the tool. Our method, coined as Fed-\(\alpha\)CDP, extends the conventional per-client approaches by per-example gradient perturbation and augments them with adaptive parameter optimizations. Our approach makes three improvements over existing state-of-the-art solutions. _First_, we inject differential privacy controlled noise into per-example gradients during the client's local training such that the local SGD at the client is performed on the perturbed gradients. As a result, the local SGD is differentially private. The parameter updates resulting from the client's local training and shared with the federated server at each round are perturbed rather than in the original raw format, ensuring that the server-side aggregation is differentially private by composition theorems of differential privacy. _Second_, in addition to the location of the noise injection, we revisit the use of the fixed clipping bound that defines the sensitivity in existing differentially private deep learning to address the inherent problem of the resulting constant Gaussian noise variance throughout the multi-round local SGD and global aggregation in federated learning. Given that gradient magnitude demonstrates a decreasing trend as learning progresses, we alternatively define the sensitivity of a differentially private federated learning algorithm using the \(l_{2}\) max of gradients, which results in decreasing Gaussian noise variance during the local SGD at clients' local training. 
_Third_, we introduce the dynamic decaying noise scale \(\sigma\) to enable Gaussian noise variance to follow the trend of gradient updates so that noise variance and noise injection in local SGD are closely aligned. We perform an empirical privacy analysis on the proposed Fed-\(\alpha\)CDP with respect to privacy and utility measured by privacy spending, model accuracy, and gradient leakage resilience. With extensive experiments conducted using five benchmark datasets, we evaluate the effectiveness of Fed-\(\alpha\)CDP in comparison to the existing state-of-the-art approaches from three perspectives: (i) resilience against three types of gradient leakage attacks, (ii) accuracy performance under the same privacy budget, and (iii) privacy guarantee under the same target accuracy (utility). We show that the proposed approach outperforms other alternatives with the best overall performance in terms of gradient leakage resilience, differential privacy guarantee, and model accuracy. ## 2 Preliminary ### _Differential Privacy: Definitions_ **Definition 1**.: _Differential privacy [7]: Let \(\mathcal{D}\) be the domain of possible input data and \(\mathcal{R}\) be the range of all possible output. A randomized mechanism \(\mathcal{M}\): \(\mathcal{D}\rightarrow\mathcal{R}\) satisfies (\(\epsilon,\delta\))-differential privacy if for any two input sets \(A\subseteq\mathcal{D}\) and \(A^{\prime}\subseteq\mathcal{D}\), differing in only one entry: \(||A-A^{\prime}||_{0}=1\), Equation 1 holds with \(0\leq\delta<1\) and \(\epsilon>0\)._ \[\Pr(\mathcal{M}(A)\in\mathcal{R})\leq e^{\epsilon}\Pr(\mathcal{M}(A^{\prime}) \in\mathcal{R})+\delta. \tag{1}\] A smaller \(\epsilon\) would indicate a smaller difference between the output of \(\mathcal{M}(A)\) and the output of \(\mathcal{M}(A^{\prime})\). Since \(\delta\) is the upper bound probability of \(\mathcal{M}(A)\) for breaking \(\epsilon\)-differential privacy, a small \(\delta\) is desired. 
Following the literature [8, 9, 10], \(\delta\) is set to \(1e-5\) for the rest of the paper. **Definition 2**.: _Sensitivity [11]: Let \(\mathcal{D}\) be the domain of possible input and \(\mathcal{R}\) be the domain of possible output. The sensitivity of function \(f:\mathcal{D}\rightarrow\mathcal{R}\) is the maximum amount that the function value varies when a single input entry is changed._ \[S=\max_{A,A^{\prime}\subseteq\mathcal{D},||A-A^{\prime}||_{0}=1}||f(A)-f(A^{ \prime})||_{p}. \tag{2}\] By Equation 2, if one wants to produce a differentially private algorithm \(\mathcal{M}(f)\) by injecting random noise under a specific distribution (e.g., Laplace or Gaussian) to function \(f\), one needs to scale the noise to match (bound by) the maximum change defined as the sensitivity of function \(f\) with neighboring inputs. **Theorem 1**.: _Gaussian mechanism [7]: Let \(\mathcal{D}\) be the domain of possible input data and \(\mathcal{R}\) be the range of all possible output. With arbitrary privacy parameter \(\epsilon\in(0,1)\), applying Gaussian noise \(\mathcal{N}(0,\varsigma^{2})\) calibrated to a real-valued function: \(f:\mathcal{D}\rightarrow\mathcal{R}\) with noise variance \(\varsigma^{2}\) such that \(\mathcal{M}(A)=f(A)+\mathcal{N}(0,\varsigma^{2})\) is \((\epsilon,\delta)\)-differentially private if \(\varsigma^{2}>\frac{2\log(1.25/\delta)S^{2}}{\epsilon^{2}}\)._ **Lemma 1**.: _Let \(\varsigma^{2}\) in Gaussian mechanism be \(\sigma^{2}S^{2}\) where \(\sigma\) is the noise scale, and \(S\) is the \(l_{2}\) sensitivity. We have the noise scale \(\sigma\) satisfying \(\sigma^{2}>\frac{2\log(1.25/\delta)}{\epsilon^{2}}\)._ Lemma 1 is straightforward with Theorem 1. When \(\delta\) is given and fixed, the noise scale \(\sigma\) has an inverse correlation with privacy loss \(\epsilon\), implying that one can calculate the smallest \(\epsilon\) differential privacy guarantee of the function based on the noise scale. 
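Theorem 1 and Lemma 1 translate directly into code. The sketch below (function and variable names are ours) computes the noise scale at the bound of Lemma 1 and perturbs a value whose sensitivity is \(S\):

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta=1e-5, rng=None):
    """Add Gaussian noise N(0, (sigma * S)^2) calibrated per Theorem 1 / Lemma 1."""
    if rng is None:
        rng = np.random.default_rng()
    # Lemma 1: sigma^2 > 2 * log(1.25 / delta) / epsilon^2; take the boundary value
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy = value + rng.normal(0.0, sigma * sensitivity, size=np.shape(value))
    return noisy, sigma
```

For \(\epsilon=1\) and \(\delta=10^{-5}\) this gives \(\sigma\approx 4.84\), illustrating the inverse correlation between the noise scale \(\sigma\) and the privacy loss \(\epsilon\) noted above.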
**Privacy parameters in baseline DPSGD implementation.** By Definition 2, the sensitivity of a differentially private function is defined as the maximum amount that the function value varies when a single input entry is changed. Hence, the sensitivity of a differentially private function may vary for different input batches during different iterations of deep learning [12]. For federated learning, the sensitivity of the local model can be different for different local iterations at different clients and rounds. Based on Theorem 1 and Lemma 1, the Gaussian noise for a differentially private function is calibrated with noise variance defined by noise scale \(\sigma\) and sensitivity \(S\) of the function. The baseline DPSGD implementation [8], followed by most of the work [9, 13, 14, 15], suggests using a fixed clipping parameter \(C\) to estimate sensitivity \(S\), and a fixed noise scale \(\sigma\). ### _Properties of Differential Privacy_ Several important properties of differential privacy are essential in tracking privacy spending. They designate the privacy composition for a sequence of differential privacy mechanisms on the same dataset, for mechanisms that run in parallel over disjoint datasets, and for the post-processing of a differentially private mechanism. **Theorem 2**.: _Composition theorem [7]: Let \(\mathcal{M}_{i}:\mathcal{D}\rightarrow\mathcal{R}_{i}\) be a randomized function that is \((\epsilon_{i},\delta_{i})\)-differentially private. If \(\mathcal{M}\) is a sequence of consecutive invocations (executions) of \((\epsilon_{i},\delta_{i})\)-differentially private algorithm \(\mathcal{M}_{i}\), then \(\mathcal{M}\) is (\(\sum_{i}\epsilon_{i},\sum_{i}\delta_{i}\))-differentially private._ **Theorem 3**.: _Parallel Composition [16]: Let \(\mathcal{M}_{i}:\mathcal{D}_{i}\rightarrow\mathcal{R}_{i}\) be a randomized function that is \((\epsilon_{i},\delta_{i})\)-differentially private on a disjoint subset of the data. 
If \(\mathcal{M}\) consists of a set of \((\epsilon_{i},\delta_{i})\)-differentially private algorithms that are invoked and executed in parallel, then \(\mathcal{M}\) is \((\max_{i}\epsilon_{i},\max_{i}\delta_{i})\)-differentially private._ **Theorem 4**.: _Post Processing [7]: Let \(\mathcal{D}\) be the domain of possible input data and \(\mathcal{R}\) be the range of all possible output. Let \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{R}\) be a randomized function that is \((\epsilon,\delta)\)-differentially private. Let \(h:\mathcal{R}\rightarrow\mathcal{R}^{\prime}\) be an arbitrarily randomized or deterministic mapping. Then \(h\circ\mathcal{M}:\mathcal{D}\rightarrow\mathcal{R}^{\prime}\) is \((\epsilon,\delta)\)-differentially private._ Theorem 2 states that a randomized function \(\mathcal{M}\) that consists of a sequence of \(n\) differentially private mechanisms is differentially private. Theorem 3 states that one can view the parallel composition of a set of \(n\) differentially private mechanisms executed in parallel as the case in which each differentially private mechanism is applied to one of the \(n\) disjoint subsets of the input dataset concurrently. The overall privacy guarantee of this parallel composition will be defined by the maximum privacy loss among the \(n\) individual privacy losses. Theorem 4 states another important property of the differentially private algorithm: any post-processing applied to the result of an \((\epsilon,\delta)\)-differentially private mechanism is also \((\epsilon,\delta)\)-differentially private with the same privacy guarantee. The above differential privacy definitions and theorems form the essential foundation for developing differentially private federated learning approaches. Besides composition, the data sampling method also contributes to the privacy spending accumulation via privacy amplification [17]. ## 3 Gradient Leakage Threats ### _Threat Model_ Assumptions. 
_On the data side_, we assume that data at rest and data in network transit are encrypted and secure. The main attack surface is during data-in-use. We focus on gradient leakage induced threats to client training data in the presence of a malicious or semi-curious adversary. Hence, we focus on unauthorized access to gradient data during local training on a client and the global SGD for aggregation of local model updates performed at the federated server. Accordingly, we assume that the attackers cannot gain access to the training data prior to feeding the decrypted training data to the DNN algorithm during local training. _On the attacker side_, we assume that the adversary on a compromised client _only_ performs gradient leakage attacks, which seek unauthorized access to gradients (training parameter updates) of participating clients at three possible attack surfaces: (i) at the federated server, (ii) at the client after local training prior to sending encrypted gradients to the federated server, and (iii) during local training prior to local SGD performed at each client. The **type-0 attack** occurs at the federated server. We assume that the server is compromised and the adversary can access the shared per-client gradient prior to performing the global SGD at the federated server. However, the adversary cannot obtain the per-example gradient or the accumulated gradients during each iteration of the local training process prior to performing local SGD at each client. The **type-1 attack** occurs at the client, but the adversary on a client can only gain access to the per-client local gradient after the local training is completed and prior to the client encrypting and sending the local training gradient to the federated server at each global round. Finally, the **type-2 attack** corresponds to the third type of attack surface, where the adversary at the client may gain access to the per-example gradient prior to performing local SGD at each iteration during local training. 
This type-2 attack can result in a high attack success rate by reconstructing high-quality sensitive training data. ### _Attack Procedure_ **Figure 1** gives a sketch of the gradient leakage attack algorithm, which configures and executes the reconstruction attack in five steps: (1) the attacker configures the initialization seed (\(x^{0}_{rec}(t)\)), dummy data of the same resolution (or attribute structure for text) as the training data. [5] showed a significant impact of different initialization seeds on the attack success rate and attack cost (\(\#\) attack iterations to succeed). (2) The dummy attack seed is fed into the client's local model. (3) The gradient of the dummy attack seed is obtained by backpropagation. (4) The gradient loss is computed using a vector distance loss function, e.g., \(L_{2}\), between the gradient of the attack seed and the actual gradient from the client's local training. The choice of this reconstruction loss function is another tunable attack parameter. (5) The dummy attack seed is modified by the attack reconstruction learning algorithm. It aims to minimize the vector distance loss by a loss optimizer such that the gradients of the reconstructed seed \(x^{i}_{rec}(t)\) at round \(i\) will be closer to the actual gradient updates stolen from the client upon completion of local training (types 0 & 1) or during local training (type 2). This attack reconstruction iterates until it reaches the attack termination condition (\(\tau\)), typically defined by the \(\#\)rounds, e.g., 300 (also a configurable attack parameter). If the reconstruction loss is smaller than the specified distance threshold, then the attack is successful. **Figure 2** provides a visualization with attack examples of CPL [5] on MNIST, CIFAR10, and LFW, respectively. The type-0 & 1 gradient leakage attack is performed using the LFW dataset on the batched gradients with batch size 3. The details of these datasets as well as their settings are provided in Section 7. 
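The iterative reconstruction above is what deep models require, but the risk posed by exposed per-example gradients can be seen analytically in the simplest case: for a fully-connected layer \(y=Wx+b\) under any loss, the single-example gradients are \(\nabla_{W}=\delta x^{\top}\) and \(\nabla_{b}=\delta\), so dividing a row of \(\nabla_{W}\) by the matching entry of \(\nabla_{b}\) recovers the input exactly. A toy illustration (our own construction, not the CPL attack itself):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)                       # private training example
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
target = rng.normal(size=3)

# Analytic gradients for L = 0.5 * ||W x + b - target||^2
delta = (W @ x + b) - target                 # dL/dy
grad_W = np.outer(delta, x)                  # dL/dW = delta x^T
grad_b = delta                               # dL/db = delta

# The attacker sees only (grad_W, grad_b): row i of grad_W equals delta_i * x,
# so dividing by grad_b[i] leaks x exactly (for any i with delta_i != 0).
i = int(np.argmax(np.abs(grad_b)))
x_leaked = grad_W[i] / grad_b[i]
```

This closed form is why even a single unperturbed per-example gradient, as in the type-2 threat, can be devastating for shallow layers.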
**Table I** compares the type-2 leakage under three representative gradient leakage attacks, in terms of the attack success rate (ASR), reconstruction quality measured by RMSE, and the number of reconstruction iterations. For the rest of the paper, we adopt the most effective and efficient attack with patterned-initialization at the first iteration (epoch) of federated learning by following [5]. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & & MNIST & CIFAR10 & LFW \\ \hline \multirow{3}{*}{DLG [2]} & ASR & 0.686 & 0.754 & 0.857 \\ \cline{2-5} & quality & 0.145 & 0.109 & 0.089 \\ \cline{2-5} & reconstruction iteration & 18.4 & 114.5 & 69.2 \\ \hline \multirow{3}{*}{GradInv [4]} & ASR & 1 & 0.985 & 0.994 \\ \cline{2-5} & quality & 0.122 & 0.095 & 0.071 \\ \cline{2-5} & reconstruction iteration & 846 & 2135 & 1826 \\ \hline \multirow{3}{*}{CPL [5]} & ASR & 1 & 0.973 & 1 \\ \cline{2-5} & quality & 0.120 & 0.094 & 0.067 \\ \cline{2-5} & reconstruction iteration & 11.5 & 28.3 & 25 \\ \hline \end{tabular} \end{table} TABLE I: Comparison of the representative gradient leakage attacks. Fig. 1: Attack schema. Fig. 2: Attack visualization. ## 4 Analysis on Gradient Transformation In this section, we examine gradient compression and gradient perturbation, which are methods in the literature applicable to gradient transformation in federated learning. We analyze the resilience each may provide against gradient leakage attacks. ### _Gradient Compression_ Gradient pruning is a common technique for DNN model compression. In each round \(t\) of federated learning, each participating client sends a full vector of local training parameter updates to the federated server. This step can be the communication bottleneck for large models on complex data. Several communication-efficient federated learning protocols have been proposed by employing structured or sketched updates [1, 18]. 
The former directly learns an update from a pre-specified structure, such as a low-rank matrix or random masks. The latter compresses the learned full vector of model parameter update to ensure a high compression ratio with a low-value loss before sending it to the server. Below, we describe two simple pruning approaches for utility-aware gradient compression. **Threshold-based gradient pruning.** Threshold-based gradient pruning [19] sends only the important gradients to the federated server at each round, i.e., the gradient coordinates whose magnitude is larger than a threshold. The gradients to be shared are first sorted, and then a certain percentage of very small but high-precision gradients are replaced with zero, based on a default pruning threshold \(\mu\%\). The level of resilience under gradient pruning depends on both the dataset and the pruning threshold \(\mu\%\). A study in [5] reveals that the resilience of gradient pruning only takes effect once more than 40% of gradients are pruned for MNIST, 50% for CIFAR10, and 80% for LFW. This implies that (i) the gradients transformed by low-rank pruning may still intrude on client privacy under gradient leakage attacks if the gradient compression ratio is not sufficiently large, and (ii) it is hard to set a universal compression ratio that works for all datasets. **Random pruning with Distributed Selected SGD.** Alternatively, a client may randomly select a subset of gradient coordinates whose values are above a given threshold and share this randomized subset with the federated server (DSSGD [20]). However, DSSGD is at most on par with the value-based pruning on model accuracy and leakage resilience. To mask the actual value of the gradient, another option of DSSGD is to add differential privacy noise to the selected subset of gradients before sharing them with the server. However, using the differential privacy controlled noise may result in a large accuracy deficit as reported in [20]. 
Moreover, due to the coordinate-level gradient noise, the globally trained model does not offer instance-level differential privacy protection despite the differential privacy noise added at each round. ### _Gradient Perturbation_ **Random noise addition.** It is clear that a larger injected noise will alter the raw gradients more but may also hurt the model accuracy of federated learning. The open challenge for random noise injection is how much noise is sufficient for leakage resilience while maintaining good model accuracy. Studies in [2, 5] have shown that when the noise variance for gradient perturbation is high enough, e.g., set to 0.01, the gradient leakage induced violation of client privacy can be prevented at the cost of a significant accuracy loss of 10% for LFW and 30% for CIFAR100. Such an unstable effect, compounded with the potential accuracy drop, makes the additive random noise difficult to scale. **Client-level differential privacy noise.** Most existing approaches to differentially private federated learning [13, 21] only provide client-level protection by making server-side global-level aggregation differentially private. This is done by noise injection on the client update through the iterative \(T\) rounds of training, regardless of whether the differential privacy noise is injected at the federated server before performing global SGD [13] or at each of the \(K_{t}\) clients before encrypting the gradients and sending them to the federated server [21]. Therefore, we coin this approach as Fed-SDP. By the definition and composition theorems of differential privacy, the trained global model in Fed-SDP only provides differential privacy at the client level, which ensures that an adversary cannot tell whether a local training parameter update from a single client was used, even if the adversary can observe the output of the global model \(\mathcal{F}\). 
However, the per-example gradients used in the local training are not perturbed, and the local SGD is not differentially private. Hence, Fed-SDP cannot provide a differential privacy guarantee at the instance level. Consider the three types of gradient leakage threats: since Fed-SDP performs clipping and noise injection prior to aggregation using server-side SGD, if the differential privacy noise is added at the federated server as in [13], then it is only effective against type-0 leakage but still vulnerable to type-1 and type-2 gradient leakage attacks, even with the client-level differential privacy guarantee. Suppose Fed-SDP performs noise injection at each of the \(K_{t}\) contributing clients for each round \(t\) as in [21]. In that case, it can combat both type-0 and type-1 leakages but remains vulnerable to type-2 gradient leakage attacks. This is because type-2 leakage occurs prior to performing local SGD per iteration during local training. The same holds for both gradient pruning methods and the additive random noise solution: since they perform gradient transformation after the client's local training, the local training itself is unprotected and vulnerable to type-2 leakage. **Instance-level differential privacy noise.** By comparison, [15] introduces per-example noise to federated learning as Fed-CDP and ensures that client-side local SGD during the client's local training is differentially private. Therefore, Fed-CDP provides per-example differential privacy, such that even if an adversary can observe the output of the global model trained by Fed-CDP, he cannot tell whether the user information defined by a single example was used in training. By the composition theorems and the post-processing property, the per-client local update is sanitized with the compound noise accumulated from multiple local iterations. Thus, the subsequent server aggregation is differentially private. 
Accordingly, Fed-CDP offers both instance-level differential privacy and client-level differential privacy. Both Fed-SDP [13, 14, 21] and Fed-CDP [15] commonly use a pre-defined fixed clipping bound \(C\) and the constant clipping method to cap the gradients prior to perturbing the gradients with differential privacy controlled noise, and all follow the DPSGD implementation in [8], which utilizes the fixed clipping parameter \(C\) as an estimate of the sensitivity, resulting in using a fixed sensitivity in all rounds of federated learning. By Lemma 1, the use of fixed \(S\) and \(\sigma\) will result in constant noise variance, and thus a fixed amount of noise is added for gradient perturbation each time over all \(T\) rounds of federated learning. **Figure 3** provides a visual illustration of the limitation of these methods on test accuracy and leakage resilience under three types of gradient leakage attacks for LFW. Even for Fed-CDP, we find that accuracy-oriented privacy parameter search cannot always guarantee gradient leakage resilience under fixed privacy parameters (second column to the right in Figure 3). The resilience-accuracy co-design of Fed-CDP can come at the cost of accuracy (last column in Figure 3). Accordingly, we design Fed-\(\alpha\)CDP, a new approach to federated learning with differential privacy, with three enhancements in how differential privacy noise is computed, injected, and controlled. Our approach aims to provide high resilience against all three types of gradient leakages, strong differential privacy guarantee, and high model accuracy. 
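For concreteness, the fixed-parameter per-example noise injection whose limitations Figure 3 illustrates can be sketched as follows (a simplified DPSGD-style step with our own names, not the exact Fed-CDP implementation):

```python
import numpy as np

def clip_and_perturb_fixed(per_example_grads, C=1.0, sigma=4.85, rng=None):
    """Clip each per-example gradient to l2 norm C, average, then add
    N(0, (sigma * C)^2) noise scaled by the batch size -- fixed sensitivity
    estimate S = C and fixed noise scale sigma in every iteration and round,
    which is the behavior critiqued above."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = [g * min(1.0, C / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, sigma * C, size=avg.shape) / len(per_example_grads)
```

Because `C` and `sigma` never change, the same noise variance is injected at round 1 and at round \(T\), even though gradient norms typically shrink as training converges.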
## 5 Fed-\(\alpha\)CDP: A three-in-one solution In this section, we describe Fed-\(\alpha\)CDP, including the design considerations and the three key enhancements that distinguish the Fed-\(\alpha\)CDP approach from conventional differentially private federated learning: (i) enforcing client-side instance-level differential privacy, (ii) revisiting and improving the sensitivity parameter \(S\) with a tighter estimation, and (iii) revisiting the fixed noise scale parameter \(\sigma\) with dynamic decaying strategies. ### _Fed-\(\alpha\)CDP: Per-Example Perturbation at Client_ To ensure that the global model \(\mathcal{F}\) jointly trained over \(T\) rounds of federated learning satisfies \((\epsilon,\delta)\) differential privacy, we need to ensure that the server-side SGD algorithm, denoted by \(F_{t}\), satisfies \((\epsilon_{t},\delta_{t})\)-differential privacy, where \(\epsilon=\sum_{t=1}^{T}\epsilon_{t}\) and \(\delta=\sum_{t=1}^{T}\delta_{t}\). There are two ways to make \(F_{t}\) differentially private. The first approach is to add \(\epsilon_{t}\)-controlled noise to the local gradient update obtained after the completion of local training at each client \(i\) for round \(t\) and all \(K_{t}\) clients (\(1\leq i\leq K_{t}\)) as in Fed-SDP. The second approach is to add differential privacy noise to the per-example gradient prior to performing the local SGD over each batch of local training data \(D_{i}\). Given that the local training at client \(i\) is performed over \(L\) iterations (\(1\leq l\leq L\)) on \(D_{i}\) for round \(t\) (\(1\leq i\leq K_{t}\), \(1\leq t\leq T\)), we need to ensure that the local SGD algorithm performed at each iteration \(l\), denoted by \(g_{til}\), satisfies \((\epsilon_{til},\delta_{til})\)-differential privacy. 
By parallel composition over \(K_{t}\) clients, we have \(\epsilon_{t}=\max_{i\in K_{t}}\epsilon_{ti}\) and \(\delta_{t}=\max_{i\in K_{t}}\delta_{ti}\), and by sequential composition over the \(L\) iterations of local training on client \(i\) for round \(t\), we have \(\epsilon_{ti}=\sum_{l=1}^{L}\epsilon_{til}\) and \(\delta_{ti}=\sum_{l=1}^{L}\delta_{til}\). **Sanitizing per-example gradients.** Based on the above analysis, Fed-\(\alpha\)CDP injects differential privacy controlled random noise prior to performing local SGD for every local iteration \(l\) (\(1\leq l\leq L\)) at each contributing client \(i\) (\(1\leq i\leq K_{t}\)) in each round \(t\) (\(1\leq t\leq T\)). This ensures that the local SGD algorithm \(g_{til}\) satisfies \((\epsilon_{til},\delta_{til})\)-differential privacy. By Theorem 1 and Lemma 1, the Gaussian noise \(\mathcal{N}(0,\sigma^{2}S^{2})\) will be added to the per-example gradient in each local iteration prior to performing the local SGD algorithm (\(g_{til}\)) over the batch \(B\) of training examples. For a DNN of \(M\) layers, we add this noise to the per-example gradient layer by layer after performing gradient clipping with a pre-defined clipping bound \(C\). **Constant clipping method with clipping bound \(C\).** First, \(g_{til}\) draws a batch of \(B\) samples from \(D_{i}\), and for each example \(j\) in the batch, \(g_{til}\) computes the layer-wise per-example gradient vector, \(\nabla W_{ij}(t)_{lm}\) (\(1\leq m\leq M\)), by minimizing the empirical loss function. Next, \(g_{til}\) computes the \(l_{2}\) norm of the per-example gradient for each of the \(M\) layers and performs clipping on the per-example gradients. If the \(l_{2}\) norm of the per-example gradient for layer \(m\) is no larger than the fixed clipping bound, i.e., \(||\nabla W_{ij}(t)_{lm}||_{2}\leq C\), then the gradient value \(\nabla W_{ij}(t)_{lm}\) is preserved. 
Otherwise, i.e., when \(||\nabla W_{ij}(t)_{lm}||_{2}>C\), we clip the per-example gradient vector \(\nabla W_{ij}(t)_{lm}\) for each layer \(m\) so that its \(l_{2}\) norm is reduced to \(C\). We denote the clipped per-example gradient for layer \(m\) of local training example \(j\) in local iteration \(l\) as \(\bar{\nabla}W_{ij}(t)_{lm}\).

### _Fed-\(\alpha\)CDP: Revisiting Sensitivity \(S\)_

Due to the nature of gradient descent, the \(l_{2}\) norm of the gradients tends to decrease as training progresses. This implies that using the fixed clipping bound \(C\) to define the sensitivity \(S\) can be a very loose approximation of the actual \(l_{2}\) sensitivity by Definition 2, especially at the later stage of training near convergence. With a fixed sensitivity \(S\) and noise scale \(\sigma\), the Gaussian noise \(\mathcal{N}(0,\sigma^{2}S^{2})\) results in injecting a fixed amount of differential privacy noise over the course of iterative federated learning. In most rounds, especially later rounds of training, such excessive noise injection is unnecessary and can incur adverse effects on both accuracy and convergence. Meanwhile, regarding gradient leakage attacks, the per-client local gradient update at early rounds is much more informative compared to that of later rounds [5]. Therefore, it is more effective to inject a larger noise at early rounds and a smaller noise at later rounds of federated learning, implying that smaller privacy loss (\(\epsilon\)) is allocated to early rounds and larger privacy spending to later rounds, instead of a uniform and fixed privacy-spending allocation over the \(T\) rounds. Motivated by these limitations of fixed-clipping-based sensitivity, we hereby revisit the sensitivity definition and adapt it to federated learning so that the sensitivity strictly aligns with the gradient \(l_{2}\) norm and keeps track of the sensitivity of the local training model.
Specifically, we propose to use the max \(l_{2}\) norm of the per-example gradients in a batch as the sensitivity in Fed-\(\alpha\)CDP, for two reasons. First, the sensitivity of the local SGD function \(g_{til}\) differs across local iterations at the same client \(i\) and for the same round \(t\), because different iterations correspond to different batches of training examples randomly sampled from the local training dataset \(D_{i}\). Second, the \(l_{2}\)-max computed after clipping more accurately reflects the actual sensitivity of \(g_{til}\) by following the sensitivity Definition 2.

Fig. 3: Limitations of existing gradient transformation approaches on gradient privacy leakage.

Consider two scenarios: (i) When the \(l_{2}\) norm of every per-example gradient in a batch is smaller than the pre-defined clipping bound \(C\), the clipping bound \(C\) is undesirably a loose estimation of the sensitivity of \(g_{til}\) under any given \(l\), \(i\), and \(t\). For fixed \(l\), fixed \(i\), and fixed \(t\), the max \(l_{2}\) norm among the per-example gradients over the entire batch for iteration \(l\) is, in fact, a tight estimation of the sensitivity for noise injection. If we instead define the sensitivity of \(g_{til}\) by the max \(l_{2}\) norm among these per-example gradients in the batch, we correct the problem in this scenario. (ii) When any of the per-example gradients in a batch is larger than the clipping bound, the sensitivity of \(g_{til}\) is set to \(C\). In summary, the \(l_{2}\)-max sensitivity takes whichever is smaller of the max \(l_{2}\) norm and the clipping bound \(C\). In Fed-\(\alpha\)CDP, we can perform clipping layer by layer and example by example, but we need to finish computing the \(l_{2}\) norms of all \(M\) layers for all examples in the batch of iteration \(l\) in order to obtain the max \(l_{2}\) norm as the tight estimation of the actual sensitivity of \(g_{til}\).
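As a concrete illustration, the layer-wise clipping and the \(l_{2}\)-max sensitivity rule can be sketched as follows in NumPy; `clip_per_example` and `l2_max_sensitivity` are hypothetical helper names for this sketch, not identifiers from the paper.

```python
import numpy as np

def clip_per_example(grad_layers, C):
    """Layer-wise l2 clipping of one example's gradient (a list of
    per-layer arrays): a layer is scaled down only when its l2 norm
    exceeds the clipping bound C."""
    clipped = []
    for g in grad_layers:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / norm) if norm > 0 else g.copy())
    return clipped

def l2_max_sensitivity(clipped_batch, C):
    """l2-max sensitivity of one local iteration: the smaller of the
    clipping bound C and the largest layer-wise l2 norm among the
    clipped per-example gradients of the batch."""
    max_norm = max(np.linalg.norm(layer)
                   for example in clipped_batch for layer in example)
    return min(max_norm, C)

# Two examples, one layer each: raw norms 5 and 2 with C = 4.
batch = [clip_per_example([np.array([3.0, 4.0])], C=4.0),  # clipped to norm 4
         clip_per_example([np.array([2.0, 0.0])], C=4.0)]  # kept at norm 2
S = l2_max_sensitivity(batch, C=4.0)
```

When every clipped norm in the batch falls below \(C\), `S` tracks the largest actual norm instead of the loose bound \(C\), which is exactly the tightening the text argues for.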
This \(l_{2}\)-max sensitivity definition is dependent on the local training function \(g_{til}\), which tends to vary for different \(l\) (\(1\leq l\leq L\)), different clients \(i\) (\(1\leq i\leq K_{t}\)), and different rounds \(t\) (\(1\leq t\leq T\)). Even for a given client \(i\) and a given round \(t\), the randomly sampled batch over \(D_{i}\) differs across iterations \(l\). Furthermore, the function \(g_{til}\) will produce different outputs for different clients at different rounds. Hence, the \(l_{2}\)-max sensitivity is adaptive with respect to every local iteration, every client, and every round.

**Gradient perturbation with \(l_{2}\)-max sensitivity.** After clipping, \(g_{til}\) injects \((\epsilon_{til},\delta_{til})\)-differential-privacy-controlled Gaussian noise \(\mathcal{N}(0,\sigma^{2}S^{2})\) into the layer-wise clipped per-example gradients in the batch: \(\widehat{\nabla}W_{ij}(t)_{l}=\bar{\nabla}W_{ij}(t)_{l}+\mathcal{N}(0,\sigma^{2}S^{2})\), before performing batch averaging over the noisy gradients: \(\widehat{\nabla}W_{i}(t)_{l}=\frac{1}{B}\sum_{j=1}^{B}\widehat{\nabla}W_{ij}(t)_{l}\). Then the local training function \(g_{til}\) performs the gradient descent for iteration \(l\): \(W_{i}(t)_{l}=W_{i}(t)_{l-1}-\eta\widehat{\nabla}W_{i}(t)_{l-1}\). The same process of gradient sanitization repeats for each iteration \(l\) until all \(L\) local iterations are completed. Now the differentially private function \(f_{ti}\) is generated by a sequential composition of \(L\) invocations of \(g_{til}\) (\(1\leq l\leq L\)), which produces the local model update to be shared by client \(i\) with the federated server for round \(t\).
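One local iteration of the sanitize-average-descend sequence above can be sketched as follows, treating the model as a single flat parameter vector for brevity; the function name is ours, not the paper's.

```python
import numpy as np

def sanitized_local_step(W, clipped_grads, S, sigma, eta, rng):
    """One local SGD iteration: add N(0, sigma^2 * S^2) noise to each
    clipped per-example gradient, average over the batch, then take a
    gradient descent step with learning rate eta."""
    B = len(clipped_grads)
    noisy = [g + rng.normal(0.0, sigma * S, size=g.shape)
             for g in clipped_grads]
    avg = sum(noisy) / B
    return W - eta * avg

rng = np.random.default_rng(0)
W = np.zeros(3)
grads = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])]
W_next = sanitized_local_step(W, grads, S=4.0, sigma=6.0, eta=0.1, rng=rng)
```

With `sigma=0` the function reduces to a plain mini-batch SGD step, which makes the role of the injected noise easy to isolate.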
### _Fed-\(\alpha\)CDP[\(\sigma\)]: Dynamic Decaying Noise Scale_

With the dynamic \(l_{2}\)-max sensitivity, Fed-\(\alpha\)CDP injects a larger noise at early rounds and a smaller noise at later rounds, because the descending trend of the \(l_{2}\)-max sensitivity results in a declining Gaussian variance as training progresses for a fixed noise scale \(\sigma\). Based on Theorem 1 and Lemma 1, we can facilitate the decreasing trend of the Gaussian noise variance with a dynamically decaying noise scale instead of a pre-defined fixed \(\sigma\). This can be implemented using a smooth decay function over the number of rounds in federated learning. We use Fed-\(\alpha\)CDP[\(\sigma\)] to denote the variant incorporating a dynamic noise scale.

**Noise scale decay functions.** Inspired by dynamic learning rate policies [22], we consider four adaptive policies for implementing adaptive \(\sigma\) decay with a starting \(\sigma_{0}\). Each of the policies progressively decreases the noise scale \(\sigma\) as the number of rounds of federated learning increases. _Linear decay:_ \(\sigma\) decays linearly, as \(\sigma_{t}=\sigma_{0}(1-\gamma_{1}t)\), where \(\gamma_{1}>0\) is the smooth controlling term for \(\sigma\) at round \(t\). _Staircase decay:_ \(\sigma\) decays with a step function: \(\sigma_{t}=\sigma_{0}(1-\gamma_{2}\lfloor t/\Gamma\rfloor)\), where \(\Gamma\) is the step size, i.e., the changing interval for the \(\sigma_{t}\) allocation, and \(\gamma_{2}\) controls the magnitude of the staircase. _Exponential decay:_ \(\sigma\) decays by an exponential function: \(\sigma_{t}=\sigma_{0}e^{-\gamma_{3}t}\), where \(\gamma_{3}\) is the exponential controlling term.
_Cyclic decay:_ The \(\sigma\) decay follows a linear cyclic cosine annealing policy inspired by [23], formulated as \(\sigma_{t}=\frac{\sigma_{0}}{2}\left(\cos\left(\frac{\pi\bmod(t-1,\lceil T/\gamma_{4}\rceil)}{\lceil T/\gamma_{4}\rceil}\right)+1\right)\), where \(\gamma_{4}\) denotes the number of cycles and \(\sigma_{t}\) restarts from \(\sigma_{0}\) at the beginning of every \(\lceil T/\gamma_{4}\rceil\) rounds.

With a fixed \(\sigma\) and fixed \(S\), \(\epsilon\) is typically distributed uniformly over the \(T\) rounds. Under the same privacy budget and the fixed noise variance, the same amount of noise is injected each time, regardless of the trend of the \(l_{2}\) norm of the gradients. In comparison, when using the dynamically decaying \(\sigma\), large differential-privacy-controlled noise is injected at the early stage of training. With the noise scale gradually decaying, the amount of differential privacy noise injected is steadily reduced as the number of training rounds increases, resulting in smaller noise in the later rounds of federated learning. By Lemma 1, \(\sigma\) is anti-correlated with \(\epsilon\). The dynamic decay of \(\sigma\) results in a non-uniform distribution of \(\epsilon\) over the \(T\) training rounds: a small \(\epsilon\) indicates small privacy loss, enforced by a larger noise scale. The smaller privacy spending results in injecting larger noise in early rounds to protect informative gradients against leakages. Then, by employing a smaller noise scale, and accordingly a smaller noise variance and increased \(\epsilon\) spending, in the later stage of federated learning, we obtain faster convergence and higher accuracy performance under a given privacy budget.
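The four decay policies can be sketched as a single schedule function; the \(\gamma\) defaults below are illustrative placeholders, not the paper's settings.

```python
import math

def noise_scale(t, T, sigma0, policy, g1=0.005, g2=0.1, step=10, g3=0.01, g4=4):
    """Noise scale sigma_t at round t under the four decay policies."""
    if policy == "linear":                 # sigma_0 * (1 - g1 * t)
        return sigma0 * (1 - g1 * t)
    if policy == "staircase":              # drop by g2 every `step` rounds
        return sigma0 * (1 - g2 * (t // step))
    if policy == "exponential":            # sigma_0 * e^{-g3 * t}
        return sigma0 * math.exp(-g3 * t)
    if policy == "cyclic":                 # cosine annealing over g4 cycles
        cycle = math.ceil(T / g4)
        return sigma0 / 2 * (math.cos(math.pi * ((t - 1) % cycle) / cycle) + 1)
    raise ValueError(f"unknown policy: {policy}")

# All four start near sigma0 = 6 and decay as t grows toward T = 100;
# the cyclic policy additionally restarts at every cycle boundary.
```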
While we want to construct dynamic differential privacy noise, determining \(\sigma_{t}\) needs to take the following three factors into consideration: (1) The starting \(\sigma_{0}\) needs to be large enough to prevent gradient leakages. Note that a general accuracy-driven privacy parameter search cannot always guarantee gradient leakage resilience. Therefore, we select the privacy parameter settings proven empirically to be resilient [15]: \(\sigma_{0}\geq\frac{C*\sigma}{S_{dxy}}\) for the initial setting. (2) The ending \(\sigma_{T}\) cannot be too small, as otherwise the \(\epsilon\) privacy spending would explode, resulting in poor DP protection. (3) The amount of noise injected should not be so large as to affect the desired accuracy performance of the global model.

### _Fed-\(\alpha\)CDP: the General Algorithm_

**Algorithm 1** provides the pseudo-code for the general Fed-\(\alpha\)CDP setup. As the sensitivity and noise scale jointly determine the noise variance, there are several options for the choice of these parameters: fixed clipping-based sensitivity \(S=C\) or \(l_{2}\)-max sensitivity, and fixed noise scale \(\sigma_{t}=\sigma_{0}\) or decaying noise scale. The Gaussian noise \(\mathcal{N}(0,\sigma_{t}^{2}S^{2})\) injected into the per-example gradients is most adaptive to the training process under both the decaying \(\sigma_{t}\) and the \(l_{2}\)-max sensitivity, as the noise variance closely tracks the trend of the gradients throughout the learning process. It offers a larger noise at the early stage of training for gradient leakage resilience and a smaller noise at the later stage for high accuracy.

## 6 Privacy Analysis

In this section, we demonstrate how Fed-\(\alpha\)CDP satisfies differential privacy. We first prove that the per-example gradient noise injection is differentially private. Then we show that the trained global model is differentially private due to the composition theorems (Theorems 2 and 3) and Post-Processing (Theorem 4).
In addition, we demonstrate that Fed-\(\alpha\)CDP with per-example gradient perturbation guarantees both per-example and per-client differential privacy. Finally, we introduce the experimental privacy analysis approach employed to evaluate Fed-\(\alpha\)CDP.

```
Input: # total clients N, # clients per round K_t, maximum global rounds T,
       noise scale sigma_t, clipping bound C
Server initialization and broadcast: global model W(0), local batch size B,
       # local iterations L, local learning rate eta
for round t in {0, 1, 2, ..., T} do
    for client i in {1, 2, ..., K_t} do
        // download global model W(t); start f_ti on D_i
        W_i(t)_0 <- W(t)
        for local iteration l in {0, 1, 2, ..., L-1} do
            // batch of B examples sampled over D_i; start g_til
            for instance j in {1, ..., B} do
                for layer m in {1, ..., M} do
                    // compute per-example gradient grad_W_ij(t)_lm for
                    // layer m of sample j by minimizing the loss
                    // compute its l2 norm
                    l2_ij(t)_lm = || grad_W_ij(t)_lm ||_2
                    // clip the per-example gradient for layer m
                    grad_W_ij(t)_lm <- grad_W_ij(t)_lm * min{1, C / l2_ij(t)_lm}
                end for
                // clipped per-example gradient of sample j:
                // grad_W_ij(t)_l = { grad_W_ij(t)_lm }, m = 1, ..., M
            end for
            // compute batch gradient on the M layers for iteration l
            grad_W_i(t)_l <- (1/B) * sum_{j=1}^{B} grad_W_ij(t)_l
            // l2-max sensitivity: max l2 norm over batch instances {1..B}
            // and the M layers
            S <- max_{j,m} || grad_W_ij(t)_lm ||_2, j = 1..B, m = 1..M
            // compute sanitized batch gradient
            grad_W_i(t)_l <- grad_W_i(t)_l + N(0, sigma_t^2 * S^2)
            // local gradient descent; end g_til, start next iteration
            W_i(t)_{l+1} <- W_i(t)_l - eta * grad_W_i(t)_l
        end for
        // send out local update (M layers); end f_ti
        Delta_W_i(t) <- W_i(t)_L - W(t)
    end for
    server do
        // collect local updates Delta_W_i(t), i = 1, ..., K_t, and aggregate
        W(t+1) <- W(t) + (1/K_t) * sum_{i=1}^{K_t} Delta_W_i(t)
    // start next round of learning until reaching T
end for
Output: global model W(T)
```
**Algorithm 1** Fed-\(\alpha\)CDP[\(\sigma\)]

To prove that the per-example gradient noise injection step is differentially private, we model the Gaussian noise density function as \(\frac{1}{\zeta}e^{-\beta||\mu||}\) at the local level. Inspired by [24], Theorem 5 ensures the differential privacy guarantee of the \(l_{2}\)-max sensitivity under certain forms of convexity.
**Theorem 5**.: _For each iteration with batch size \(B\), adding Gaussian noise with density function \(\frac{1}{\zeta}e^{-\beta||\mu||}\) to the training function with loss function \(h(x,w(t))\) and \(|\nabla h(x,w(t))|<S\) for the sensitivity of the model at iteration \(t\) ensures \(\epsilon\)-differential privacy of the training function if \(\beta=\frac{B\epsilon}{2S}\)._

Proof.: Let \(R(x,w)\) and \(r(x,w)\) be two continuous and differentiable functions. When picking \(\mu\) from the Gaussian distribution, for a specific \(\mu_{0}\), the density at \(\mu_{0}\) is proportional to \(e^{-\beta||\mu_{0}||}\). Then, for any \(w\), letting \(x_{1}\) and \(x_{2}\) be any two batches that differ in the value of one instance, we have: \[\frac{R(x_{1},w)}{R(x_{2},w)}=\frac{\frac{1}{\zeta}e^{-\beta||\mu_{1}||}}{\frac{1}{\zeta}e^{-\beta||\mu_{2}||}}=e^{-\beta(||\mu_{1}||-||\mu_{2}||)}. \tag{3}\] Let \(\omega_{1}\) and \(\omega_{2}\) be the solutions, e.g., weights, respectively, to the non-private training function under the neighboring inputs \(x_{1}\) and \(x_{2}\); then, with the triangle inequality, we have \[||\mu_{1}||-||\mu_{2}||\leq||\mu_{1}-\mu_{2}||=||\omega_{1}-\omega_{2}||. \tag{4}\] Next, we assume \(R(w)\) and \(r(w)\) are \(\lambda\)-strongly convex. By \(\lambda\)-strongly convex [25], we mean that for all \(\alpha\in(0,1)\), a function \(G\) satisfies \(G(\alpha f_{1}+(1-\alpha)f_{2})\leq\alpha G(f_{1})+(1-\alpha)G(f_{2})-\frac{1}{2}\lambda\alpha(1-\alpha)||f_{1}-f_{2}||_{2}^{2}\). Assign \(R(w)\) to \(h(x_{1},w(t))\), \(r(w)\) to \(h(x_{2},w(t))-h(x_{1},w(t))\), \(\omega_{1}\) to \(\arg\min_{\omega}h(x_{1},w(t))\), and \(\omega_{2}\) to \(\arg\min_{\omega}h(x_{2},w(t))\). According to the above assignment, \[\nabla R(\omega_{1})=\nabla R(\omega_{2})+\nabla r(\omega_{2})=0 \tag{5}\] Given the \(\lambda\)-strong convexity, we have \[(\nabla R(\omega_{1})-\nabla R(\omega_{2}))^{\top}(\omega_{1}-\omega_{2})\geq \lambda||(\omega_{1}-\omega_{2})||^{2}.
\tag{6}\] Combining Equations 5 and 6 with the Cauchy–Schwarz inequality, we get: \[||\omega_{1}-\omega_{2}||\cdot||\nabla r(\omega_{2})||\geq(\omega_{1}-\omega_{2})^{\top}\nabla r(\omega_{2})\] \[=(\nabla R(\omega_{1})-\nabla R(\omega_{2}))^{\top}(\omega_{1}-\omega_{2})\geq \lambda||(\omega_{1}-\omega_{2})||^{2}.\] By setting \(\lambda\) to 1, we have \(||\omega_{1}-\omega_{2}||\leq\max_{\omega}||\nabla r(\omega)||\), and \(||\nabla r(\omega)||=\frac{1}{B}||\nabla h(x_{2},w(t))-\nabla h(x_{1},w(t))||\leq\frac{2S}{B}\), since \(x_{1}\) and \(x_{2}\) differ in one instance. Recalling Equations 3 and 4, we have \(\frac{R(x_{1},w)}{R(x_{2},w)}=e^{-\beta(||\mu_{1}||-||\mu_{2}||)}\leq e^{\beta||\omega_{1}-\omega_{2}||}\leq e^{\beta\frac{2S}{B}}\). Combined with Equation 1 in the differential privacy definition, when \(\beta=\frac{B\epsilon}{2S}\), we have \(\frac{R(x_{1},w)}{R(x_{2},w)}\leq e^{\epsilon}\), which completes the proof.

**Privacy Accounting.** Given that each local SGD is differentially private, we next accumulate the privacy spending with privacy composition in federated learning. The base composition originating from the composition theorem is considered too loose for accurately tracking per-step privacy spending. Several privacy accounting methods have been proposed for tighter privacy composition. The most representative ones are advanced composition [26], zCDP [9], and Moments Accountant [8]. All of them commonly assume a fixed \(\delta\) to track \(\epsilon\). We use Moments accountant in our evaluation, as it is the state-of-the-art tight privacy accounting method. Based on the default implementation1, the privacy spending \(\epsilon\) is computed given the total rounds \(T\), the noise scale \(\sigma\), the privacy parameter \(\delta\), and the sampling rate \(q\). For data sampling in federated learning, we follow Fed-CDP [15] due to the resemblance in the client-side per-example noise injection process.
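As a quick numeric sanity check of Theorem 5's closing bound: with \(\beta=\frac{B\epsilon}{2S}\), the bound \(e^{\beta\frac{2S}{B}}\) on the density ratio collapses to exactly \(e^{\epsilon}\) for any batch size and sensitivity.

```python
import math

def density_ratio_bound(B, S, eps):
    """Upper bound e^{beta * 2S/B} on R(x1,w)/R(x2,w) when beta = B*eps/(2S)."""
    beta = B * eps / (2 * S)
    return math.exp(beta * 2 * S / B)

# For any B and S, the bound equals e^eps, matching the epsilon-DP condition.
assert math.isclose(density_ratio_bound(B=64, S=4.0, eps=0.5), math.exp(0.5))
```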
Then, the local sampling with replacement over the disjoint local datasets across all clients in federated learning can be modeled as global sampling with replacement over the global data. Consequently, differentially private federated learning can be viewed as an alternative implementation of centralized deep learning with differential privacy in algorithmic logic.

**Per-example Differential Privacy.** Given that the Fed-\(\alpha\)CDP algorithms add differential privacy noise to the per-example gradients generated for iteration \(l\) at client \(i\) during local model training (\(1\leq l\leq L\), \(1\leq i\leq K_{t}\)), the sampling rate can be computed as follows. At each round \(t\), \(K_{t}\) clients are sampled from the \(N\) clients, and for each client \(i\) of the \(K_{t}\), a batch of size \(B\) is sampled over \(D_{i}\) (rate \(\frac{B}{|D_{i}|}\)) for each of the \(L\) iterations. According to [15], the sampled local data over the \(K_{t}\) clients at round \(t\) can be viewed as a global data sampling over \(D\) with sampling rate \(q_{1}=\frac{B\times K_{t}}{|D|}\), i.e., sampling size \(B\times K_{t}\). Thus, the locally added per-example noise has a global instance-level effect over the global collection \(D\) of the distributed local data \(D_{i}\) across the \(N\) clients (\(D=\cup_{i}^{N}D_{i}\)).

**Per-Client Differential Privacy.** Fed-\(\alpha\)CDP offers both per-instance and per-client differential privacy guarantees. Although Fed-\(\alpha\)CDP uses the per-iteration (\(\epsilon_{til},\delta_{til}\))-differentially private local training function \(g_{til}\) to compute and inject Gaussian noise into the per-example gradients during local model training, Fed-\(\alpha\)CDP ensures that the per-client local model parameter update is generated entirely over the noisy gradients from the \(L\) local iterations.
Therefore, the local parameter update shared by client \(i\) via \(f_{ti}\) is sanitized, and \(f_{ti}\) is (\(\epsilon_{ti},\delta_{ti}\))-differentially private with \(\epsilon_{ti}=\sum_{l}^{L}\epsilon_{til}\) and \(\delta_{ti}=\sum_{l}^{L}\delta_{til}\) by sequential composition. In comparison, Fed-SDP offers differential privacy only at the client level. This is because Fed-SDP injects differential-privacy-guided Gaussian noise only into the per-client local model update, after completing local training at the client. Therefore, the client-level privacy of Fed-SDP relies on the client sampling rate \(q_{2}=\frac{K_{t}}{N}\) and per-round composition [13, 21]. When conducting privacy analysis for a family of \((\epsilon,\delta)\)-differentially private federated learning algorithms, two independent yet complementary methods can be employed [12]. (i) We can set a fixed target privacy budget and measure the accumulated privacy spending (or privacy loss) \(\epsilon\) at each step. The training stops when the accumulation of the per-step \(\epsilon\) exceeds the target privacy budget at step \(t\). In this _target privacy budget_ approach, we evaluate and compare the outcomes of different differential privacy algorithms in terms of the achieved model accuracy under the given target privacy budget. (ii) We can instead specify the target utility (model accuracy goal) and the total number of steps \(T\). Then we measure the accumulated privacy spending at the final step \(T\), or at any step \(t<T\) when the target utility is met. In this _target model accuracy_ approach, we analyze and compare the privacy spending of different differential privacy algorithms in terms of the accumulated privacy spending for achieving the target accuracy (utility).
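The bookkeeping behind the two guarantees discussed above, sequential composition over the \(L\) local iterations for per-client privacy and the two sampling rates used in accounting, can be sketched in a few lines; the helper names are ours, not the paper's.

```python
def per_client_budget(iter_eps, iter_deltas):
    """Sequential composition over L local iterations: (eps_ti, delta_ti)."""
    return sum(iter_eps), sum(iter_deltas)

def sampling_rates(B, K_t, N, D_size):
    """Instance-level global sampling rate q1 = B*K_t/|D| and
    client-level sampling rate q2 = K_t/N."""
    return B * K_t / D_size, K_t / N

# 100 of 1000 clients per round, batch size 32, |D| = 60000 (MNIST-sized),
# and L = 30 local iterations with a per-iteration budget of (0.1, 1e-6).
q1, q2 = sampling_rates(B=32, K_t=100, N=1000, D_size=60000)
eps_ti, delta_ti = per_client_budget([0.1] * 30, [1e-6] * 30)
```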
Given that different differential privacy algorithms with different settings of the parameters \(S\) and \(\sigma\) may have different convergence speeds under the given target accuracy, and some may achieve high accuracy sooner and thus terminate early, this may further impact the total accumulated per-step privacy spending.

## 7 Experimental Evaluation

We evaluate the proposed Fed-\(\alpha\)CDP on five benchmark datasets: MNIST, CIFAR10, LFW, Adult, and Cancer. **Table II** provides a detailed description of each dataset given the parameter setup. Specifically, for each dataset, we include the training/validation data split, number of features, number of classes, amount of data per client, number of local iterations and batch size at local training, and number of global rounds, as well as the test accuracy and training cost for the non-private federated learning model. Similar to the existing literature on deep learning and federated learning with differential privacy [8, 9, 21], we evaluate the three image benchmark datasets on a deep convolutional neural network with two convolutional layers and one fully-connected layer. For the attribute datasets, a fully-connected model with two hidden layers is used. Our federated learning setup follows the simulator in [21] with a total of \(N\) clients varying from 100 and 1000 to 10000, and \(K_{t}\) set at a varying percentage of \(N\), e.g., 5%, 10%, 20%, or 50% per round. Unless otherwise specified, we use \(N=1000\) clients and 10% of \(N\) per round, i.e., \(K_{t}=100\) participating clients. For privacy parameters, we follow [8] and set \(C=4\) and \(\sigma=6\) as default. All experiments are conducted on an Intel 4-core i5-7200U CPU machine with an NVIDIA GeForce 2080Ti GPU. We first report the empirical results on the resilience of Fed-\(\alpha\)CDP in comparison to existing differential privacy solutions.
Second, we show that Fed-\(\alpha\)CDP obtains a stronger differential privacy guarantee in terms of smaller \(\epsilon\) spending under a target accuracy. Meanwhile, Fed-\(\alpha\)CDP can significantly boost the accuracy performance compared to Fed-SDP under a target privacy budget. Finally, we show that, besides the stronger leakage resilience, the proposed Fed-\(\alpha\)CDP outperforms the existing adaptive clipping approaches, quantile-based clipping [27]2 and AdaClip [28]3, with better test accuracy. We adapt their clipping implementations to our federated learning setup for a fair comparison.

Footnote 2: [https://github.com/google-research/federated/tree/master/differential_privacy](https://github.com/google-research/federated/tree/master/differential_privacy)

Footnote 3: [https://github.com/soominkwon/DP-dSNE](https://github.com/soominkwon/DP-dSNE)

### _Gradient Leakage Resiliency_

This section evaluates the resilience of Fed-\(\alpha\)CDP[\(\sigma\)] by comparing it with the four most relevant and representative differential privacy techniques against all three types of gradient leakage threats (recall the Threat Model in Section 3.1).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
 & MNIST & CIFAR10 & LFW & Adult & Cancer \\ \hline
\# training data & 60000 & 50000 & 2267 & 36631 & 426 \\ \hline
\# validation data & 10000 & 10000 & 756 & 12211 & 143 \\ \hline
\end{tabular}
\end{table} TABLE II: Description of the five benchmark datasets.

**Table III** reports the measurement results from 100 clients in terms of the resilience (Yes or No), the average attack reconstruction distance, and the attack iterations (\(\tau\)). We measure the attack reconstruction distance by the root mean square deviation (RMSE) between the reconstructed input \(x_{rec}\) and its private ground-truth counterpart \(x\): \(\sqrt{\frac{1}{U}\sum_{i=1}^{U}\left(x(i)-x_{rec}(i)\right)^{2}}\), where \(U\) denotes the total number of features in the input.
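The reconstruction-distance metric can be computed as follows, a straightforward reading of the RMSE formula above:

```python
import numpy as np

def attack_rmse(x, x_rec):
    """Root mean square deviation between the private input x and the
    attacker's reconstruction x_rec, averaged over the U features."""
    x = np.asarray(x, dtype=float).ravel()
    x_rec = np.asarray(x_rec, dtype=float).ravel()
    return float(np.sqrt(np.mean((x - x_rec) ** 2)))

# A reconstruction off by 1 in every feature has RMSE 1.
print(attack_rmse([0, 0, 0, 0], [1, 1, 1, 1]))  # 1.0
```

A larger RMSE thus indicates a reconstruction farther from the private input, i.e., stronger protection.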
In Table III, the column \(\tau\) for MNIST (5th column) and the column \(\tau\) for LFW (8th column) report the number of reconstruction iterations after which the gradient leakage attack can successfully reconstruct the training data \(x_{rec}\), such that \(x_{rec}\) and the private training data \(x\) are visually indistinguishable. We set the attack termination condition to 300 iterations, in the sense that if, upon reaching 300 iterations, the reconstructed training data \(x_{rec}\) from the leaked gradient is still not visually close to the private training data \(x\), the attack is considered failed. Therefore, a very small number of attack iterations, i.e., around 7 attack iterations for MNIST and 26 attack iterations for LFW, means that the reconstruction-based gradient leakage attack succeeds well before reaching the maximum of 300 attack iterations. The attack experiment is performed on gradients from the first local iteration, as gradients at early training iterations tend to leak more information than gradients in the later stage [5]. We make three observations. (1) Under type-0 attack (the top 6 rows), Fed-SDP [13], Fed-SDP at client [21], and our Fed-\(\alpha\)CDP are equally resilient, because the attack fails after exhausting all 300 attack iterations. Fed-\(\alpha\)CDP shows the highest RMSE value, indicating that the protection by Fed-\(\alpha\)CDP is strongest, with the largest distance between the reconstructed training data \(x_{rec}\) and the raw training data \(x\). Under type-1 attack, Fed-SDP [13] fails to withstand the gradient leakage attack, and the attackers succeeded after 6 iterations for MNIST and 24 iterations for LFW. In comparison, Fed-SDP at client [21] and our Fed-\(\alpha\)CDP remain resilient, because the attack fails after 300 attack iterations.
Similarly, Fed-\(\alpha\)CDP shows the highest RMSE value, measured between the reconstructed training data \(x_{rec}\) and the raw training data \(x\), indicating that Fed-\(\alpha\)CDP provides stronger protection than Fed-SDP at client [21]. Finally, under type-2 attack (the bottom 6 rows), both Fed-SDP [13] and Fed-SDP at client [21] fail to protect against the gradient leakage attack, and in both differentially private scenarios, the attacker succeeded after 6 iterations for MNIST and 25 iterations for LFW. In comparison, our Fed-\(\alpha\)CDP remains resilient, with high RMSE. Note that we use Fed-SDP in the rest of the paper for client-level differential privacy noise injection; given that Fed-SDP and Fed-SDP at client apply the same noise at different locations, the noise injection location only impacts the capability of gradient leakage resilience, rather than the accuracy and \(\epsilon\)-privacy spending. (2) Fed-\(\alpha\)CDP\([\sigma]\) outperforms the other four approaches with the highest level of resilience against gradient leakage attacks and the largest reconstruction distance, making the attack hardest to succeed for all three types of leakages. (3) The quantile-based clipping [27] and AdaClip [28] have poor overall resilience against all three types of gradient leakages. This is likely because both focus on reducing the clipping bound. With the sensitivity defined by the declining clipping bound for accuracy improvement, both reduce the amount of noise added as the number of training rounds increases. However, both are much weaker in gradient leakage resilience due to insufficient differential privacy noise injection, especially in the early stage of training. **Figure 4** shows a visual comparison of Fed-\(\alpha\)CDP\([\sigma]\) with Fed-SDP using examples from the MNIST and LFW datasets, demonstrating the stronger resilience of Fed-\(\alpha\)CDP\([\sigma]\) under all three types of gradient leakage attacks.
### \(l_{2}\)-max Sensitivity: Utility Impact Analysis

**Impact of \(N\) and \(K_{t}\).** We first compare the accuracy performance of Fed-\(\alpha\)CDP with Fed-SDP, the Fed-CDP baseline [15], and the non-private case under 12 different federated learning configurations, varying the total of \(N\) clients from 100 and 1000 to 10000. In each setting of \(N\), we additionally vary the \(K_{t}\) contributing clients at each round with 5%, 10%, 20%, and 50% of \(N\). **Table IV** reports the results.

TABLE IV: Effectiveness of adaptive sensitivity optimization in model accuracy under 12 different settings of federated learning on MNIST with \(T=100\), by varying \(N\) clients and \(K_{t}/N\), with \(C=4\), \(\sigma=6\). Time cost measured in seconds per local iteration.

Fig. 4: Gradient leakage attack evaluation, with the maximum attack iteration set to 300. Fed-SDP defends against type-0 leakage but not type-1 leakage, since its differential privacy sanitization is performed at the server.

We make two observations. (1) Fed-\(\alpha\)CDP consistently achieves the highest accuracy for all 12 settings of \(N\) and \(K_{t}\) and
consistently improves over the Fed-CDP baseline. With all other parameters fixed (\(C,\sigma,T,K_{t},N\)), Fed-\(\alpha\)CDP benefits significantly from the \(l_{2}\)-max sensitivity. (2) With the \(l_{2}\)-max sensitivity, Fed-\(\alpha\)CDP can achieve an accuracy comparable to that of the non-private case, especially for \(N=1000\) or larger. With \(N=10000\), Fed-\(\alpha\)CDP offers 97.5% to 98.0% accuracy for all four settings of \(K_{t}\), compared to 97.9% for the non-private algorithm.

**Impact of varying clipping bound \(C\).** This set of experiments evaluates the effectiveness of the \(l_{2}\)-max sensitivity by comparing Fed-\(\alpha\)CDP with Fed-SDP, the Fed-CDP baseline, and non-private training for the five benchmark datasets with constant clipping, varying \(C\) from 0.1 to 32. **Table V** reports the results. We make two observations. (1) With the \(l_{2}\)-max sensitivity, Fed-\(\alpha\)CDP consistently outperforms Fed-CDP and Fed-SDP under all settings of \(C\) for all five datasets. (2) Fed-CDP with fixed clipping-based sensitivity performs poorly when the clipping bound \(C\) is set to larger values (e.g., 16, 32) or smaller values (e.g., 0.1, 0.5, 1). This shows that Fed-CDP is overly sensitive to a proper setting of the clipping bound \(C\) and requires careful tuning to find the right clipping bound for different datasets to achieve optimal results. In comparison, Fed-\(\alpha\)CDP is much less sensitive to the clipping-bound setting, thanks to the adaptive nature of the \(l_{2}\)-max sensitivity. Hence, Fed-\(\alpha\)CDP no longer depends on fine-tuning to find the proper clipping bound \(C\), as long as \(C\) is not too small, e.g., 1.5 or 2 times the \(l_{2}\)-max norm of the non-private case. This is because when using a very small clipping bound, one may clip off too much information from the informative gradients prior to noise injection, especially in early rounds, where per-example gradients tend to be large.
### \(l_{2}\)-max Sensitivity: Privacy Impact Analysis **Privacy analysis under fixed total rounds \(T\).** In this set of experiments, we fix \(\delta=1e-5\), \(\sigma=6\), \(C=4\), and compare Fed-\(\alpha\)CDP with Fed-SDP in terms of privacy and utility (model accuracy). The \(\epsilon\) spending is measured using Moments accountant, at rounds 100, 100, 60, 10, and 3 for MNIST, CIFAR10, LFW, Adult, and Cancer datasets. **Table VI** reports the results. We make two observations: (1) Under fixed \(\sigma\) and \(T\), Fed-\(\alpha\)CDP and Fed-CDP will accumulate the same \(\epsilon\) spending under all five privacy accounting methods. This is consistent with Lemma 1: privacy spending \(\epsilon\) is mainly correlated with noise scale \(\sigma\) given a fixed \(\delta\). (2) Despite the same \(\epsilon\) spending, Fed-\(\alpha\)CDP delivers higher model accuracy than Fed-SDP and Fed-CDP. This is because \(l_{2}\)-max sensitivity ensures smaller noise injection over the \(T\) rounds of federated learning, whereas Fed-SDP and Fed-CDP use a constant noise variance due to fixed sensitivity and fixed \(\sigma\), and inject the same amount of noise each time even when federated learning comes close to convergence. **Privacy analysis under target accuracy.** Recall that by Theorem 1 and the Moments accountant, all privacy accounting methods are dataset-independent and clipping-independent when the following parameters are fixed: noise scale \(\sigma\), total \(T\) rounds, sampling rate \(q\), and privacy parameter \(\delta\). 
Hence, in this set of experiments, we measure and compare differential privacy spending \(\epsilon\) for Fed-\(\alpha\)CDP and Fed-SDP from a different perspective: we measure and compare their respective privacy spending under a target accuracy goal, given fixed noise scale \(\sigma\). We set the target accuracy using the accuracy of Fed-SDP at round \(T\) and measure the accumulated privacy spending and the number of rounds required for Fed-\(\alpha\)CDP to achieve the target accuracy for all five datasets. **Table VII** reports the results. We make two observations: (1) Fed-\(\alpha\)CDP achieves the target accuracy at the 11th, 67th, 34th, 5th, and 1st round for MNIST, CIFAR10, LFW, Adult, and Cancer datasets respectively. \(l_{2}\)-max sensitivity smoothly and adaptively reduces the amount of noise injected as federated learning progresses in the number of rounds. This result also shows that \(l_{2}\)-max sensitivity alone can largely reduce the noise variance compared to fixed sensitivity using a constant clipping bound \(C\). (2) By achieving the target accuracy early, prior to reaching the \(T\) rounds, Fed-\(\alpha\)CDP offers a much smaller accumulated privacy spending \(\epsilon\) compared to that of Fed-SDP. **Impact of adaptive noise scale \(\sigma\).** In this set of experiments, Fed-\(\alpha\)CDP\([\sigma]\) uses a dynamic noise scale with initial \(\sigma_{0}=15\) and ending \(\sigma_{T}\) set to 4.85. The dynamic noise scale \(\sigma\) is set this way to not violate the \(\epsilon\) setting in the original experiment for Fed-CDP for the same number of rounds. The early rounds hold a noise scale \(\sigma_{t}\) much larger than the default \(\sigma=6\) for gradient leakage resilience. Consequently, the resulting \(\epsilon\) spending for Fed-\(\alpha\)CDP\([\sigma]\) is smaller than the \(\epsilon\) spending for Fed-CDP at this stage. The later rounds adopt a noise scale \(\sigma_{t}\) smaller than the default \(\sigma=6\) for accuracy. This leads to a larger \(\epsilon\) spending of Fed-\(\alpha\)CDP\([\sigma]\) than the \(\epsilon\) spending of Fed-CDP at the later stage. 
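A decaying noise scale of this kind can be illustrated with simple schedules parameterized by the \(\sigma_{0}=15\) and \(\sigma_{T}=4.85\) endpoints above. The exact decay formulas used by the four policies are not reproduced here; the two forms below are our own illustrative assumptions:

```python
import math

def sigma_linear(t, T, sigma0, sigmaT):
    # Linear decay: interpolate from sigma0 at round 0 to sigmaT at round T.
    return sigma0 + (sigmaT - sigma0) * t / T

def sigma_exponential(t, T, sigma0, sigmaT):
    # Exponential decay, with the rate chosen so the schedule also starts
    # at sigma0 and ends at sigmaT.
    k = math.log(sigma0 / sigmaT) / T
    return sigma0 * math.exp(-k * t)
```

Both schedules keep the early-round noise large for gradient leakage resilience and shrink it in later rounds for accuracy; they differ only in how quickly the bulk of the reduction happens, which is what separates the decay policies compared in the experiments.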
Overall, we are able to achieve better accuracy with the same privacy spending at (approximately) the same round. **Table VIII** reports the results. We make the following two observations: (1) Under a target \(\epsilon\) privacy spending and for all four decaying policies of adaptive \(\sigma\), Fed-\(\alpha\)CDP\([\sigma]\) with noise scale decaying consistently shows higher model accuracy and faster convergence compared to Fed-\(\alpha\)CDP with fixed noise scale and the Fed-CDP baseline. When the accumulated privacy loss reaches the target privacy budget for all four algorithms, we terminate the global training and record the accuracy and number of rounds used. The result is consistent with our analysis: by combining \(l_{2}\)-max sensitivity and decaying noise scale, Fed-\(\alpha\)CDP\([\sigma]\) will generate the most adaptive noise variance, resulting in injecting smoothly reduced noise as federated learning comes close to convergence. (2) Empirically, the adaptive noise scale \(\sigma\) with exponential decay policy is the winner for Fed-\(\alpha\)CDP\([\sigma]\), providing the smallest privacy loss measured by \(\epsilon\). ### _Time Cost of Adaptive Parameter Optimization_ This set of experiments compares the time cost of Fed-\(\alpha\)CDP approaches with Fed-SDP, Fed-CDP, and the non-private algorithm on all five benchmark datasets. **Table IX** reports the results. We make three observations. (1) The two Fed-\(\alpha\)CDP algorithms incur a very similar time cost over all five datasets, and add only negligible cost over the Fed-CDP baseline. (2) Since \(K_{t}\) clients can perform local training in parallel in federated learning, the overall time spent for one round is determined by the slowest client local training at each round. Although Fed-\(\alpha\)CDP approaches incur additional computation overhead for generating the \(l_{2}\)-max sensitivity and sanitizing per-example gradients in each local iteration, their absolute time cost remains low. 
(3) The relative cost is smaller when the model is simpler with fewer parameters. For example, CIFAR10 and LFW spend relatively more time compared to MNIST. The two attribute datasets have a lower absolute time cost. It is also possible to combine differential privacy with gradient compression for reduced communication and sanitization cost; we consider it future work due to the space limit. ### _Comparing Fed-\(\alpha\)CDP\([\sigma]\) with Adaptive Clipping_ Adaptive clipping approaches [27, 28] can be viewed as an alternative mechanism to improve the model accuracy of differentially private federated learning by reducing the noise injection over the iterative rounds of joint training. Adaclip [28] performs the clipping bound estimation based on the coordinates and the loss function, whereas the quantile clipping [28] estimates the clipping bound using the quantile of the unclipped gradient norm. In this set of experiments, we compare the effectiveness of Fed-\(\alpha\)CDP\([\sigma]\) with Quantile clipping and Adaclip. Given that both quantile clipping [28] and Adaclip [28] use a rather small noise scale (\(\leq 1\)) in their implementation, we adopt their clipping implementation into our default initial setting of \(\sigma=6\) and \(C_{0}=4\) for fair comparison and report their model accuracy. **Table IX** shows the results. We make three observations: (1) Fed-\(\alpha\)CDP\([\sigma]\) consistently outperforms all other approaches, achieving the largest accuracy enhancements. (2) Compared to AdaClip and Quantile clipping, Fed-\(\alpha\)CDP\([\sigma]\) incurs the smallest time cost while offering the highest accuracy improvement. (3) Under the default setting, the Quantile-based clipping and Adaclip are not resilient against gradient leakage attacks. In comparison, Fed-\(\alpha\)CDP offers high resilience with comparable model accuracy. 
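The quantile-based idea can be sketched as an online update that nudges the clipping bound toward a target quantile of the per-example gradient norms. This is a simplified stand-in for the published method; the learning rate and the exact update form are our assumptions:

```python
import math

def update_clip_bound(C, grad_norms, target_q=0.5, lr=0.2):
    # Fraction of per-example gradient norms that fall under the current bound.
    frac_below = sum(1 for n in grad_norms if n <= C) / len(grad_norms)
    # Geometric update: if more than the target fraction is already below C,
    # shrink the bound; if too many gradients are being clipped, grow it.
    return C * math.exp(-lr * (frac_below - target_q))
```

Repeated over rounds, the bound tracks the chosen quantile of the norm distribution, so the clipped sensitivity adapts to the data rather than staying at a hand-tuned constant \(C\).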
Given the reduced noise due to adaptive noise scale, Fed-\(\alpha\)CDP\([\sigma]\) is able to achieve accuracy comparable to the non-private setting with faster convergence, demonstrated by a smaller number of training rounds. (2) When different privacy accounting methods are applied to the same Gaussian noise, their measured accumulated privacy spending is different. For base composition of 100 rounds, the accumulated \(\epsilon\) privacy spending is as large as 123.354, for advanced composition 7.450, for zCDP 1.159, and for Moments accountant 0.823. This shows that the privacy budget should be set based on the specific privacy accounting method, and comparisons of privacy spending also need to be made under the same privacy accounting method. ## 8 Related Work **Privacy Leakages in Federated Learning.** Despite the default privacy in federated learning, simply keeping client training data local is insufficient for protecting the privacy of sensitive client training data due to gradient leakage attacks. The early attempt at gradient leakage [30] brought theoretical insights by showing provable reconstruction feasibility on a single neuron or single-layer networks in centralized deep learning. Because there is less control over the client in distributed learning, an unauthorized read of client local parameter updates leads to more attack surfaces for gradient leakage attacks [2, 3, 4, 5, 6, 31, 32]. These privacy leakage attacks disclose private client training data via unauthorized inference by gradient-based reconstruction. While we consider gradient leakage attacks using L-BFGS as the optimizer similar to [2, 3], with a small batch size (\(\leq 16\)), and shallow models [5], recent attacks enhance the attack procedure with the Adam optimizer for deeper machine learning models [4]. By utilizing regularization terms for batch disaggregation [6, 33], these attacks can recover the private training data with a batch size over 30. 
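The single-neuron feasibility result cited above can be made concrete: for one linear neuron with squared loss on a single example, the weight gradient is \((w\cdot x-y)\,x\), a scalar multiple of the private input, so an eavesdropper who reads the gradient recovers the input up to sign and scale. The snippet below is a toy illustration of this observation, not a full reconstruction attack:

```python
import math

def single_neuron_gradient(w, x, y):
    # Weight gradient of 0.5 * (w.x - y)^2 for one example: (w.x - y) * x.
    residual = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [residual * xi for xi in x]

def leaked_input_direction(grad):
    # Normalizing the leaked gradient reveals the private input's direction.
    n = math.sqrt(sum(g * g for g in grad))
    return [g / n for g in grad]
```

For \(x=[3,4]\), \(w=[1,1]\), \(y=0\) the leaked direction is \([0.6, 0.8]\), exactly \(x/\|x\|\); batching and deeper models make recovery harder, which is why the attacks above add optimization and batch-disaggregation machinery.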
Other known attacks in federated learning include membership inference [34, 35, 36], attribute inference [37], and model inversion [38, 39], which can be launched at both the client and the federated server and cause adverse and detrimental effects when combined with gradient leakage attacks. **Privacy-enhanced Federated Learning.** Recent efforts can be classified into three broad categories: (1) _Differentially private federated learning_, in which several proposals [13, 14, 21] have been put forward, most of which belong to the family of Fed-SDP and provide only client-level differential privacy. Besides the Gaussian mechanism, [40] injects differential privacy noise with the Binomial mechanism. [15] has some preliminary studies on per-example per-iteration differential privacy noise for federated learning. All these methods suffer from the fixed privacy parameter setting as discussed in Section 5. While many of the advancing techniques for differential privacy, including the two adaptive clipping approaches [27, 28] in our comparison study, are investigated in the centralized setting, novel differential privacy techniques need to be developed in the federated setting. (2) _Federated learning with secure multiparty computation_, which incorporates secure multiparty computation (SMPC) to ensure secure communication of shared parameter updates between a client and the federated server [41]. Therefore, similar to Fed-SDP, it can only secure sharing between a client and the federated server in distributed learning but cannot protect against any privacy attacks that happen at the client (type 1 and type 2 gradient leakages) before the cryptography process. (3) _Federated learning with local differential privacy (LDP)_, which extends the LDP concept and techniques [42, 43] developed for protecting user identity when sharing web access logs or click data. 
Although federated learning with LDP [44] perturbs the per-client gradient updates before sharing with the federated server and [45] locally splits and shuffles the weights before sharing, they do not generate a global model with a differential privacy guarantee, and also cannot provide instance-level differential privacy for client training data against gradient leakage attacks. The \(\epsilon\) in LDP is defined over the local data record rather than the global data space. Homomorphic encryption [46] and trusted execution environments (secure enclaves) [47] are alternative solutions to secure federated aggregation against malicious compromises. However, their protection will only be effective after the data is encrypted or inside the enclave. Hence they can be resilient against the type 0 and type 1 gradient leakages but are vulnerable to the type 2 gradient leakages. ## 9 Conclusion We have presented the development of Fed-\(\alpha\)CDP, a gradient leakage resilient approach to securing distributed SGD in federated learning. Fed-\(\alpha\)CDP elevates the level of resilience against gradient leakages by incorporating per-example differential privacy, \(l_{2}\)-max sensitivity, and a dynamic decaying noise scale. Instead of using constant noise variance and fixed noise to perturb the local gradient updates each time, these optimizations enable non-uniform noise variance with a decreasing trend so that large noise is added in early rounds and smaller noise is used in later rounds of federated learning. A formal analysis is provided on the privacy guarantee of Fed-\(\alpha\)CDP. Extensive experiments are conducted over five benchmark datasets. We demonstrate that the proposed Fed-\(\alpha\)CDP approach outperforms the existing state-of-the-art differentially private federated learning approaches with competitive accuracy performance, strong differential privacy guarantee, and high resilience against gradient leakage attacks. 
**Acknowledgement.** The authors acknowledge partial support by the National Science Foundation under Grants NSF 2038029, NSF 1564097, a CISCO grant on edge computing, and an IBM faculty award.
2304.11226
Probabilistic selection and design of concrete using machine learning
Development of robust concrete mixes with a lower environmental impact is challenging due to natural variability in constituent materials and a multitude of possible combinations of mix proportions. Making reliable property predictions with machine learning can facilitate performance-based specification of concrete, reducing material inefficiencies and improving the sustainability of concrete construction. In this work, we develop a machine learning algorithm that can utilize intermediate target variables and their associated noise to predict the final target variable. We apply the methodology to specify a concrete mix that has high resistance to carbonation, and another concrete mix that has low environmental impact. Both mixes also fulfill targets on the strength, density, and cost. The specified mixes are experimentally validated against their predictions. Our generic methodology enables the exploitation of noise in machine learning, which has a broad range of applications in structural engineering and beyond.
Jessica C. Forsdyke, Bahdan Zviazhynski, Janet M. Lees, Gareth J. Conduit
2023-04-21T19:20:40Z
http://arxiv.org/abs/2304.11226v1
# Probabilistic selection and design of concrete using machine learning ###### Abstract Development of robust concrete mixes with a lower environmental impact is challenging due to natural variability in constituent materials and a multitude of possible combinations of mix proportions. Making reliable property predictions with machine learning can facilitate performance-based specification of concrete, reducing material inefficiencies and improving the sustainability of concrete construction. In this work, we develop a machine learning algorithm that can utilize intermediate target variables and their associated noise to predict the final target variable. We apply the methodology to specify a concrete mix that has high resistance to carbonation, and another concrete mix that has low environmental impact. Both mixes also fulfill targets on the strength, density, and cost. The specified mixes are experimentally validated against their predictions. Our generic methodology enables the exploitation of noise in machine learning, which has a broad range of applications in structural engineering and beyond. 
Concrete is adaptable--material properties including density, strength, durability, and appearance can be manipulated by adjusting the proportions and materials in the mix design. Conventional concrete mixes comprise three primary constituents: cement, water, and aggregate. Cement acts as the binder in concrete and is both economically and environmentally costly (Wassermann et al., 2009). Production of cement is responsible for an estimated 5-6% of global CO\({}_{2}\) emissions (United Nations Framework Convention on Climate Change (UNFCCC), 2018), owing to both process emissions and energy demands (Allwood and Cullen, 2012). Therefore, the bulk of concrete is comprised of aggregates, which can be classified as either "fine" (sand), or "coarse" (gravel and rock). Once concrete is mixed and placed in a mold, compounds in the cement undergo hydration reactions with water (Neville, 2011), forming a complex microstructure of pores, hydrated cement paste, and aggregates. This gives the concrete its "hardened state" properties including resistance to both applied loads (strength, stiffness), and aggressive substances (durability). 
In this era of increasing environmental CO\({}_{2}\) concentrations, the interaction of CO\({}_{2}\) with the built environment is of increasing interest (Talukdar and Banthia, 2016; Jiang et al., 2018). In particular, there exists a chemical reaction of carbon dioxide (CO\({}_{2}\)) with hydration products in concrete (Papadakis et al., 1989), which is referred to as concrete carbonation. The CO\({}_{2}\) diffuses into the concrete from the surrounding environment. The resulting carbonated material has a changed microstructure compared to noncarbonated concrete (Groves et al., 1991; Greve-Dierfeld et al., 2020), and a lower alkalinity of the pore water (Papadakis et al., 1989). Therefore, this carbonation can lead to structural damage since this loss of alkalinity facilitates corrosion of the internal steel reinforcement, which is required to carry tensile loads in concrete structures (Page, 2007). Ultimately, carbonation is one of the primary deterioration mechanisms of steel-reinforced concrete structures, making it a significant area of concern. Specification of the most appropriate concrete for a particular application can be carried out using a performance-based or prescriptive approach. Where a performance-based concrete specification process is implemented, targets are placed on concrete output properties, such as strength or carbonation coefficient. This is opposed to a prescriptive concrete specification approach, which limits the input parameters such as cement content or water/cement ratio (the principle applied by Eurocode BS EN 206:2013+A1 (British Standards Institution, 2016)). At present, concrete mix design methods (Abrams, 1918; ACI Committee 211, 1991; Teychenne et al., 1997; Wilson and Kosmatka, 2011) use empirically derived relationships between mix ratios and concrete properties, in particular the strength of the concrete, to proportion mixes. 
In any large-scale concrete application following a performance-based approach, a variety of mixes are designed using one of these methods and then trial mixes are cast to verify that the final properties of a particular mix are acceptable. Performance-based specification can, therefore, lead to more efficient, optimal mixes being selected (Wally et al., 2022). However, the trial-and-error process can be both time-consuming and costly. Machine learning offers an opportunity to capture complex multi-dimensional relationships between the inputs (mix proportions) and outputs (material properties), and to reduce the need for a trial-and-error approach in concrete design. Machine learning uses historical data to train a model, which later can be used to predict quantities of interest. This approach has been used to predict properties of many different materials (Bhadeshia et al., 1995; Sourmail et al., 2002; Agrawal et al., 2014; Ward et al., 2016; Ward et al., 2017; Kim et al., 2018), including concrete (Taffese et al., 2015; Ben Chaabene et al., 2020; Prayogo et al., 2020; Liu et al., 2021; Tran et al., 2022), proving its capability and versatility. Here we turn to machine learning to design concrete. Machine learning methods can also utilize the uncertainty in predictions of target variables to focus on the most robust predictions. For example, the use of uncertainty has been extensively demonstrated and experimentally verified for design of materials most likely to fulfill target criteria (Conduit et al., 2017, 2018, 2019). Furthermore, values of uncertainty itself can be useful for predicting the quantity of interest (Goujon, 2009; Zerva et al., 2017; Zhang, 2020; Zviazhynski and Conduit, 2022). In concrete, the appearance of randomly distributed aggregates of different shapes and sizes may be considered similar to white noise, leading to variability and uncertainty in properties such as carbonation depth and compressive strength. 
Information can be extracted from uncertainty in some of the measured properties, and used to predict other properties to allow better insights and, ultimately, better mix design. In this work, we develop a methodology capable of extracting information from uncertainty. We use the methodology to select concrete mixes that have specific desired properties for sustainability and durability targets, providing a template for future machine learning applications to systems where there is information hidden in the noise. In this article, we first explore the relationships between measured properties of concrete mixes prepared in a laboratory study in Section 2. We then describe the machine learning methodology capable of extracting information out of noise to help predict properties of concrete in Section 3. We next describe the application of this methodology to concrete specification in Section 4 and validate our proposal on experimental results in Section 5. Finally, we discuss the future applications of the methodology to concrete-related areas and beyond in Section 6. ## 2 Relationships Between Concrete Properties To build our machine learning model on a solid foundation, we first explore the relationships between various concrete properties using empirically collected data from 21 laboratory concretes, including two mixes reported in Forsdyke and Lees (2021). The dataset (provided in the Supplementary Material) contains information about the proportions of constituent materials (cement, gravel, sand, and water) in each batch, as well as properties of interest (carbonation coefficient, environmental impact, strength, density, and cost) for the resulting concrete. In all concretes in this publication, the gravel used is crushed limestone with the maximum particle size of 10 mm, and the sharp sand used has a maximum particle size of 4 mm (with 80% of sand particles smaller than 1 mm). 
The mixes were originally proportioned using the BRE method (Teychenne et al., 1997), which uses an estimated mass per cubic meter, also given. As a measure of concrete durability, 16 of the 21 concretes were exposed to a 4% CO\({}_{2}\) environment. The carbonation depth was measured over time from a minimum of three samples of each mix to produce a carbonation profile from which the carbonation coefficient, \(K\), was calculated. A low carbonation coefficient represents high resistance to carbonation. Another variable, preconditioning time, refers to the period between water curing and elevated CO\({}_{2}\) exposure for those specimens where carbonation performance was measured. An estimate of environmental impact in the form of kg embodied CO\({}_{2}\) per kg of concrete (kgeCO\({}_{2}\)/kg) was also provided, as well as compressive cube strength of the concrete; water saturated density measured in air; and estimated cost per kg (£/kg). The correlations between the properties from the concrete dataset, measured with the Pearson correlation coefficient, are shown in Figure 1. The first nine rows represent input properties such as mix proportions and estimated mass per cubic meter assumed before the concrete has been cast. The final five rows represent measured properties of the resultant concrete. Correlations of particular interest are highlighted by the boxes (a-k) in Figure 1. Overall, Figure 1 shows that there are correlations between many of the variables in the concrete dataset, which can be exploited to fit a machine learning model to the data and make predictions. We now discuss notable correlations for each individual property. ### Carbonation coefficient A strong negative correlation between carbonation coefficient and cement content is observed at (a) in Figure 1. 
This is because higher proportions of cement provide higher volumes of carbonatable material in the concrete matrix once hydrated (Papadakis et al., 1991), increasing the time for carbonation to reach a particular depth. Strong positive correlations are observed between carbonation coefficient and crushed gravel content, (b), and between carbonation coefficient and total aggregate/cement ratio, (c). This is also logical when considering that a fixed volume of hydrated cement paste will penetrate deeper into a concrete when more aggregates are suspended in it. ### Environmental impact It is well established that cement is the component of concrete responsible for the majority of its environmental impact. In contrast, the relative embodied carbon of the aggregates, acting as inert filler materials (Neville, 2011), is low. This is confirmed in Figure 1 by the strong positive correlation between environmental impact and cement content, (d), and the strong negative correlation between environmental impact and total aggregate/cement ratio, (e). With such strong correlations, we expect machine learning predictions for environmental impact to be more accurate than for other target variables, which have weaker correlations with composition variables. Environmental impact in this study is calculated assuming constant embodied emissions for each constituent component. Therefore, environmental impact is a linear function of concrete mix proportions. Generally, however, embodied emissions of each component depend on the manufacturing process, transport to the site, and solid waste generation (Babor et al., 2009; Sousa and Bogas, 2021). The effect of the manufacturing process, transportation, and solid waste generation on environmental impact cannot be evaluated analytically and so this study represents a template of how machine learning could be used to predict environmental impact.

Figure 1: Correlation map for concrete properties in the training dataset. Light colors correspond to strong positive correlations, dark colors correspond to strong negative correlations, and intermediate colors correspond to weak correlations, as per the color scale shown. Green boxes highlight notable correlations. Properties written in dark pink are intermediate quantities derived from concrete mix proportions and properties written in light pink are target variables.

### Compressive strength Figure 1 also demonstrates negative correlations between compressive strength and water/cement ratio, (f), and between compressive strength and total aggregate/cement ratio, (g). Concrete strength is widely considered to be a function of pore structure (Powers, 1958; Pantazopoulou and Mills, 1995; Chen et al., 2013). Above the minimum required for full cement hydration (Powers, 1958; Aitcin, 2016), increasing the water/cement ratio leads to loss of concrete strength due to microscopic pores formed by water molecules that are not chemically bound in hydration products (Pann et al., 2004). Increasing the total aggregate/cement ratio similarly decreases strength due to loss of bond between the cement matrix and aggregates (Poon and Lam, 2008) and may change the degree of compaction (Marar and Eren, 2011) that leads to lower strength (Mindess et al., 2003). A strong negative correlation between strength and carbonation coefficient is also observed (h), supporting the validity of models that use strength to predict carbonation behavior (Silva et al., 2014; Forsdyke and Lees, 2021). ### Density Density does not appear to have any strong correlation to the input parameters, which we envisage to make the application of machine learning to predict density more difficult than the other target variables. The strongest relationship observed is a negative correlation to the absolute volume of coarse aggregate, crushed gravel, (i). 
Density of concrete in the hardened state is influenced by the particle-size distribution having allowed effective compaction in the wet state (Sims et al., 2019). Total aggregate content varies minimally between the mixes in this study. Therefore, higher quantities of coarse aggregate correspond to lower quantities of fine aggregate within the data set (as seen by the strong negative correlation between sharp sand and crushed gravel). It is possible that this leads to less optimal particle packing, which in turn leads to the lower density observed here for mixes with higher coarse aggregate volumes. Other information not captured here, such as the particle size distribution and smoothness of the aggregates, may also influence density, leading to noise in this data.

### Cost

A strong positive correlation is observed between cost and cement content, (j), whereas a strong negative correlation is observed between cost and total aggregate/cement ratio, (k). This is expected, since cement is the most expensive component of concrete, whereas aggregates act as a filler material and are much cheaper. As for environmental impact, we expect machine learning predictions for cost to be more accurate than for other target variables. Cost in this study is calculated assuming a fixed price for each constituent component. Therefore, cost, like environmental impact, is a linear function of concrete mix proportions. Generally, however, the price of each component depends on regulation, inflation, and supply (Velumani and Nampoothiri, 2018; Ma et al., 2022). The effect of these factors on cost cannot be evaluated analytically and so this study serves as a foundation showing how machine learning could be used to predict cost.

## 3 Machine Learning Methodology

Machine learning algorithms are trained on preexisting data to make predictions. In this case, the predicted quantities are the properties of concrete mixes.
A few examples of widely used machine learning algorithms are _k_-means clustering (Hastie et al., 2001), neural networks (Heskes, 1997), and Gaussian processes (Tancret, 2013). In this article, we use the random forest algorithm (Pedregosa et al., 2011), as it is computationally cheap and robust for finding nonlinear relationships. We compare a single-layer linear model with a single-layer random forest model and our two-layer random forest model that can extract information from the noise present in concrete data. Figure 2 shows the flow of information through these three model types.

### Single-layer linear model

The most basic method for property prediction is to fit a hyperplane to the training data. The hyperplane uses the input values (e.g., concrete mix proportions) to directly predict the final target variable (e.g., strength). Figure 1 showed the strength of linear correlations between input and output properties, motivating this approach. This method, which we call a single-layer linear model, is illustrated by Flows A1 and A2 in Figure 2a. The method is simple and robust; however, it does not capture the nonlinear relationships between concrete properties. If the final target variable has a nonlinear dependence on the inputs, the linear model predictions will not be accurate. Therefore, we also consider a model that captures nonlinearities in the data: a single-layer random forest model.

### Single-layer random forest model

The random forest model is a collection of independent identical regression trees (Loh, 2011). During the training phase, each regression tree learns how to map the input variables to the target variables. The random forest algorithm computes the prediction and its uncertainty by bootstrapping (Hastie et al., 2001). When bootstrapping a dataset with \(N\) entries, new subsets are generated by sampling \(N\) entries randomly, with replacement. Each subset is then used to train one regression tree in the random forest.
The individual tree predictions are averaged to give the random forest prediction, and their standard deviation is the uncertainty in the prediction. The depth of regression trees in a random forest, and therefore the accuracy of the predictions, is determined by the hyperparameters of the random forest. To achieve the best possible predictions for blind data we tune these hyperparameters, assessing the accuracy of predictions using leave-one-out cross-validation (Hastie et al., 2001) on the training data. For predictions of each property, we use the same hyperparameters to mitigate overfitting. The random forest algorithm can learn and exploit correlations between various target variables and their uncertainties to make more accurate predictions of the final target variable of interest. A single-layer random forest approach follows Flows B1 and B2 in Figure 2b to use only the input features (concrete mix proportions) to predict the final target variables (e.g. strength).

Figure 2: (a) Flowchart for the single-layer linear model. (b) Flowchart for the single-layer random forest model. (c) Flowchart for the two-layer random forest model.

### Two-layer random forest model

As an alternative to the conventional single-layer approach, the flow of information through the two-layer random forest model is shown in Figure 2c. In the two-layer method, we train the first random forest model on the concrete mix proportions (Flow C1) to predict all of the intermediate and output variables, such as carbonation coefficient, alongside their uncertainties (Flow C2), following the approach described in Zviazhynski and Conduit (2022). We then train the second layer random forest model, taking the concrete mix proportions (Flow C3) and the newly predicted variables, including their uncertainty values (Flow C4), to predict the outputs, for example, strength (Flow C5).
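The bootstrapping and two-layer data flow described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: a 1-nearest-neighbour regressor stands in for the regression trees, and all function names, inputs, and numbers below are invented for the example.

```python
import random
import statistics

def nn1_predict(train_X, train_y, x):
    # 1-nearest-neighbour regression: return the target of the closest row.
    i = min(range(len(train_X)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(train_X[j], x)))
    return train_y[i]

def fit_ensemble(X, y, n_models=50, seed=0):
    # Bootstrapping: each member is trained on N rows drawn with replacement.
    rng = random.Random(seed)
    members = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        members.append(([X[i] for i in idx], [y[i] for i in idx]))
    return members

def ensemble_predict(members, x):
    # Mean of member predictions, with their standard deviation as uncertainty.
    preds = [nn1_predict(tx, ty, x) for tx, ty in members]
    return statistics.mean(preds), statistics.pstdev(preds)

# Toy data: input -> noisy intermediate -> final target (values invented).
X = [[i / 10] for i in range(20)]
intermediate = [x[0] * 2 + random.Random(i).uniform(-0.1, 0.1)
                for i, x in enumerate(X)]
final = [10 - z for z in intermediate]

# Layer 1: predict the intermediate variable and its uncertainty (Flows C1-C2).
layer1 = fit_ensemble(X, intermediate)
# Layer 2 features: inputs plus predicted intermediate and uncertainty (C3-C4).
X2 = []
for x in X:
    mu, sigma = ensemble_predict(layer1, x)
    X2.append(x + [mu, sigma])
layer2 = fit_ensemble(X2, final)

# Predict the final target for a new input (Flow C5).
mu, sigma = ensemble_predict(layer1, [0.55])
pred, unc = ensemble_predict(layer2, [0.55, mu, sigma])
```

In practice each member would be a full regression tree (e.g., scikit-learn's random forest); the point here is only the data flow: the second layer sees the first layer's prediction and its bootstrap spread as extra features.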
Using an intermediate variable and its uncertainty as an input for the second layer random forest model is particularly useful if the intermediate variable is cheaper to measure than the target variable. For example, strength is both more economical to measure and more commonly measured in concrete than carbonation coefficient. Therefore, strength can be used as an intermediate variable in the two-layer random forest model to predict carbonation coefficient, without the need for time-consuming carbonation experiments. On the other hand, to assess the condition of an existing structure, taking a large core from the structure to measure compressive strength can be particularly destructive. Instead, an intermediate quantity, such as carbonation depth, can be measured from smaller, less invasive samples, to estimate the strength with less damage to the structure. The strategy of using the expected value of an intermediate variable to help predict the final variable has previously been successfully applied to materials and drugs design to exploit relationships between various properties (Vergoort et al., 2019; Irwin et al., 2020; Mahmoud et al., 2021), but we now use uncertainty in the intermediate variable. The uncertainty in the intermediate variable that machine learning should capture arises from the inevitable scatter in training data due to experimental uncertainty. There are two methods to quantify the uncertainty in the intermediate variable. The first method is to calculate the standard deviation of predictions of regression trees in the random forest, and this quantity is used in the two-layer random forest model. The second method is to calculate the standard deviation of the experimental measurements. In both cases, the uncertainty in a given variable has the same unit as this variable. To demonstrate the utilization of uncertainty in the two-layer random forest model, we consider an example shown in Figure 3. 
Here, for a given value of cement content \(X\), the first layer random forest model predicts the corresponding value of carbonation coefficient \(Y\) and its uncertainty \(\sigma_{Y}\), as shown in the leftmost plot in Figure 3. Then the second layer random forest model utilizes the uncertainty in carbonation coefficient to predict the corresponding value of compressive strength, \(Z\), as shown in the middle plot in Figure 3. Compressive strength is a decreasing function of uncertainty in carbonation coefficient (Spearman correlation of \(-0.79\)), which enables the two models to work together to predict compressive strength for a given cement content, as shown in the rightmost plot in Figure 3.

Figure 3: Utilization of uncertainty in the two-layer random forest model. Starting with cement content \(X\), the model utilizes uncertainty in carbonation coefficient to predict compressive strength \(Z\).

### Training and testing the model

First, the hyperparameters of the three alternative models were tuned using leave-one-out cross-validation. In this method, for each entry in the existing data on concrete mixes, the prediction of the concrete property (e.g., strength) is made by the model trained on the rest of the existing data (Figure 4a). Then these predictions are compared against the true values to obtain the leave-one-out cross-validation \(R^{2}\), the coefficient of determination (which ranges from 1 for perfect predictions to \(-\infty\) for arbitrarily inaccurate predictions), which is to be maximized. The leave-one-out cross-validation \(R^{2}\) values for each property after hyperparameter tuning of each model can be seen in the table in Figure 4b. The single-layer linear model gives good predictions for environmental impact, compressive strength, and cost, since these are approximately linear functions. However, linear model predictions of carbonation coefficient and density are inaccurate.
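The leave-one-out cross-validation \(R^{2}\) described above can be sketched as follows. This is a minimal illustration, not the paper's code: a 1-nearest-neighbour regressor stands in for the tuned models, and the data are synthetic.

```python
def loo_r2(X, y, predict):
    # Leave-one-out: predict each entry from a model "trained" on the rest.
    preds = []
    for i in range(len(X)):
        Xtr = X[:i] + X[i + 1:]
        ytr = y[:i] + y[i + 1:]
        preds.append(predict(Xtr, ytr, X[i]))
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean_y = sum(y) / len(y)
    ss_res = sum((p - t) ** 2 for p, t in zip(preds, y))
    ss_tot = sum((t - mean_y) ** 2 for t in y)
    return 1.0 - ss_res / ss_tot

def nn1(Xtr, ytr, x):
    # 1-nearest-neighbour regression on a scalar input.
    i = min(range(len(Xtr)), key=lambda j: abs(Xtr[j] - x))
    return ytr[i]

X = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]   # e.g., water/cement ratios
y = [2 * v + 1 for v in X]           # a noiseless linear toy property
r2 = loo_r2(X, y, nn1)               # about 0.66 for this toy model
```

Because every held-out point is predicted by its neighbour 0.1 away, each prediction is off by 0.2, giving \(R^{2}=1-0.24/0.70\approx 0.66\) here; a perfect model would score 1.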
The single-layer random forest model improves on those predictions at the expense of environmental impact, strength, and cost prediction accuracy. As expected from the data analysis in Section 2, the highest values of cross-validation \(R^{2}\) are achieved for environmental impact and cost by all three models and the lowest values are generally achieved for density. The two-layer model, capable of exploiting correlations between target variables and the uncertainties in them, therefore outperforms the single-layer linear model and single-layer random forest model. This demonstrates the necessity of data-driven machine learning for property predictions and the viability of the approach even when the dataset only contains 21 samples. The hyperparameters of the single-layer linear model, single-layer random forest model, and two-layer random forest model can be found in the Supplementary Material.

Figure 4: (a) Schematic of leave-one-out cross-validation. Blue squares are entries in the existing data and magenta squares are the test entries for each fold. (b) Table of leave-one-out cross-validation \(R^{2}\) values for property predictions. Numbers in bold are the best \(R^{2}\) values for a given property.

To evaluate the effect of intermediate variables, we furthermore test a single-layer random forest model that has access to values of the intermediate variables taken from the experimental data to predict the target variable. This model is known as an imputation model and has been successfully used in materials and drug discovery (Vergoort et al., 2019; Whitehead et al., 2019; Irwin et al., 2020, 2020; Mahmoud et al., 2021). Imputation models exploit correlations between the intermediate variables and the target variable and therefore achieve higher cross-validation \(R^{2}\) than both a single-layer linear model and a single-layer random forest model.
The imputation model does not need to predict intermediate variables, so it does not propagate the error in them, which leads to better cross-validation \(R^{2}\) than the two-layer random forest model. However, the imputation model requires measured concrete properties as inputs, so it is unsuitable for designing and predicting properties of a novel concrete mix. Therefore, for the remainder of this study, we adopt the two-layer random forest model. The two-layer random forest model has access to cement type, concrete mix proportions, ratios of concrete mix proportions, assumed mass/m\({}^{3}\), and preconditioning time. The relative importances of each feature for predicting the corresponding target variable are presented in the Supplementary Material. Having trained the two-layer random forest model with the tuned hyperparameters on all of the existing data on concrete mixes, the model can now be used to predict the target variables and the uncertainties in them for unseen mixes. The probabilities of the unseen mixes satisfying the set target criteria would then be calculated; the mixes with the highest probability of satisfying the given targets would be experimentally validated.

## 4 Concrete Specification

With the machine learning model in place, we are well-positioned to explore completely unseen concrete mixes. Below, we specify two sets of challenging target properties required of the concrete. To emulate real-life usage on a construction site, where local material availability will determine the composition of possible mix designs, we use machine learning to seek the best mix from a family of hypothetical mixes over a grid of water/cement ratio values. We first specify a total water content of 205 kg/m\({}^{3}\), chosen assuming constant workability requirements of these mixes.
The hypothetical mixes are then proportioned following the BRE method of mix design (Teychenne et al., 1997), which uses the relative density of the aggregates (2.7 in this case) and the percentage of fine aggregate passing through a 600 micron sieve (70% in this case) to calculate the proportions of the remaining constituents and total density of the concrete for each water/cement ratio. This ensures the proportions of all potential mixes are realistic, and the mixes are viable. Machine learning is then used to explore this space of mixes, and allows for the prediction of extensive properties of these mixes which would otherwise need to be experimentally obtained. For the first target mix, Low-\(K\), we focus on minimizing carbonation coefficient and therefore providing high resistance to carbonation. This mix would be ideal for structural applications with severe exposure conditions, where a high resistance to carbonation is required to protect steel reinforcement from carbonation-induced corrosion. For the second target mix, Low-\(E\), we focus on minimizing environmental impact whilst maintaining a reasonably low carbonation coefficient, which is challenging as these properties are negatively correlated. This mix would be ideal for large-scale structural applications with moderate exposure conditions, where low environmental impact is desired. For both scenarios, we also seek to satisfy constraints on minimum strength, maximum density, and maximum financial cost. A summary of the two sets of target criteria is presented in Table 1. We now calculate the probability of successfully meeting the target criteria for each hypothetical mix so that we can focus on the most robust mixes that will work in practice. 
First, for each target variable (carbonation coefficient, environmental impact, compressive strength, density, and cost) of a given hypothetical mix, we construct the probability distribution curve of machine learning predictions that reflects the uncertainty in the prediction. Then, we evaluate the area under this curve within the target region. Finally, we multiply the areas obtained for each target variable to give the probability of the given mix successfully meeting the target criteria. We seek the mix that maximizes the probability of success. Figure 5 demonstrates the probability of successfully meeting the target criteria for the family of hypothetical mixes with water/cement ratios ranging from 0.4 to 0.95. The water/cement ratio in particular is varied as this was shown in Figure 1 to be an important variable in the mix parameters. The lines are plotted using a cubic spline fit. From the potential mixes, the 0.6 water/cement ratio mix (NC0.6) and the 0.8 water/cement ratio mix (LC0.8) were identified. In general, increasing the water/cement ratio improves the environmental impact, whereas reducing the ratio improves the carbonation resistance. A compromise between these two factors that fulfills the Low-\(K\) criteria outlined in Table 1 with a high probability of success is achieved by NC0.6. For the Low-\(E\) criteria, the accurate model for environmental impact (see the table in Figure 4b) with small uncertainty drives the sudden increase in probability of success seen in Figure 5 above a water/cement ratio of 0.75. LC0.8 is selected to be sufficiently far from the boundary to circumvent experimental error in mixing whilst retaining a high probability of success. The two selected mix designs are given in Table 2. The concrete cubes were cast and finished with a trowel to achieve a flat exposed face.
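The probability-of-success calculation described above can be sketched as follows: each predicted property is treated as Gaussian (mean = prediction, standard deviation = model uncertainty), the area inside the target region is a normal CDF, and the per-property probabilities are multiplied. The predictions, uncertainties, and limits below are invented for illustration, not the paper's values.

```python
import math

def p_below(mu, sigma, upper):
    # P(X <= upper) for X ~ Normal(mu, sigma^2), via the error function.
    return 0.5 * (1.0 + math.erf((upper - mu) / (sigma * math.sqrt(2.0))))

# (prediction, uncertainty, kind, limit) for each target variable.
targets = [
    (7.0, 1.0, "max", 8.0),     # carbonation coefficient below a maximum
    (0.12, 0.01, "max", 0.15),  # embodied CO2 below a maximum
    (42.0, 3.0, "min", 35.0),   # compressive strength above a minimum
]

p_success = 1.0
for mu, sigma, kind, limit in targets:
    p = p_below(mu, sigma, limit) if kind == "max" else 1.0 - p_below(mu, sigma, limit)
    p_success *= p  # independent criteria multiply
```

Multiplying assumes the prediction errors for the different properties are independent; correlated errors would require a joint distribution instead.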
The concrete cubes were removed from the molds following an initial setting period of 24 hr, during which they were covered with polythene sheeting to prevent moisture loss, and then cured under water at 20\({}^{\circ}\)C until 28 days old. During casting, a trace amount (0.2% by mass) of dye was added to the LC0.8 mix for experimental reasons. We measured carbonation coefficient, compressive cube strength, and water-saturated density in air. Cost and embodied carbon of each of these mixes were calculated using coefficients for each of the constituent materials from a published cost estimation spreadsheet (Fibo Intercon, 2019) and the ICE database (Jones and Hammond, 2019), respectively. Experimental uncertainty of 0.7% is assumed for measurements of mass (deriving from a 0.1 kg precision of the weighing scales for a mass measurement of 15 kg). For other experimental values, uncertainty is quantified through repeat measurements.

### Validation of individual properties

In this section, we measure and validate each of the five properties of the two concrete mixtures. The machine learning predictions of the five properties (carbonation coefficient, environmental impact, strength, density, cost) and their experimental values, for both concrete mixes, are presented in Figure 6. For both concrete mixes, all the predictions agree with the corresponding experimental values within standard error. We now discuss the measurement and validation of each individual property.
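The two experimental-uncertainty estimates described above can be sketched numerically: a fixed relative error from the scale precision (0.1 kg on a 15 kg measurement, the quoted 0.7%), and a standard error from repeat measurements. The three cube strengths below are invented example values, not the paper's measurements.

```python
import math
import statistics

# Fixed relative error from the scale precision.
scale_relative_error = 0.1 / 15.0   # ~0.0067, i.e., the 0.7% quoted above

# Repeat measurements: mean, sample standard deviation, and standard error
# of the mean for three hypothetical cube strengths (MPa).
cube_strengths = [41.2, 42.5, 41.9]
mean_strength = statistics.mean(cube_strengths)
std_strength = statistics.stdev(cube_strengths)
sem_strength = std_strength / math.sqrt(len(cube_strengths))
```

With only three repeats the standard error is a rough estimate; more cubes would tighten it in proportion to \(1/\sqrt{n}\).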
\begin{table} \begin{tabular}{l c c} \hline Target criteria & Low-\(K\) & Low-\(E\) \\ \hline Selected mix & NC0.6 & LC0.8 \\ Probability of success & 0.79 & 0.89 \\ Cement (CEM I 52.5N) (\%) & 14.2 & 10.5 \\ Crushed gravel (\%) & 48.9 & 48.2 \\ Sharp sand (\%) & 28.4 & 32.6 \\ Water (\%) & 8.5 & 8.5 \\ Water/cement ratio & 0.6 & 0.8 \\ Total aggregate/cement ratio & 5.5 & 7.7 \\ \hline \end{tabular} \end{table}

Table 2: The two compositions that are each most probable to fulfill their respective target criteria, so are proposed for experimental validation.

Figure 6: Summary of machine learning predictions (orange, hatched) and experimental results (blue) of properties for the two concrete mixes. Bars correspond to standard error regions for both predicted and experimental values. Gray areas correspond to the property targets.

#### 5.2.1 Carbonation coefficient

Carbonation reactions take place in the pores of the cement paste matrix when concrete is exposed to CO\({}_{2}\). The process is modeled as 1-dimensional diffusion according to Fick's first law (Kropp and Hilsdorf, 1995), where the square of the depth of penetration of CO\({}_{2}\), known as the carbonation depth, \(x\), is proportional to the exposure time, \(t\), with the constant of proportionality given by the square of a carbonation coefficient, \(K\), which is itself a function of the concrete's properties and the concentration of environmental CO\({}_{2}\). Including all boundary conditions, the relationship is defined (Moreno, 2013): \[x(t)=\sqrt{x(0)^{2}+K^{2}t}, \tag{1}\] where \(t\) is the time of exposure to a constant CO\({}_{2}\) concentration and \(x(0)\) is the initial carbonation depth at \(t=0\) (often equal to 0 mm). Since the carbonation process is extremely slow at atmospheric concentrations, taking years to reach significant carbonation depths, accelerated carbonation tests are performed under elevated CO\({}_{2}\) concentrations to gauge the relative performance of different concrete mixes.
A test concentration of 4% is typical, resulting in the value of \(K\) herein referred to as the 4% accelerated carbonation coefficient, \(K_{4\%}\). Noncarbonated material is revealed on the freshly split concrete surface in Figure 7 in pink, due to a phenolphthalein indicator solution. Carbonated material remains gray. Aggregates are generally of low permeability to CO\({}_{2}\) compared to the matrix of cement paste and pore space. Therefore, they impede the progression of carbonation through the cement paste and result in a nonuniform tortuous carbonation front (Huang et al., 2012; Shen and Pan, 2017) seen in Figure 7. To account for this tortuosity, the carbonation depth is measured using the method outlined in BS 1881-210:2013 (BSI, 2013) at multiple equidistant locations along the front. The carbonation depth at exposure time \(t\), \(x(t)\), is taken as the average of these measurements, shown in Figure 7. The carbonation depth data was converted into a 4% accelerated carbonation coefficient, \(K_{4\%}\) by curve fitting all available data points of \(x(t)\) versus \(\sqrt{t}\) for each mix to equation (1), shown in Figure 8, using the curve fitting method from BS EN 12390-12:2020 (BSI, 2020). The error in carbonation coefficient arises from variability of \(x(t)\) across the carbonation front. To estimate this error, we curve fit \(x(t)+\sigma_{x}(t)\) versus \(\sqrt{t}\), where \(\sigma_{x}(t)\) is standard deviation of carbonation depth at a given \(t\), and obtain the upper bound for carbonation coefficient. We then subtract the \(K_{4\%}\) value from the upper bound to estimate the error in carbonation coefficient. The same estimate can be obtained using the lower bound for carbonation coefficient, as the latter is assumed to be normally distributed. It can be seen in Figure 6 that the machine learning predictions of carbonation coefficient agree with the experimentally measured values within standard error. 
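The curve fit described above can be sketched for the simplest case of equation (1) with \(x(0)=0\), where \(x(t)^{2}=K^{2}t\) and \(K^{2}\) is the slope of a through-origin least-squares fit of \(x^{2}\) against \(t\). The depth readings and \(K\) value below are synthetic, not the paper's measurements, and the full BS EN 12390-12 procedure also handles a nonzero initial depth.

```python
import math

def fit_K(ts, xs):
    # Through-origin least squares: slope of x^2 vs t, K = sqrt(slope).
    num = sum(t * x * x for t, x in zip(ts, xs))
    den = sum(t * t for t in ts)
    return math.sqrt(num / den)

# Synthetic depths consistent with equation (1) and x(0) = 0.
ts = [7.0, 14.0, 28.0, 49.0]              # days of exposure
xs = [1.5 * math.sqrt(t) for t in ts]     # carbonation depths in mm
K = fit_K(ts, xs)                         # recovers 1.5 mm/sqrt(day)
```

With noisy depths, refitting against \(x(t)+\sigma_{x}(t)\) as described above gives the upper error bound on \(K\).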
We note that the machine learning predictions are higher than the corresponding experimental values for both mixes; this may be due to the prevalence of high carbonation coefficient values in the training data.

Figure 7: Concrete sample with aggregate/cement ratio of 6.9, with upper surface exposed to 4% CO\({}_{2}\) for 49 days, other surfaces in contact with the rest of the sample. The carbonation front is revealed using 1% phenolphthalein in ethanol indicator solution (magenta when not carbonated).

#### 5.2.2 Environmental impact

The environmental impact of the newly proposed concrete mixes was estimated with equation (2): \[\mathrm{kgCO_{2}e/kg_{concrete}}=\sum_{i}e_{i}f_{i}, \tag{2}\] where \(e_{i}\) is the embodied CO\({}_{2}\) per kg of the \(i^{\mathrm{th}}\) material, and \(f_{i}\) is the mass fraction of the \(i^{\mathrm{th}}\) material in the concrete mix. The coefficients for embodied CO\({}_{2}\) in each of the constituent materials, \(e_{i}\), were taken from the ICE database (Jones and Hammond, 2019) and are presented in the Supplementary Material. Experimental error in this value is estimated by assuming an error in mass measurements of 0.7%, based on the precision of the scales used for the experimental series of \(\pm\) 0.1 kg for every 15 kg weighed. Figure 6 shows that the machine learning predictions of environmental impact are in excellent agreement with the experimental values. This is because environmental impact is an approximately linear function of the concrete composition proportions and therefore straightforward to predict.

#### 5.2.3 Strength

For each of the newly proposed concrete mixes, following a wet curing period of 28 days, compressive cube strength was measured in accordance with BS EN 12390-3:2019 (BSI, 2019) on three 100 mm \(\times\) 100 mm \(\times\) 100 mm cube specimens, giving a mean achieved cube strength and standard deviation, assuming that cube strengths are normally distributed (Dayaratnam and Ranganathan, 1976).
This is the property most commonly measured from field concretes, as it determines the load-bearing capacity of structures made from this mix. It can be seen in Figure 6 that the machine learning predictions of strength agree with the experimentally measured values within standard error. Machine learning predictions are lower than the corresponding experimental values for both mixes due to the prevalence of lower strength values for the mixes with similar cement content in the training set.

Figure 8: Experimental carbonation results showing the estimate of carbonation coefficient (black linear fit) and standard error bounds (gray regions).

#### 5.2.4 Density

The water-saturated density in air of concrete was measured using the method outlined in BS EN 12390-7:2019 (BSI, 2019b) for cubes of \(100\,\mathrm{mm}\times 100\,\mathrm{mm}\times 100\,\mathrm{mm}\). Three cubes of each mix were measured, and the standard deviation of these measurements was recorded as the experimental error. Low experimental error indicates that the concrete is well mixed in the fresh state, which should also reduce variability of other properties. It can be seen in Figure 6 that the machine learning predictions of density agree with the experimentally measured values within standard error. This is a particular success because the leave-one-out \(R^{2}\) value for the two-layer model in Figure 4b was lower than for the other properties. The machine learning prediction is higher than the experimental value for the NC0.6 Low-\(K\) mix and lower than the experimental value for the LC0.8 Low-\(E\) mix; that is, the machine learning predictions have larger variance than the experimental values. This may be because density varies with the aggregate size, which can differ even for similar mixes, making density difficult to predict.

#### 5.2.5 Cost

The cost of concrete is calculated using equation (3).
\[\mathrm{cost/kg_{concrete}}=\sum_{i}c_{i}f_{i}, \tag{3}\]

where \(c_{i}\) is the price per kg of the \(i^{\mathrm{th}}\) material, and \(f_{i}\) is the mass fraction of the \(i^{\mathrm{th}}\) material in the concrete mix. Experimental error in this value is estimated using the same assumptions as for the experimental error in environmental impact, that is, an error in mass measurements of 0.7%, based on the \(\pm\) 0.1 kg precision of the scales for every 15 kg weighed. Commercially, the pricing of concrete mixes will vary depending on not only mix proportions but also the scale of the project or application. For this reason, representative values have been assumed from Fibo Intercon (2019) for medium size batches to give realistic relative values between concretes for optimization, but these should not be considered absolute. The values used are presented in the Supplementary Material. It can be seen in Figure 6 that the machine learning predictions of cost show excellent agreement with the experimental values. This is because cost, like environmental impact, is an approximately linear function of the concrete composition proportions and therefore relatively easy to predict.

## 6 Discussion

### Ramifications for concrete design and specification

The concretes proposed and experimentally validated in this work demonstrate the potential of the machine learning methodology to predict concrete behavior. Carbonation coefficient, environmental impact, strength, density, and cost are chosen as examples of constraints in concrete specification, but this could be extended to other properties, such as fresh state behavior (e.g. slump, wet density), structural response (e.g. stiffness, flexural tensile strength), or durability behavior (e.g. permeability, porosity).
Performance-based specification is a growing area of research within concrete durability design, including design for resistance to carbonation (Younsi et al., 2011; von Greve-Dierfeld and Gehlen, 2016, 2016; Wally et al., 2022). Performance-based approaches may allow more economical and sustainable structures to be realized (Teplý and Vořechovská, 2009). However, to satisfy performance requirements, concrete properties must be demonstrated through tests such as the compressive strength and accelerated carbonation tests employed in this work. Using knowledge of the performance of previous specimens, machine learning could reduce the need for trial and error through such testing when selecting a mix, saving vital time on construction projects. This also enables selection over multiple different parameters, including environmental impact, meaning that sustainability can be prioritized whilst still fulfilling other requirements. This could herald an era of just-in-time concrete design, with bespoke mixes specified on the construction site that offer optimal properties.

### Future uses of machine learning

The machine learning method employed in this work can operate on any number of input features and target variables. The generic algorithm can be applied to images of concrete (Lemaire et al., 2005), where noise in the carbonation fronts or the distribution of aggregates may contain information about concrete properties and help improve their predictions. The method could also be used to predict the temporal dynamics of concrete properties. This would be useful to specify concrete mixes that satisfy the given targets throughout their life cycle. The approach to extract information from noise has potential applications in areas beyond concrete design as well. One of these areas is autonomous vehicles. Here, uncertainty in the distance measured to an object contains information about its shape or type.
For example, high uncertainty in distance suggests that the object could be a fence, whereas low uncertainty could be characteristic of a wall. The methodology could also be applied to additive manufacturing (Rasiya et al., 2021), where noise in metal powder microstructure can be used to devise the optimal melting process. Another potential application area is information engineering, where noise in the data transmitted by sensors can be used for object tracking (Placzek and Bernas, 2014). Beyond engineering, the methodology could be applied to cancer research, as cancer is known to cause genetic chaos (Calin et al., 2003). The information extracted from this chaos could potentially be used for early cancer detection.

### Conclusions

This work presented the use of a two-layer random forest regression model to select concrete mix designs with the highest probability of successfully achieving various target properties. The model extracts information out of noise, making it particularly applicable to the random distribution of aggregates within the concrete matrix that drives variability of concrete properties, such as strength and carbonation coefficient. The methodology was effective even when trained on sparse data, and gave leave-one-out cross-validation \(R^{2}\) values above 0.50 for difficult-to-predict density and above 0.99 for environmental impact and cost, demonstrating overall superior performance to both a single-layer linear model and a single-layer random forest model. Predictions for two blind mixes were experimentally validated to within standard error. Overall, these results are promising for future use of machine learning that can exploit noise for performance-based design of concrete across multiple properties, as well as for other materials and applications.

The authors acknowledge the assistance of lab technicians in the Civil Engineering building, University of Cambridge, in conducting the experimental work reported here.
There is Open Access to this article at [https://www.openaccess.cam.ac.uk](https://www.openaccess.cam.ac.uk). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.

Conceptualization: J.C.F., B.Z., J.M.L., G.J.C.; Data curation: J.C.F.; Formal analysis: J.C.F., B.Z.; Funding acquisition: J.M.L., G.J.C.; Investigation: J.C.F.; Methodology: J.C.F., B.Z.; Project administration: J.M.L., G.J.C.; Software: B.Z.; Supervision: J.M.L., G.J.C.; Validation: J.C.F., B.Z.; Visualization: J.C.F., B.Z.; Writing--original draft: J.C.F., B.Z.; Writing--review and editing: J.M.L., G.J.C.

G.J.C. declares a potential financial conflict of interest as a Director of the machine learning company Intellegens Ltd. The other authors declare no competing interests exist.

The data relating to this work are given in the Supplementary Material and can also be found at [https://doi.org/10.17863/CAM.92245](https://doi.org/10.17863/CAM.92245).

The authors acknowledge the financial support of the Engineering and Physical Sciences Research Council (Grant Nos. EP/N017668/1 and EP/N509620/1), the Harding Distinguished Postgraduate Scholars Programme Leverage Scheme, and the Royal Society (Grant No. URF; R 201,002). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

The supplementary material for this article can be found at [http://doi.org/10.1017/dec.2023.5](http://doi.org/10.1017/dec.2023.5)
2307.07114
Reexamine the dark matter scenario accounting for the positron excess in a new cosmic ray propagation model
The positron excess in cosmic rays has stimulated a lot of interest in the last decade. The dark matter origin of the extra positrons has attracted great attention. However, $\gamma$-ray searches set very stringent constraints on the dark matter annihilation/decay rate, which has led to great disfavor of the dark matter scenario. In this work, we incorporate recent progress in cosmic-ray propagation and reexamine the dark matter scenario accounting for the positron excess. Recent observations indicate that cosmic-ray propagation in the Milky Way may not be uniform, and diffusion in the Galactic disk should be slower than that in the halo. In the spatial-dependent propagation model, the positrons/electrons are more concentrated in the disk, which leads to a smaller dark matter annihilation/decay rate required to account for the positron excess and also a smaller deficit in the background positron flux. Especially for the $\mu^+\mu^-$ channel, the positron spectrum fits the latest AMS-02 data perfectly and the annihilation rate satisfies all present constraints from $\gamma$-ray and CMB observations.
Xing-Jian Lv, Xiao-Jun Bi, Kun Fang, Peng-Fei Yin, Meng-Jie Zhao
2023-07-14T01:35:43Z
http://arxiv.org/abs/2307.07114v2
Reexamine the dark matter scenario accounting for the positron excess in a new cosmic ray propagation model ###### Abstract The positron excess in cosmic rays has stimulated a lot of interest in the last decade. The dark matter origin of the extra positrons has attracted great attention. However, \(\gamma\)-ray searches set very stringent constraints on the dark matter annihilation/decay rate, which has led to great disfavor of the dark matter scenario. In this work, we incorporate recent progress in cosmic-ray propagation and reexamine the dark matter scenario accounting for the positron excess. Recent observations indicate that cosmic-ray propagation in the Milky Way may not be uniform, and diffusion in the Galactic disk should be slower than that in the halo. In the spatial-dependent propagation model, the positrons/electrons are more concentrated in the disk, which leads to a smaller dark matter annihilation/decay rate required to account for the positron excess and also a smaller deficit in the background positron flux. Especially for the \(\mu^{+}\mu^{-}\) channel, the positron spectrum fits the latest AMS-02 data perfectly and the annihilation rate satisfies all present constraints from \(\gamma\)-ray and CMB observations. ## I Introduction Since the observation of the positron excess in cosmic rays (CR) by PAMELA [1], and its later precise confirmation by AMS-02 [2; 3], a multitude of studies have emerged aiming to resolve its origin. In the literature, two primary conjectures have attracted significant attention: the involvement of dark matter (DM) annihilation or decay within the Galactic halo [4; 5; 6], and the presence of nearby astrophysical sources [7; 8; 9]. These interpretations have been thoroughly investigated. However, attempts to employ DM annihilation as an explanation for the observed excess have encountered challenges.
For instance, the annihilation/decay of DM into pairs of quarks or gauge bosons is ruled out by the absence of corresponding excesses in the flux of cosmic-ray antiprotons [10]. For leptonic channels, the emission of high-energy photons in conjunction with charged leptons would generate discernible signals in systems with high DM densities and low baryon densities, such as dwarf galaxies. Consequently, the absence of such signals in the Fermi-LAT data strongly disfavors the DM-based explanations [11; 12; 13; 14]. Furthermore, the injection of energy resulting from DM annihilation/decay during recombination could impact the cosmic microwave background (CMB). Precise measurements carried out by Planck [15] have imposed stringent constraints on the properties of DM [16; 17], which also conflict with the requirements to explain the positron excess. As a result, complicated DM models have been proposed to reconcile this apparent discrepancy. These attempts include proposals involving velocity-dependent annihilation cross sections, such as the Sommerfeld [18; 19; 20; 21; 22] and Breit-Wigner mechanisms [23; 24; 25; 26; 27], as well as local DM over-density [28]. The field of CR propagation has recently witnessed substantial progress owing to several observations. The identification of TeV halos around some middle-aged pulsars has unveiled diffusion coefficients in the vicinity of these pulsars that are more than two orders of magnitude lower than the Galactic average [29; 30; 31; 32]. Moreover, the spatial magnetic-energy spectrum within the Galaxy suggests that the intensity of magnetic field turbulence within the Galactic disk surpasses that found in the halo [33], which implies a significant reduction in the diffusion coefficient within the Galactic disk.
Collectively, these findings indicate that the diffusion coefficient in the Galactic disk could be significantly smaller than that in the Galactic halo, thereby challenging the assumption of homogeneous diffusion embedded in conventional CR propagation models. To address this issue, a spatially dependent diffusion model, encompassing a slow-diffusion disk (SDD) proximate to the Galactic plane, has been proposed in a previous investigation [34]. This model explains the observed spectral hardening of CRs at energies of several hundred GeV, as reported by many experiments, including ATIC-2 [35; 36], CREAM [37; 38], PAMELA [39], and AMS-02 [40; 41]. Additionally, the model presents a plausible resolution for the relatively low magnitude of local CR anisotropy [42; 43] and addresses other related concerns. The SDD model was also found to predict a higher flux of secondary positrons than the conventional diffusive-reacceleration model, owing to the higher concentration of electrons/positrons in the disk. It is therefore pertinent to explore whether the excess can be attributed solely to DM annihilation within the framework of the SDD model. In this study, we undertake a quantitative analysis of the AMS-02 results within the context of the SDD model. To prevent potential biases resulting from the preselection of background parameters, a global fitting procedure is employed, simultaneously determining both the background and DM parameters. To efficiently sample the high-dimensional parameter space, the Markov chain Monte Carlo [44] method is employed. Additionally, this investigation employs a new electron/positron production cross-section model developed in Ref. [45], which is based on the latest collider data, in order to mitigate biases stemming from hadronic interactions. Furthermore, a charge-sign dependent solar modulation potential is incorporated, since particles of opposite charges explore distinct regions of the solar system [46].
To address systematic uncertainties among different detectors, only the most recent CR data provided by the AMS-02 collaboration are utilized [47; 2; 3]. This paper is structured as follows. In Sec. II, we provide a detailed description of the SDD propagation model and the methodology employed to determine the propagation and source parameters, which serve as the foundation for calculating the background electron/positron flux. We also introduce our setups for DM annihilation/decay within this section. In Sec. III, we present the fitting results obtained under different DM setups, accompanied by comparisons with other DM indirect detection results. Finally, we summarize our findings and offer discussions in Sec. IV. ## II Method ### Description of the global fitting scheme The approach employed for the global data fit follows the methodology presented in our prior investigation of the AMS-02 positron fraction results [48; 4]. Initially, parameters associated with propagation in the SDD model are determined through fitting to the secondary-to-primary ratios, and these parameters remain fixed throughout the study. Subsequently, the injection spectra of protons and helium, which are crucial for calculating the secondary \(e^{\pm}\) spectrum, are obtained by fitting the latest AMS-02 proton and helium data [47]. These parameters are also held constant during the fitting process for the lepton data. In the final step, we perform a fit to the latest AMS-02 lepton data, incorporating both the primary electrons and the electrons/positrons arising from DM annihilation/decay. Notably, we choose to utilize the positron flux \(\Phi_{e^{+}}\) instead of the conventional positron fraction. This selection is motivated by the heightened sensitivity of \(\Phi_{e^{+}}\) to novel physics phenomena, as emphasized by the AMS-02 collaboration [2].
Utilizing \(\Phi_{e^{+}}\) offers a distinct advantage as it is independent of the energy dependence of electrons. The fit also takes into account the combined \(e^{+}+e^{-}\) spectrum. It is worth noting that the analysis excludes AMS-02 lepton data with energies below 7.5 GeV, as these measurements are substantially affected by solar modulation effects, as reported by the AMS-02 collaboration [2]. In each fitting procedure, the Python package cobaya [49; 50; 51] is utilized to implement the Markov chain Monte Carlo technique, enabling the derivation of posterior probability distributions for the parameters based on observational data. Following Bayes' theorem, the posterior probability of a parameter set \(\vec{\theta}\) given the observational data is proportional to the product of the likelihood function \(\mathcal{L}(\vec{\theta})\propto\exp(-\chi^{2}(\vec{\theta})/2)\), which represents the model's fit to the data, and the prior probability \(\mathcal{P}(\vec{\theta})\) of the model parameters prior to the current observations. In this study, we adopt flat (constant) prior probabilities for all model parameters within specified ranges, some of which are logarithmic. Detailed information can be found in the provided tables.
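The sampling scheme just described — flat priors within specified ranges and \(\log\mathcal{L}=-\chi^{2}/2\) — can be sketched as a minimal random-walk Metropolis sampler. This is not the actual cobaya configuration used in the paper; the χ² callable, bounds, and step scale below are illustrative stand-ins.

```python
import math
import random

def log_posterior(theta, chi2, bounds):
    """Flat prior inside `bounds`, so log P(theta | data) = -chi2(theta)/2 + const."""
    for x, (lo, hi) in zip(theta, bounds):
        if not lo <= x <= hi:
            return -math.inf  # zero prior probability outside the allowed range
    return -0.5 * chi2(theta)

def metropolis(chi2, bounds, theta0, steps=4000, scale=0.1, seed=1):
    """Random-walk Metropolis sampler; returns the chain of parameter vectors."""
    rng = random.Random(seed)
    theta = list(theta0)
    lp = log_posterior(theta, chi2, bounds)
    chain = []
    for _ in range(steps):
        proposal = [x + rng.gauss(0.0, scale) for x in theta]
        lp_new = log_posterior(proposal, chi2, bounds)
        # Accept with probability min(1, posterior ratio); out-of-bounds
        # proposals have lp_new = -inf and are always rejected.
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            theta, lp = proposal, lp_new
        chain.append(list(theta))
    return chain
```

For a one-parameter toy model with \(\chi^{2}(\theta)=((\theta-0.5)/0.1)^{2}\) and prior range \([0,1]\), the chain concentrates around 0.5 with spread of order 0.1, mimicking how the fit recovers a well-constrained parameter.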
Footnote 1: [https://cobaya.readthedocs.io/en/latest/](https://cobaya.readthedocs.io/en/latest/) ### CR propagation model Within the diffusive halo, the propagation of CRs can be mathematically described by the diffusion equation [52]: \[\begin{split}\frac{\partial\psi}{\partial t}=& Q(\mathbf{x},p)+\nabla\cdot(D_{xx}\nabla\psi)+\frac{\partial}{\partial p}[p^{2}D_{pp} \frac{\partial}{\partial p}(\frac{\psi}{p^{2}})]\\ &-\frac{\partial}{\partial p}\dot{p}\psi-\frac{\psi}{\tau_{f}}- \frac{\psi}{\tau_{r}}\;,\end{split} \tag{1}\] where \(Q(x,p)\) represents the CR source term, \(\psi=\psi(x,p,t)\) denotes the CR density per unit momentum \(p\) at position \(\mathbf{x}\), \(\dot{p}\equiv dp/dt\) is the momentum loss rate, and the time scales \(\tau_{f}\) and \(\tau_{r}\) characterize fragmentation processes and radioactive decays, respectively. In the framework of diffusive reacceleration, the momentum-space diffusion coefficient \(D_{pp}\) is related to the spatial diffusion coefficient \(D_{xx}\) through the relation \(D_{pp}D_{xx}=4p^{2}v_{A}^{2}/\left(3\delta\left(4-\delta^{2}\right)\left(4- \delta\right)\right)\) [53; 54], where \(v_{A}\) is the Alfvén velocity. Notably, convection is not considered in this work, as previous studies have suggested that it may not be necessary [55; 56]. \begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Best-fit values & posterior 95\% range \\ \hline \(D_{0}(10^{28}cm^{2}s^{-1})\) & 3.379 & [2.986,4.023] \\ \(\delta\) & 0.583 & [0.557,0.608] \\ \(L(\text{kpc})\) & 4.743 & [4.323,5.625] \\ \(V_{2}(\text{km/s})\) & 19.718 & [17.130,21.706] \\ \(\eta\) & -1.299 & [-1.518,-1.099] \\ \(\xi\) & 1.153 & [0.965,1.277] \\ \(h(\text{kpc})\) & 0.468 & [0.406,0.515] \\ \hline \(\chi^{2}_{\text{min}}/n_{\text{dof}}\) & 167.55/265 & - \\ \hline \hline \end{tabular} \end{table} Table 1: The best-fit values and posterior 95% range of all parameters in the SDD model
In the SDD model, the diffusion coefficient in the vicinity of the Galactic plane is suppressed. Consequently, the associated diffusion coefficient \(D_{xx}\) is parameterized as follows: \[D_{xx}(R,z)=aD_{0}\beta^{\eta}(\frac{R}{R_{0}})^{b\delta} \tag{2a}\] \[a=\begin{cases}\xi\,,&|z|\leq h\\ 1\,,&|z|>h\end{cases} \tag{2b}\] \[b=\begin{cases}0\,,&|z|\leq h\\ 1\,,&|z|>h\end{cases} \tag{2c}\] where \(\xi\) is a free parameter to be determined, \(\beta=v/c\) is the particle velocity in natural units, and the factor \(\beta^{\eta}\) describes the effect of the low-energy random-walk process, where \(\eta\neq 1\) is introduced to improve the calculated B/C ratio at low rigidity [55]. The spatial variation of the diffusion coefficient is determined by the scale factors \(a\) and \(b\): \(a\) alters the normalization at the reference rigidity of \(R_{0}=4\) GV, while \(b\) adjusts the slope index. The extent of the slow-diffusion region is characterized by the parameter \(h\). The accurate prediction of secondary \(e^{\pm}\) relies heavily on the interstellar medium gas density and the treatment of energy losses. In this study, we employ the 2D default models implemented in GALPROP v56 [57] to characterize the gas density and to account for the energy losses. The numerical solution takes account of the dominant losses, such as the synchrotron losses on the Galactic magnetic field and the inverse Compton losses on the interstellar radiation fields, for \(e^{\pm}\) detected at energies exceeding approximately 10 GeV. Additionally, adiabatic, bremsstrahlung, and ionization losses, which impact the prediction at lower energies around a few GeV, are also taken into consideration. The interstellar radiation field model utilized in this study is the GALPROP default, which aligns with the more recent estimates [58]. Synchrotron energy losses are computed based on the regular magnetic field proposed in Ref. [59], along with a random component modeled according to Ref. [60].
Footnote 2: Current version available at [https://galprop.stanford.edu/](https://galprop.stanford.edu/) We modify the default GALPROP code to enable the consideration of a spatially dependent diffusion coefficient. Following the methodology outlined in a previous investigation [34], the carbon flux, \({}^{10}\)Be/\({}^{9}\)Be ratio, B/C ratio, and Be/B ratio serve as constraints for the parameters within the framework of the SDD model. The posterior distributions of all parameters are found to exhibit favorable behavior, successfully reproducing the nucleon fluxes and ratios. Table 1 presents the posterior mean values and associated 95% confidence intervals for the model parameters. Before reaching Earth, local interstellar CRs undergo solar modulation effects within the heliosphere. Traditional approaches have relied on the force field approximation (FFA) [61], employing a single solar modulation potential \(\phi\), to account for this phenomenon. However, this approximation assumes a spherical symmetry and overlooks the drift effect caused by the heliospheric magnetic field configuration. Recent studies employing realistic simulations and solving Parker's transport equation demonstrated that this drift effect induces charge-sign dependent behavior in CR spectra [62; 63; 46]. Consequently, employing the FFA with a single modulation potential \(\phi\) proves insufficient in accurately describing all CR particles. In this study, we incorporate the FFA to account for solar modulation effects while considering two modulation potentials \(\phi_{e^{+}}\) and \(\phi_{e^{-}}\) for positrons and electrons, respectively. 
### CR injection sources The detected CR \(e^{\pm}\) particles consist of three distinct components: the primary electrons originating from supernova remnants, the secondary electrons and positrons arising from spallation processes of primary nuclei within the interstellar medium, and the \(e^{\pm}\) pairs generated by exotic sources like DM annihilation or decay. The combined impact of the primary and secondary components is regarded as the background. In this section, we provide a comprehensive overview of the injection CR \(e^{\pm}\) spectra pertaining to both the background and DM annihilation/decay sources. #### ii.3.1 The \(e^{\pm}\) background spectrum The distribution of regular CR sources is expected to align with the radial profile of supernova remnants around the Galactic disk, which can be described as follows: \[f(r,z)=\left(\frac{r}{r_{\odot}}\right)^{a}\exp\left(-b\cdot\frac{r-r_{\odot}} {r_{\odot}}\right)\exp\left(-\frac{|z|}{z_{s}}\right)\, \tag{3}\] where \(r_{\odot}=8.5\) kpc represents the distance between the Sun and the Galactic center, and \(z_{s}\approx 0.2\) kpc denotes the characteristic height of the Galactic disk. In accordance with Ref. [64], we adopt the parameters \(a=1.25\) and \(b=3.56\), which are adjusted to match the observed \(\gamma\)-ray gradient. Regarding the energy dependence of the source term, the shock acceleration theory [65] predicts that the injection spectra of primary CRs follow a power law relation in rigidity: \[q^{i}(R)\propto\left\{\begin{array}{l}\left(R/R_{\rm br}^{i}\right)^{-\nu_{0}^{ i}},\ \ \ \ \ R\leq R_{\rm br}^{i}\\ \left(R/R_{\rm br}^{i}\right)^{-\nu_{1}^{i}},\ \ R>R_{\rm br}^{i}\ \ \ \ \ \,\end{array}\right. \tag{4}\] where \(i\) denotes the species of the nuclei, \(R\) stands for the particle's rigidity, and \(R_{\rm br}\) is a low-energy break introduced to account for the observed low-energy spectral bumps observed in all nuclei. 
The spectral indices below and above the break are denoted as \(\nu_{0}^{i}\) and \(\nu_{1}^{i}\), respectively. Upon adopting the best-fit propagation parameters outlined in Table 1, we further constrain the injection parameters based on the proton and helium flux data. The resulting injection parameters can be found in Table 2. In Fig. 1, we present a comparison between the best-fitting spectrum and the corresponding observational data. Notably, our calculated proton and helium fluxes, both before and after solar modulation, exhibit excellent agreement with observations across the entire energy range. The determination of secondary electrons and positrons is a straightforward process given the known injection spectrum and propagation parameters. In this study, we adopt a parameterization for the production cross section of secondary leptons as presented in Ref. [45], incorporating the latest collider data from experiments such as NA49 [67; 68] and NA61 [69]. Consistent with prior investigations, we introduce a renormalization parameter, denoted as \(c_{e^{\pm}}\), to account for uncertainties arising from factors including the \(e^{\pm}\) production cross section, enhancement factor from heavier nuclei, and uncertainties in propagation. It is also important to note that these uncertainties may not be accurately captured by a constant factor, as they likely possess an energy-dependent nature. The utilization of this constant factor \(c_{e^{\pm}}\) is merely an approximation employed for the data fitting purpose. Regarding the primary electron injection spectrum, we assume a broken power-law relation in rigidity, featuring a low-energy break suggested by synchrotron observations [70; 71; 72]. We fix the position of the break at 5 GV and the spectral index below the break at 1.5, since we do not include data points below 7.5 GV. 
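Equations (3) and (4) are straightforward to encode. A minimal sketch; the best-fit proton break parameters from Table 2 are used only in the illustrative check, not baked into the functions:

```python
import math

def source_profile(r, z, a=1.25, b=3.56, r_sun=8.5, z_s=0.2):
    """Eq. (3): SNR-like spatial distribution of regular CR sources
    (r, z in kpc; normalized to 1 at the solar position r = r_sun, z = 0)."""
    return (r / r_sun) ** a * math.exp(-b * (r - r_sun) / r_sun) * math.exp(-abs(z) / z_s)

def injection_q(R, R_br, nu0, nu1):
    """Eq. (4): unnormalized broken power law in rigidity R [GV], with
    spectral index nu0 below the break R_br and nu1 above it."""
    return (R / R_br) ** (-nu0) if R <= R_br else (R / R_br) ** (-nu1)
```

Writing both branches relative to \(R_{\rm br}\) makes the spectrum continuous at the break by construction, with only the slope changing there.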
In summary, the free parameters governing the background electron and positron spectra are as follows: \[\mathbf{\theta}=\{A_{e},\,\nu_{1},\,\phi_{e^{-}},\,\phi_{e^{+}},\,c_{e^{\pm}}\}\;,\] where \(A_{e}\) is the post-propagated normalization flux of primary \(e^{-}\) at 25 GeV, and \(\nu_{1}\) stands for the spectral index above the spectral break. The solar modulation potentials for the electrons and positrons are denoted by \(\phi_{e^{-}}\) and \(\phi_{e^{+}}\), respectively, and \(c_{e^{\pm}}\) represents the rescaling factor for secondary \(e^{\pm}\). \begin{table} \begin{tabular}{l l l} Parameter & Prior range & posterior 95\% range \\ \hline \(\nu_{0}^{p}\) & [1.4,2.8] & \(2.17^{+0.02}_{-0.04}\) \\ \(\nu_{1}^{p}\) & [1.8,3.0] & \(2.428^{+0.005}_{-0.005}\) \\ \(R_{\rm br}^{p}({\rm GV})\) & [5.0,25.0] & \(13.7^{+1.10}_{-1.1}\) \\ \(A_{p}\) & [2.6,5.4] & \(4.135^{+0.018}_{-0.019}\) \\ \(\nu_{1}^{\rm He}\) & [1.5,3.0] & \(2.377^{+0.005}_{-0.005}\) \\ Abund. He & [8.6,11.8] & \(9.88^{+0.005}_{-0.05}\) \\ \(\phi_{\rm nuc.}({\rm GV})\) & [0.1,0.9] & \(0.709^{+0.029}_{-0.028}\) \\ \end{tabular} \end{table} Table 2: The prior range, best-fit values and posterior 95% range of the proton and helium injection spectra. Figure 1: The fluxes of the proton (top) and helium (bottom) for the parameters shown in Table 2, compared with the latest data from AMS-02 [47] and Voyager [66]. The solid line is the spectrum after solar modulation, while the dashed line is the local interstellar spectrum, in units of kinetic energy per nucleon. #### ii.2.2 \(e^{\pm}\) from DM annihilations The extensive DM halo surrounding the Milky Way provides a distinctive opportunity to explore potential non-gravitational interactions between the DM and standard model particles [73]. If such interactions exist, they could give rise to the production of CRs, presenting an unconventional CR source. Specifically, for CR electrons and positrons, the source term arising from DM annihilation/decay can be expressed as follows: \[\begin{split} Q^{\text{ann.}}_{\text{DM}}(\vec{r},E)& =\frac{1}{2}\left(\frac{\rho_{\text{DM}}(\vec{r})}{m_{\text{DM}}} \right)^{2}\langle\sigma v\rangle\sum_{k}B_{k}\frac{\text{d}N_{e^{\pm}}^{k}}{ \text{d}E}\,\\ Q^{\text{dec.}}_{\text{DM}}(\vec{r},E)&=\left(\frac{ \rho_{\text{DM}}(\vec{r})}{m_{\text{DM}}}\right)\frac{1}{\tau}\sum_{k}B_{k} \frac{\text{d}N_{e^{\pm}}^{k}}{\text{d}E}\,\end{split} \tag{5}\] where the factor \(1/2\) applies when the DM particle is a scalar or a Majorana fermion, \(m_{\text{DM}}\) denotes the mass of the DM particle, \(\langle\sigma v\rangle\) represents the thermally averaged DM annihilation cross section in the case of DM annihilation, and \(\tau\) stands for the DM lifetime in the case of DM decay. The \(e^{\pm}\) production spectrum per annihilation/decay to final state \(k\) with branching ratio \(B_{k}\), obtained from the PPPC 4 DM ID [74], is denoted by \(\text{d}N_{e^{\pm}}^{k}/\text{d}E\). The DM density profile of the Milky Way, \(\rho_{\text{DM}}(r)\), is assumed to follow the Navarro-Frenk-White density profile [75]: \[\rho(\vec{r})=\frac{\rho_{s}}{\left(r/r_{s}\right)\left(1+r/r_{s}\right)^{2}}\, \tag{6}\] where \(r_{s}=20\) kpc and \(\rho_{s}=0.35\) GeV cm\({}^{-3}\) have been selected, resulting in a local DM density of 0.4 GeV cm\({}^{-3}\). This choice of parameters is in agreement with the latest constraints derived from the Galactic rotation curve [76; 77]. Alternative density profiles, such as the Einasto [78; 79] or Burkert [80; 81] profiles, are not taken into account, as they yield a similar \(e^{\pm}\) spectrum at Earth. The propagation of the DM-induced \(e^{\pm}\) is simulated using GALPROP, utilizing the same configuration as for the background \(e^{\pm}\), ensuring a unified treatment.
The free parameters for DM annihilation include the DM particle mass, \(m_{\text{DM}}\) and the thermally averaged DM annihilation cross section \(\langle\sigma v\rangle\). Conversely, in the case of decay, the parameters of interest are the mass, \(m_{\text{DM}}\), and the life-time \(\tau\). Figure 2: The expected spectra of the best-fit results for the case of DM annihilation. The left panel shows the total \(e^{\pm}\) spectra compared with the AMS-02 [3] data, while the right panel shows the \(e^{+}\) spectrum alone with AMS-02 [2] results. The dashed, dotted, and solid lines represent the backgrounds, DM contributions, and total results, respectively. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Prior Range & \multicolumn{2}{c}{\(\tau^{+}\tau^{-}\)} & \multicolumn{2}{c}{\(\mu^{+}\mu^{-}\)} \\ \cline{3-6} & Best & Mean & Best & Mean \\ \hline \(\log(A_{e^{\pm}})\) & [-10.5,-7.5] & -8.942 & \(-8.940^{+0.008}_{-0.009}\) & -8.939 & \(-8.940^{+0.008}_{-0.0098}\) \\ \(\nu_{1}\) & [1.5,4.0] & 2.84 & \(2.84^{+0.02}_{-0.02}\) & 2.84 & \(2.84^{+0.02}_{-0.02}\) \\ \(\phi_{e^{-}}\)/GV & [0.1,1.8] & 0.510 & \(0.524^{+0.061}_{-0.067}\) & 0.532 & \(0.522^{+0.055}_{-0.059}\) \\ \(\phi_{e^{+}}\)/GV & [0.1,1.8] & 0.639 & \(0.652^{+0.067}_{-0.060}\) & 0.711 & \(0.713^{+0.049}_{-0.051}\) \\ \(c_{e^{\pm}}\) & [0.25,4.0] & 1.70 & \(1.71^{+0.06}_{-0.06}\) & 1.78 & \(1.79^{+0.041}_{-0.06}\) \\ \hline \(\log(m_{\text{DM}}/\text{GeV})\) & [1.0,5.5] & 3.35 & \(3.37^{+0.12}_{-0.12}\) & 2.891 & \(2.916^{+0.077}_{-0.077}\) \\ \(\log(\sigma v)^{\text{b}}\)) & [-28.0,-22.0] & -22.95 & \(-22.93^{+0.17}_{-0.17}\) & -23.92 & \(-23.89^{+0.13}_{-0.12}\) \\ \hline \(\chi_{\text{g}^{\pm}}^{2}\) & - & 35.0 & - & 35.8 & - \\ \(\chi_{\text{g}^{\pm}}^{2}\) & - & 50.1 & - & 54.8 & - \\ \(\chi_{\text{tot.}}^{\text{c}}/\text{DoF}\) & - & 85.1/103 & - & 90.6/103 & - \\ \hline \hline \end{tabular} * Post-propagated normalization flux of \(e^{-}\) at 25 GeV in unit 
cm\({}^{-2}\)s\({}^{-1}\)sr\({}^{-1}\)MeV\({}^{-1}\) * In unit cm\({}^{3}\)s\({}^{-1}\) \end{table} Table 3: The prior ranges, best-fit values, mean values, and posterior 95% range of the model parameters for DM annihilation. The number of data points for \(e^{+}\) and \(e^{+}+e^{-}\) are 54 and 56, respectively. ## III Results & Discussion ### DM annihilation We summarize the fitting results in Table 3 and show the corresponding spectra compared with data in Fig. 2. We find that both the \(\mu^{+}\mu^{-}\) and \(\tau^{+}\tau^{-}\) annihilation channels yield reasonable fits to the AMS-02 data, with reduced \(\chi^{2}\) values smaller than one. The \(\tau^{+}\tau^{-}\) channel provides slightly better agreement with the total \(e^{\pm}\) spectrum, exhibiting a more gradual decline at high energies compared to \(\mu^{+}\mu^{-}\) and resulting in improved concordance with three data points in the \(428.5-832.3\) GeV/n range. Since the high-energy end of the electron spectrum suffers severe energy losses and is therefore more easily influenced by nearby sources [82; 83], it is acceptable that the \(\mu^{+}\mu^{-}\) channel does not reproduce these high-energy data points well. Regarding the positron spectrum, although both channels yield comparable results in terms of \(\chi^{2}\) statistics, the \(\tau^{+}\tau^{-}\) channel appears to generate an excess of positrons at the high-energy end because of its more gradual spectral decline. It is important to note that this finding contradicts previous studies conducted with the standard propagation model using older AMS-02 data from the 2011-2015 period [84], where the \(\tau^{+}\tau^{-}\) channel was strongly favored over the \(\mu^{+}\mu^{-}\) channel. This discrepancy can be attributed to two factors. Firstly, the new AMS-02 data (2011-2017) [2] extends to higher energies compared to the previous data, and for the first time, a spectral cut is observed.
Secondly, the SDD model assumes that CRs propagating in the slow disk could contribute a harder component at high energies, which raises the positron flux relative to the standard propagation model, thereby compensating for the hard spectrum of the \(\mu^{+}\mu^{-}\) channel. It is important to note that due to the enhanced production of secondary \(e^{\pm}\) in the SDD model, the rescaling factors required to reproduce the data for secondary \(e^{\pm}\) are significantly smaller than in previous works [4; 5] utilizing the standard diffusion-reacceleration model. Specifically, the rescaling factors for secondary \(e^{\pm}\) in the SDD model are found to be 1.694 and 1.783 for the \(\tau^{+}\tau^{-}\) and \(\mu^{+}\mu^{-}\) channels, respectively, in contrast to previous studies where values of \(c_{e^{\pm}}\) around 3 are typically employed. Another noteworthy consequence of the SDD model's capability to generate a greater number of secondary \(e^{\pm}\) is that the contribution of positrons from the additional source never surpasses that of the secondary component. This finding stands in contrast to the results obtained using the standard propagation model, where the contribution of positrons from the additional source dominates over the secondary component above tens of GeV. As for the solar modulation potential, we obtain reasonable results within the range of \(0.5-0.7\) GV, consistent with the values obtained from the fitting to nuclei data. This represents a significant improvement compared to the results obtained using the standard propagation model, where the modulation potentials are typically larger than 1 GV, which is deemed unphysical based on our understanding of solar modulation from the fitting to nuclei data. An interesting observation regarding the modulation potential is that the potential for positively charged particles, \(\phi_{e^{+}}\), is larger than that for negatively charged particles, \(\phi_{e^{-}}\), by approximately 0.1 GV.
This finding is in agreement with the case of the CR proton and antiproton [85]. ### DM decay Although DM particles are generally assumed to be stable, the potential for DM decay cannot be completely disregarded, particularly if the decay process unfolds over a timescale exceeding the age of the universe [87]. If such decay occurs, the resulting products within the Milky Way halo could potentially account for the observed excess of positrons [88; 89; 90]. In this section, we perform fits to the AMS-02 total \(e^{\pm}\) spectrum and \(e^{+}\) spectrum under the assumption of DM decay. A summary of the fitting outcomes is presented in Table 4, and visual comparison with the data is depicted in Fig. 3. Similar to the case of DM annihilation, both the \(\mu^{+}\mu^{-}\) and \(\tau^{+}\tau^{-}\) decay channels yield satisfactory fits to the AMS-02 data, as indicated by reduced \(\chi^{2}\) values smaller than one. Furthermore, the resulting parameters are well constrained and demonstrate reasonable values similar to those obtained in the annihilation scenario. Specifically, the rescaling factors for secondary \(e^{\pm}\) in both decay channels are approximately 1.8, highlighting the production of leptons in the high-energy region facilitated by the SDD model. Furthermore, the solar modulation potentials for the positron and electron are determined to be approximately 0.7 and 0.5 GV, respectively, emphasizing their distinct impacts on the CR flux. Importantly, it should be noted that the contribution from the additional source never surpasses that of the secondary origins, affirming the dominance of the secondary component under the SDD model. The \(\tau^{+}\tau^{-}\) channel gives a better fit to the total \(e^{\pm}\) spectrum, because the resulting spectrum from decay is slightly harder than that from annihilation, and this is favored by the AMS-02 data.
It is worth noting, however, that caution is warranted in interpreting this improvement in the \(\chi^{2}\) value as having definitive physical significance. As previously discussed, the presence of nearby electron sources can significantly influence the high-energy end of the electron spectrum. Therefore, while the \(\tau^{+}\tau^{-}\) channel yields the best statistical agreement with the data, further investigations and considerations are required to verify the true physical implications of this result. Regarding the positron spectrum, although the \(\tau^{+}\tau^{-}\) channel provides the best fit in terms of the reduced \(\chi^{2}\) value, it is evident that it also overestimates the positron flux at the highest energies accessible to AMS-02, similar to the \(\tau^{+}\tau^{-}\) annihilation scenario. Therefore, with the ongoing enhancement in the precision of positron data, it is possible that the \(\tau^{+}\tau^{-}\) channel may be less favored in future investigations. ### Comparison with other constraints The DM annihilation/decay hypothesis as an explanation for the positron excess is subject to strong constraints imposed by various other observations, including the CMB [16] and \(\gamma\)-ray measurements, which encompass observations such as dwarf galaxy gamma-rays [11], diffuse gamma-rays in the Milky Way halo [12], and the isotropic gamma-ray background (IGRB) [13; 14]. In the case of DM annihilation, attempts to reconcile the tension with these observations have involved the introduction of complex velocity-dependent cross sections, such as the Breit-Wigner mechanism [23; 24; 25; 26; 27]. However, these models are somewhat _ad hoc_ in nature and are unable to circumvent the constraints imposed by IGRB observations. On the other hand, the constraints from \(\gamma\)-ray observations are even more stringent for DM decay scenarios, rendering them exceedingly difficult, if not impossible, to evade.
Given that the SDD model predicts an increased abundance of high-energy secondary \(e^{\pm}\), the required cross section/decay rate for DM to account for the positron excess is smaller than that predicted by conventional models. Consequently, our models have the potential to elude the constraints imposed by various observations. To illustrate this, we present the exclusion limits derived from Fermi observations of dwarf galaxies [11], diffuse gamma-rays in the Milky Way halo [12], IGRB [14], and the Planck CMB observations [16]. In addition, we depict the 68% and 95% confidence regions for the DM mass and the thermally averaged annihilation cross section/lifetime of the DM particle in Fig. 4. For comparison, we also include DM properties from previous works [4; 5; 14] utilizing the standard propagation model to fit the lepton data. The inspection of the top panels of Fig. 4 reveals that for the \(\mu^{+}\mu^{-}\) channel, the required DM annihilation cross section in the SDD model is consistent with all the imposed constraints. Conversely, the standard models' cross sections are excluded by nearly all the constraints. However, when considering the \(\tau^{+}\tau^{-}\) channel, despite the SDD model's cross section being smaller than that of the conventional models, it remains excluded by the majority of observational constraints.

Table 4: Summary of the fitting results for DM decay: prior ranges with best-fit and mean values for the \(\tau^{+}\tau^{-}\) and \(\mu^{+}\mu^{-}\) channels. The fitted parameters include \(\log(A_{e})\) with prior range [-10.5, -7.5], \(\nu_{1}\) with prior range [1.5, 4.0], and the solar modulation potentials \(\phi_{e^{-}}\)/GV and \(\phi_{e^{+}}\)/GV, each with prior range [0.1, 1.8].
This is because the \(\tau^{+}\tau^{-}\) final state produces a larger amount of gamma-ray emissions compared to the \(\mu^{+}\mu^{-}\) channel, resulting in stronger constraints. Additionally, the rescaling factor for the \(\tau^{+}\tau^{-}\) channel is smaller than that of the \(\mu^{+}\mu^{-}\) channel, necessitating a larger cross section to compensate for the lower rescaling factor and achieve agreement with the data. Similar trends are observed in the case of DM decay. The \(\mu^{+}\mu^{-}\) channel appears to be compatible with the available data, while the \(\tau^{+}\tau^{-}\) channel is not. These findings reinforce the notion that the \(\mu^{+}\mu^{-}\) channel, whether in the context of annihilation or decay, exhibits more promising agreement with the data compared to the \(\tau^{+}\tau^{-}\) channel.

Figure 4: \(1\sigma\) and \(2\sigma\) confidence regions in the \(m_{\rm DM}-\langle\sigma v\rangle\) plane, together with the exclusion lines from the Fermi observations of dwarf galaxies [11], diffuse gamma-rays in the Milky Way halo [12], IGRB [14], and the Planck CMB observations [16]. The left and right panels show the results for the \(\mu^{+}\mu^{-}\) and \(\tau^{+}\tau^{-}\) channels, respectively. The upper and lower panels correspond to DM annihilation and decay, respectively. The fitting results to the AMS-02 observations from some previous analyses [4; 5; 14] are shown as _colored points_. Additionally, the thermal relic cross section from Ref. [86] is illustrated as a dashed gray line in the upper panels.

## IV Summary

In this study, we present a quantitative analysis of the CR electron and positron fluxes observed by the AMS-02 experiment, employing the SDD model. Our analysis incorporates the latest advancements in secondary electron/positron production cross sections and accounts for the charge-sign dependent solar modulation potentials. Remarkably, we demonstrate that the SDD model outperforms conventional propagation models in several key aspects. One notable improvement offered by the SDD model arises from its ability to predict an increased population of secondary electron/positron particles at energies above 10 GeV. Consequently, the issue of positron/electron excess is mitigated compared to conventional propagation models. This reduction in excess is exemplified by the rescaling factor, denoted as \(c_{e^{\pm}}\), which is nearly halved when compared to conventional models (reducing from approximately 3 to around 1.7). This substantial improvement is noteworthy since assuming production uncertainties greater than 200% may not be convincing. Another notable advantage of the SDD model lies in its solar modulation potential, which aligns more closely with the modulation potential observed for cosmic nuclei. In contrast, conventional models often necessitate a large modulation potential to compensate for the significant rescaling factor \(c_{e^{\pm}}\). Furthermore, we find that, unlike the conventional propagation models where primary sources dominate the positron spectrum at high energies, the secondary component overwhelmingly contributes to the positron fluxes across all energy ranges within the SDD model. All considered scenarios demonstrate a satisfactory fit to the AMS-02 data, as evidenced by reduced \(\chi^{2}\) values below unity. In the case of DM annihilating into the \(\mu^{+}\mu^{-}\) final states, the corresponding DM mass required is approximately 790 GeV, accompanied by a thermally averaged cross section of around \(1.26\times 10^{-24}\)cm\({}^{3}\)s\({}^{-1}\). Importantly, these parameters remain consistent with the constraints from Fermi and Planck. For DM decay into the \(\mu^{+}\mu^{-}\) final states, the fitting analysis yields a DM mass estimate of roughly 1.4 TeV, coupled with a lifetime of approximately \(1.58\times 10^{27}\) s. Notably, this scenario also conforms to available constraints from other observations.
Conversely, both the annihilation and decay channels associated with the \(\tau^{+}\tau^{-}\) final states are excluded by independent constraints derived from \(\gamma\)-ray and CMB observations. These stringent constraints provide compelling evidence against the \(\tau^{+}\tau^{-}\) channel as a viable explanation for the observed CR electron and positron excess. The agreement of DM annihilation/decay into the \(\mu^{+}\mu^{-}\) final states with the AMS-02 data, coupled with their compliance with the relevant CMB and gamma-ray constraints, underscores their potential as a plausible explanation for the observed phenomena. To pinpoint the precise mechanism behind the excess of positrons, the acquisition of additional data becomes imperative, either through the extension of measurements to higher energy ranges [91] or by reducing statistical and systematic errors to enable a more thorough analysis of spectral fluctuations [9]. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China under the grants No. 12175248, No. 12105292, and No. 2022YFA1604802.
2306.01894
Atmospheric Influence on the Path Loss at High Frequencies for Deployment of 5G Cellular Communication Networks
Over the past few decades, the development of cellular communication technology has spanned several generations in order to add sophisticated features in the updated versions. Moreover, different high-frequency bands are considered for advanced cellular generations. The presence of updated generations like 4G and 5G is driven by the rising demand for a greater data rate and a better experience for end users. However, because 5G-NR operates at a high frequency and has significant propagation, atmospheric fluctuations like temperature, humidity, and rain rate might result in poorer signal reception, and higher path loss effects unlike the prior generation, which employed frequencies below 6 GHz. This paper makes an attempt to provide a comparative analysis about the influence of different relative atmospheric conditions on 5G cellular communication for various operating frequencies in any urban microcell (UMi) environment maintaining the real outdoor propagation conditions. In addition, the simulation dataset based on environmental factors has been validated by the prediction of path loss using multiple regression techniques. Consequently, this study also aims to address the performance analysis of regression techniques for stable estimations of path loss at high frequencies for different atmospheric conditions for 5G mobile generations due to various possible radio link quality issues and fluctuations in different seasons in South Asia. Furthermore, in comparison to contemporary studies, the Machine Learning models have outperformed in predicting the path loss for the four seasons in South Asian regions.
Rashed Hasan Ratul, S M Mehedi Zaman, Hasib Arman Chowdhury, Md. Zayed Hassan Sagor, Mohammad Tawhid Kawser, Mirza Muntasir Nishat
2023-06-02T20:00:40Z
http://arxiv.org/abs/2306.01894v2
Atmospheric Influence on the Path Loss at High Frequencies for Deployment of 5G Cellular Communication Networks ###### Abstract Over the past few decades, the development of cellular communication technology has spanned several generations in order to add sophisticated features in the updated versions. Moreover, different high-frequency bands are considered for advanced cellular generations. The presence of updated generations like 4G and 5G is driven by the rising demand for a greater data rate and a better experience for end users. However, because 5G-NR operates at a high frequency and has significant propagation, atmospheric fluctuations like temperature, humidity, and rain rate might result in poorer signal reception, and higher path loss effects unlike the prior generation, which employed frequencies below 6 GHz. This paper makes an attempt to provide a comparative analysis about the influence of different relative atmospheric conditions on 5G cellular communication for various operating frequencies in any urban microcell (UMi) environment maintaining the real outdoor propagation conditions. In addition, the simulation dataset based on environmental factors has been validated by the prediction of path loss using multiple regression techniques. Consequently, this study also aims to address the performance analysis of regression techniques for stable estimations of path loss at high frequencies for different atmospheric conditions for 5G mobile generations due to various possible radio link quality issues and fluctuations in different seasons in South Asia. Furthermore, in comparison to contemporary studies, the Machine Learning models have outperformed in predicting the path loss for the four seasons in South Asian regions. 5G-NR, mm-wave propagation, path loss, atmospheric influence, NYUSIM, ML ## I Introduction Millimeter-wave is a crucial component of fifth-generation technology.
It is expected that 5G and B5G technology would operate at extremely high frequencies, anything from 1 GHz to 100 GHz. 5G cellular systems require enormous bandwidths that are yet to be accessible in the sub-6 GHz bands of frequencies for the purpose of fulfilling their high data throughput demands. There is a lot of excitement about the potential for 5G cellular coverage in the millimeter-wave bands, which are underutilized yet have vast accessible bandwidths. However, radio propagation at microwave and millimeter wave frequencies is fundamentally dissimilar, which has far-reaching effects on all aspects of the system, including performance and reliability and the system's response to atmospheric fluctuations [1]. In today's world, almost all cellular networks operate in the microwave spectrum. There is, thus, a lot of knowledge and data from previous measurements; nevertheless, there are significantly fewer statistical measures available for the mm-wave bands [2]. Using millimeter waves in wideband communication networks has the potential to completely revolutionize our way of life. As a result, the issue of increasing data consumption rates on mobile devices with decreasing bandwidth would be resolved [3]. Millimeter-wave telecommunications have become an integral feature of the 5G network and have laid the foundation for the next generation of communications networks. Challenges with coverage, connectivity, and quality of service (QoS) are projected to be resolved with the introduction of 5G [4]. The fifth generation mobile network, also known as 5G-New Radio (5G-NR), is a technological advancement that would systematically combine operations such as enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low-latency communication (uRLLC), and their optimal deployment will necessitate appropriate channel modeling. 
However, millimeter waves are influenced by a wide range of atmospheric constituents, including water vapor, fog, and other impediments. This results in scattering, reflection, and diffraction, leading to increasingly severe overall attenuation as the frequency climbs higher. The problem is that millimeter waves can hardly be utilized for long-distance applications. Atmospheric influence and propagation mechanism effects are two of the key obstacles to the expected reliable technological advancement in 5G and B5G. Due to atmospheric attenuation in millimeter wave networks, the received signal level and path loss characteristics change dynamically depending on the combined radio link quality. As the objective of 5G communication technology is to ensure maximum reliability, rapid channel volatility under sharp atmospheric fluctuations might have a detrimental influence on availability, performance, and QoS. Thus, this article makes an attempt to investigate how various meteorological factors could influence the path loss performance of certain mm-wave frequencies specified in 3GPP Release 17 [5]. In order to carry out our investigation, four different frequencies (7.125 GHz, 24.25 GHz, 52.60 GHz, and 71 GHz) were taken into account. The simulations were carried out on the NYUSIM mm-wave channel simulator, which maintains an approximately realistic outdoor environment [6]. For the four major seasonal changes in South Asian countries, the cumulative attenuation of four key atmospheric factors, namely environmental temperature, effective rain rate, overall humidity, and barometric pressure, has been taken into consideration. On top of that, taking into account the overall seasonal fluctuations in the South Asian region, this research compares and contrasts the four anticipated frequency bands for 5G adoption in an urban microcell (UMi) scenario.
In addition, this study can be interpreted as a pilot tutorial simulation particularly in South Asian region for evaluating the general performance characteristics of 5G installation with respect to variable weather conditions, with the aim of identifying and assessing potential atmospheric roadblocks and future opportunities in mm-wave implementations and applications. This article also demonstrates the influence of weather conditions on the path loss effects of the transmitted signal in various environments and scenarios, as well as how the millimeter-wave responds as it travels through free space and the atmosphere. Wireless communication systems heavily rely on path loss prediction to effectively design and optimize the coverage area of a cell, which involves determining the number and placement of base stations. Accurate path loss prediction is also essential for evaluating the system's performance under various scenarios. By accurately predicting path loss, it becomes possible to optimize the transmission power and modulation scheme, leading to improved spectral efficiency, decreased interference, and extended battery life for mobile devices [7]. To enhance the accuracy of path loss prediction for 5G cellular communication deployments, several supervised machine learning algorithms have been incorporated for the selection of regression techniques. This integration of machine learning has the potential to significantly improve the accuracy of path loss prediction and optimize the performance of 5G cellular communication systems. The remaining parts of the paper are organized accordingly: Section 2 represents the literature review. Section 3 elaborates the methodology. Section 4 describes the simulation scenario. Section 5 assesses and discusses the obtained results. Finally, section 6 draws a conclusion to the paper. 
## II Literature Review At mm-wave frequencies, channel noise is relatively higher and is significantly influenced by atmospheric influences, which vary by geographical region and throughout the day. As a result, it is essential to develop accurate models for the channels of mm-wave since the wireless channel shows a considerable effect on the total system performance. In order to validate the appropriate channel model, a number of atmospheric effect studies utilizing 5G candidate frequencies have been conducted. The influence of atmospheric pressure on a 5G channel estimation validated by codes, that used convolution techniques, was introduced in the work of Zhang et al. [7]. However, barometric pressure does not change significantly under different atmospheric conditions and might not reflect overall impact for atmospheric variations. In a recent study by Baihaqi et al. [8], the aim was to quantify the influence of both light and heavy rainfall on the performance of a model of the 5G-NR channel. The study did not account for any other types of weather influences. A study conducted by Mukhlsin et al. [9] demonstrated the impact of varying levels of humidity on the outage performance of 5G communication channels. The results provide important considerations for the design and optimization of 5G communication networks. The impact of rain on millimeter wave communication in tropical regions has been the subject of numerous studies. Two notable examples are the works by Nandi et al. [10] and Budalal et al. [11], which delve into the specifics of how rainfall can affect the performance of these systems. Their research provides valuable insights into the complex interactions between weather conditions and millimeter wave communication, and underscores the importance of accurately characterizing these effects in the design and deployment of 5G networks. 
However, these papers did not consider any particular frequency bands for their research; rather, they demonstrated the impact of rainfall on a range of frequency bands. This highlights the need for further research to explore the effects of weather on specific frequency bands used in 5G networks, as these may exhibit unique performance characteristics under different environmental conditions. On the other hand, in addition to the studies previously mentioned, Kamrul et al. [12] evaluated the performance of MIMO in urban microcells in the city of Dhaka at 28 GHz, taking into account the average weather conditions. The research of Rahayu et al. [13] concentrates on the propagation characteristics at 28 GHz, 73 GHz, and 4 GHz for the 5G channel model, considering the effect of temperature in Bandung, Indonesia. Though sub-6 GHz frequencies might not be a good option for 5G deployments, analyses of the impact of rain and atmospheric gases on mm-wave propagation for 5G wireless communications were performed by Squali et al. across different seasons in various frequency bands [14]. Although the studies discussed above have examined the effects of various atmospheric impairments on mm-wave signals at different frequencies and environmental conditions, they have not taken into account the cumulative impact of all possible atmospheric conditions. Therefore, there is a need for further research that considers the combined effect of multiple atmospheric conditions on the performance of 5G networks in different regions and environments. In this paper, a comprehensive analysis of the cumulative impact of atmospheric variations on 5G cellular communication at high frequencies is presented. Various atmospheric impairments such as atmospheric attenuation, scattering, and absorption, as well as environmental factors such as rain rate, temperature, humidity, and atmospheric pressure, are considered.
To assess the combined impact of these atmospheric conditions, a stability analysis is performed in the estimation of path loss using regression techniques. Multiple regression analysis is employed to develop models that relate path loss to various environmental parameters, and the models are validated using real-world measurements. The study provides innovative solutions to these issues and highlights the importance of considering weather parameters when designing and deploying 5G networks. By taking into account the impact of weather on network performance, network planners and operators can ensure that 5G networks are optimized for the local environment, resulting in improved performance and user experience. Overall, this paper provides valuable insights for researchers and industry professionals working on 5G system design and deployment. ## III Methodology This paper adopts two distinct approaches to analyze the impact of atmospheric conditions on the path loss of 5G cellular communication. The first strategy involves utilizing the NYUSIM package to model accurate real-world simulation scenarios, which take into account the prevailing weather conditions and overall radio link quality, to obtain simulated path loss results. To conduct a detailed evaluation of the 5G network's performance, a comprehensive simulation was carried out using real-time weather data obtained from Weather Spark, an open-source website [15]. The simulation was split into four subsections, each segment emphasizing a different season in order to achieve an in-depth examination of the network's performance under various weather conditions. This approach allows for a precise assessment of the effect of atmospheric conditions on 5G network performance. The second strategy involves incorporating Machine Learning regression models as a validation tool for stability analysis in estimations of path loss at high frequencies for different atmospheric conditions for 5G cellular communication. 
This approach enables us to evaluate the accuracy of path loss predictions for various atmospheric conditions using regression analysis. By comparing the simulated path loss results with the predictions obtained from the regression models, the models can be refined to better account for the impact of different atmospheric conditions on 5G network performance. In addition, a comparison among the different regressor models was also carried out to identify the best-performing model for path loss prediction under varying atmospheric conditions. Furthermore, path loss (PL) is an essential metric to take into account while characterizing the wireless communication channel. At a carrier frequency \(f_{c}\) in GHz, along a free space line with a benchmark distance of 1 m and attenuations due to atmospheric constraints, the following equation describes the typical form of the close-in (CI) free space reference distance path loss model [16]. \[\begin{split}\text{PL}^{\text{CI}}(f_{c},d)[\text{dB}]&=\text{FSPL}[\text{dB}]+10n\log_{10}(d)+\\ &\delta[\text{dB}]+\chi_{\alpha}\end{split} \tag{1}\] In the above formula, the three-dimensional separation distance between the transmitter end and the receiver end is denoted as \(d\), the atmospheric attenuation is represented as \(\delta\), and the path loss exponent term is specified by \(n\). A random Gaussian variable \(\chi_{\alpha}\) along with the standard deviation \(\alpha\) in decibels also contributes to the overall path loss model. As seen from the aforementioned equation, distance and carrier frequency impact this model.
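As a numeric illustration, Eq. (1) can be evaluated directly once the 1 m free-space reference (\(32.4+20\log_{10}f_{c}\) dB, with \(f_{c}\) in GHz) and the attenuation term \(\delta=\alpha d\) are supplied. This is a minimal sketch, not the NYUSIM implementation; the path loss exponent and attenuation factor below are illustrative assumptions, and the shadow-fading term is passed in as a fixed sample rather than drawn at random:

```python
import math

def ci_path_loss_db(fc_ghz, d_m, n, alpha_db_per_m=0.0, chi_db=0.0):
    """CI path loss model of Eq. (1): 1 m free-space reference plus the
    10*n*log10(d) distance term, atmospheric attenuation delta = alpha*d,
    and an optional shadow-fading sample chi (all in dB)."""
    fspl_1m = 32.4 + 20 * math.log10(fc_ghz)  # free-space loss at the 1 m reference
    delta = alpha_db_per_m * d_m              # cumulative atmospheric attenuation
    return fspl_1m + 10 * n * math.log10(d_m) + delta + chi_db

# 24.25 GHz link at 100 m with an assumed path loss exponent n = 2.0:
print(round(ci_path_loss_db(24.25, 100.0, n=2.0), 2))                        # 100.09
# An assumed atmospheric attenuation of 0.001 dB/m adds 0.1 dB over 100 m:
print(round(ci_path_loss_db(24.25, 100.0, n=2.0, alpha_db_per_m=0.001), 2))  # 100.19
```

The distance term dominates at short range, while the \(\alpha d\) term grows linearly with distance, which is why atmospheric effects matter most for the higher frequency bands over longer links.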
Additionally, the equations of free space path loss (FSPL) and attenuation \(\delta[\text{dB}]\) are described as follows [17]: \[\text{FSPL}(f_{c})[\text{dB}]=20\log_{10}\left(\frac{4\pi f_{c}\times 10^{9}}{c}\right)=32.4\,[\text{dB}]+20\log_{10}f_{c} \tag{2}\] \[\delta[\text{dB}]=\alpha[\text{dB}/\text{m}]\times d[\text{m}] \tag{3}\] In these preceding equations, \(f_{c}\) stands for the carrier frequency in GHz, \(c\) indicates the velocity of light, and the attenuation factor \(\alpha\) is in decibels per meter for the frequency bands from 1 to 100 GHz.

## IV Simulation

### _Dataset Creation at Different Frequencies and Atmospheric Conditions using NYUSIM_

The dataset was generated using NYUSIM 3.0 mm-wave software simulation to model the 5G cellular communication system in an urban microcell (UMi) environment considering Southern Asian cities [18]. The simulation was based on real environmental data collected over the course of one year. To simplify the simulation process, four separate seasons were considered: summer, fall, winter, and spring, with each season consisting of three months as shown in Fig. 1. For each season, specific parameters like temperature, rain rate, humidity, and barometric pressure were considered for four different frequencies: 7.125 GHz, 24.25 GHz, 52.60 GHz, and 71 GHz. The simulation was conducted to observe the overall path loss for a range of 10 to 500 meters for each of the four frequencies. The simulation parameters included the number and location of base stations and users, as well as the types of frequency bands used, as shown in Table 1. Overall, the simulation environment was designed to accurately model the real-world conditions of an urban microcell environment in South Asian locations with typical weather, and to provide insights into the relative impact of different atmospheric conditions on 5G cellular communication for various operating frequencies.

Fig. 1: Different atmospheric conditions in South Asia over a year.

### _Comparison of Regression Technique_

#### IV-B1 Dataset Description

The dataset has been produced using NYUSIM software simulations [18]. In the simulated dataset, eleven input features and one output attribute are present. The 'Data Source' and 'Simulation Number' attributes have been disregarded because they have no bearing on the 'Path Loss' output. The description of the dataset is provided in Table 2 below.

#### IV-B2 Data Pre-processing

_Label Encoding:_ The 'Season' attribute included four seasons: 'Spring,' 'Summer,' 'Fall,' and 'Winter.' Due to their string data type, it was required to convert those into numeric data. The prevalent "Label Encoder" method was used for conversion, and it assigned numeric descriptors to each of the categorical values in "Season." Categorical attributes had to be converted to numeric values because ML models cannot process string-type values [19].

_Supervised Models:_ As the output of the dataset is of continuous type, supervised regression models are to be used for training and testing the dataset [20]. For this study, nine regression algorithms have been implemented altogether, and the best results among them are showcased in the results section. The research study employs a diverse set of regression techniques to predict path loss, including linear regression, robust regression, ridge regression, LASSO regression, elastic net, polynomial regression, stochastic gradient descent (SGD), random forest regressor (RF), and support vector machine regressor (SVM). These models were carefully selected based on their ability to capture the complex relationships between T-R separation distance and path loss, considering the specific characteristics of the data and the research objectives. The utilization of multiple regression techniques enhances the robustness and validity of the research findings, providing a solid foundation for the conclusions drawn from the study.
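The label-encoding step described above can be sketched in a few lines. This standalone snippet mimics the usual behaviour of a label encoder (integer codes assigned in sorted-class order, as scikit-learn's `LabelEncoder` does); it is an illustration, not the study's actual preprocessing code:

```python
def label_encode(values):
    """Map categorical string labels to integer codes, assigning codes
    in sorted-label order (the common label-encoder convention)."""
    classes = sorted(set(values))
    mapping = {label: code for code, label in enumerate(classes)}
    return [mapping[v] for v in values], mapping

seasons = ["Spring", "Summer", "Fall", "Winter", "Summer"]
codes, mapping = label_encode(seasons)
print(mapping)  # {'Fall': 0, 'Spring': 1, 'Summer': 2, 'Winter': 3}
print(codes)    # [1, 2, 0, 3, 2]
```

The encoded column can then be stacked with the numeric features (frequency, T-R separation distance, and the weather parameters) to form the input matrix for the regression models.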
## V Result Analysis

### _Software Simulation Result_

The simulation findings demonstrate that the 7.125 GHz frequency exhibited a relatively stable path loss effect across all four seasons, while there was a consistent increase in path loss as the Transmitter-Receiver (T-R) separation distance was expanded. This trend can be attributed to the longer wavelength of 7.125 GHz, which enables it to effectively penetrate obstacles and travel further distances. Conversely, the higher frequencies, namely 24.25 GHz, 52.60 GHz, and 71 GHz, displayed a relatively higher path loss level, regardless of the T-R separation distance. As the T-R separation distance was further extended, the path loss for higher frequencies also increased marginally. This phenomenon is due to the shorter wavelength of higher frequencies, which makes them more prone to atmospheric absorption and scattering. It is noteworthy that no significant variations were observed in path loss effects across the four seasons for all frequencies under examination. This finding suggests that atmospheric conditions, such as rain rate, humidity, and temperature, did not have a marked impact on path loss for the frequencies investigated in this study. Finally, the 7.125 GHz frequency performed well at close distances, where the transmitter and receiver are located nearby, and it also outperformed the other frequencies at greater separation distances, implying that it could be a more suitable option for long-range communication in urban microcell environments. The following four figures (Fig. 2, Fig. 3, Fig. 4, Fig. 5) presented in this research paper depict the path loss versus T-R separation distance under four different seasons. These figures provide visual representations of the impact of T-R separation distance on path loss for the frequencies under investigation in various seasons, including fall, winter, summer, and spring.
### _Path Loss Prediction Result using ML_ The research study evaluates the performance of the regression models using various prediction metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and R-squared (R2). The results are presented in Table 3, and it is evident that the Random Forest Regressor outperforms the other models for this particular dataset. R2 metric, as shown in Fig 6, clearly indicates the superior performance of the Random Forest Regressor. These findings highlight the effectiveness and accuracy of the Random Forest Regressor in predicting path loss in the specific research context. The results provide valuable insights for the selection of the most appropriate regression model for future studies and practical applications in similar scenarios. Table 4 presents a comprehensive comparison of the performance metrics obtained in this study with those reported in previous path loss prediction studies. The findings highlight the superiority of the regression models used in this study, as evidenced by the lower values of MAE, MSE, and RMSE, and the higher value of R2. These results indicate that the proposed models in the present study outperform the models used in previous studies in terms of predictive performance. Fig. 7 presents a graphical comparison of the Root Mean Squared Error (RMSE) performance obtained in the current research study with that reported in other relevant articles on path loss prediction. The superior performance of the regression models utilized in the study is clearly illustrated by the lower RMSE values observed as compared to the values reported in other articles. The lower RMSE values reflect better accuracy and predictive performance of the proposed models in estimating path loss in the specific research context. 
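For reference, the four prediction metrics reported in Table 3 (MAE, MSE, RMSE, and R2) can be computed directly from a set of predictions. The path loss values below are illustrative numbers, not the study's data:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (MAE, MSE, RMSE, R^2) for a set of predictions."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(r) for r in residuals) / n
    mse = sum(r * r for r in residuals) / n
    rmse = math.sqrt(mse)
    mean_y = sum(y_true) / n
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot  # 1 - SS_res / SS_tot
    return mae, mse, rmse, r2

y_true = [100.1, 112.4, 119.8, 125.3]  # illustrative path loss values in dB
y_pred = [101.0, 111.9, 120.5, 124.8]
mae, mse, rmse, r2 = regression_metrics(y_true, y_pred)
print(round(mae, 2), round(mse, 2), round(rmse, 3), round(r2, 3))  # 0.65 0.45 0.671 0.995
```

Lower MAE, MSE, and RMSE and an R2 close to one indicate a better-fitting regressor, which is the basis on which the Random Forest Regressor is judged best here.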
This comparison substantiates the effectiveness of the selected regression techniques in this study and further reinforces the reliability and robustness of the findings presented.

Fig. 4: Path loss vs. T-R separation in spring

Fig. 5: Path loss vs. T-R separation in summer

Fig. 6: Comparison of R2 performance metric among all nine algorithms

## VI Conclusion

The findings of this research paper highlight the importance of frequency selection in wireless communication systems, particularly in urban microcell environments. The simulation results reveal that the 7.125 GHz frequency exhibits relatively stable path loss characteristics across all seasons and performs well in both short and long-range communication scenarios. The use of ML models has been found to have a significant impact on accurately predicting path loss for the frequencies under investigation. Furthermore, the lack of significant variations in path loss effects across different seasons suggests that atmospheric conditions have minimal impact on the frequencies examined in this study. Further research and experimentation in real-world environments are warranted to validate and extend these findings, and to optimize the performance of wireless communication systems in urban microcell environments.
2304.10138
The statistical analysis of the dynamical evolution of the open clusters
We present the dynamical evolution of ten open clusters which were part of our previous studies. These clusters include both young and intermediate-age open clusters with ages ranging from 25$\pm$19 Myr to 1.78$\pm$0.20 Gyr. The total mass of these clusters ranges from 356.18$\pm$142.90 to 1811.75$\pm$901.03 M$_{\odot}$. The Galactocentric distances to the clusters are in the range of 8.91$\pm$0.02 to 11.74$\pm$0.18 kpc. The study is based on the ground-based UBVRI data supplemented by the astrometric data from the Gaia archive. We studied the minimum spanning tree of the member stars for these clusters. The mass segregation in these clusters was quantified by mass segregation ratios calculated from the mean edge length obtained through the minimum spanning tree. The clusters NGC 2360, NGC 1960, IC 1442, King 21, and SAI 35 have ${\Gamma}_{MSR}$ to be 1.65$\pm$0.18, 1.94$\pm$0.22, 2.21$\pm$0.20, 1.84$\pm$0.23, and 1.96$\pm$0.25, respectively which indicate moderate mass segregation in these clusters. The remaining five clusters are found to exhibit weak or no mass segregation. We used the ratio of half mass radius to the tidal radius i.e. R$_{h}$/R$_{t}$ to investigate the effect of the tidal interactions on the cluster structure and dynamics. The ratios of half mass radii to tidal radii are found to be positively correlated with the Galactocentric distances with a linear slope of 0.06$\pm$0.01 having linear regression coefficient r-square = 0.93 for the clusters.
Jayanand Maurya, Y. C. Joshi, Manash Ranjan Samal, Vineet Rawat, Anubha Singh Gour
2023-04-20T07:56:53Z
http://arxiv.org/abs/2304.10138v1
# The statistical analysis of the dynamical evolution of the open clusters

###### Abstract

We present the dynamical evolution of ten open clusters which were part of our previous studies. These clusters include both young and intermediate-age open clusters with ages ranging from 25\(\pm\)19 Myr to 1.78\(\pm\)0.20 Gyr. The total mass of these clusters ranges from 356.18\(\pm\)142.90 to 1811.75\(\pm\)901.03 M\({}_{\odot}\). The Galactocentric distances to the clusters are in the range of 8.91\(\pm\)0.02 to 11.74\(\pm\)0.18 kpc. The study is based on the ground-based UBVRI data supplemented by the astrometric data from the Gaia archive. We studied the minimum spanning tree of the member stars for these clusters. The mass segregation in these clusters was quantified by mass segregation ratios calculated from the mean edge length obtained through the minimum spanning tree. The clusters NGC 2360, NGC 1960, IC 1442, King 21, and SAI 35 have \(\Gamma_{MSR}\) equal to 1.65\(\pm\)0.18, 1.94\(\pm\)0.22, 2.21\(\pm\)0.20, 1.84\(\pm\)0.23, and 1.96\(\pm\)0.25, respectively, which indicates moderate mass segregation in these clusters. The remaining five clusters are found to exhibit weak or no mass segregation. We used the ratio of half mass radius to the tidal radius, i.e. R\({}_{h}\)/R\({}_{t}\), to investigate the effect of the tidal interactions on the cluster structure and dynamics. The ratios of half mass radii to tidal radii are found to be positively correlated with the Galactocentric distances with a linear slope of 0.06\(\pm\)0.01 having linear regression coefficient r-square = 0.93 for the clusters.

Open clusters--Mass segregation--tidal interactions.

## 1 Introduction

Star clusters evolve dynamically after their birth from molecular clouds. Open clusters go through two-body relaxation, and their cores approach a spherical structure due to this dynamical relaxation.
This dynamical relaxation causes low-mass stars to preferentially settle in the outer part of the clusters. These outskirt stars are subject to dynamical ejection from the cluster, which lowers the gravitational potential and loosens the cluster (Gieles & Baumgardt, 2008; Tang et al., 2019). External perturbations like tidal effects, disc crossing, and differential rotation also strengthen the disintegration process of open clusters. These internal and external factors affect the dynamical evolution of the clusters, which manifests in the morphology, shape, and spatial distribution of the stars of the clusters. Thus, open clusters with accurately determined physical parameters are useful in understanding the impact of the internal and external effects on the dynamical evolution of the cluster.

The spatial distribution of stars in the clusters has been found to be dependent on the stellar masses. The massive stars are found to be preferentially concentrated in the inner region of the clusters compared to low-mass stars (Dib et al., 2018; Joshi et al., 2020; Maurya et al., 2021). This mass segregation phenomenon has been explained by two theories that attribute mass segregation either to the star formation process itself or to dynamical evolution. The theory considering the star formation process responsible for mass segregation proposes that massive stars are preferentially formed in the inner region of the cluster (Dib et al., 2008). The dynamical evolution theory of mass segregation, however, suggests that the segregation happens due to an internal two-body dynamical relaxation process (Allison et al., 2009a). The debate on the origin of mass segregation has not concluded and requires further analysis (Dib et al., 2018). The comprehensive study of cluster parameters like core radii, tidal radii, half-mass radii, and ages helps in understanding the complete scenario of the interplay among structural evolution, dynamical evolution, and tidal interactions.
Open clusters generally become more centrally concentrated due to dynamical evolution, which makes mass loss caused by tidal interactions in these clusters less likely. In this two-body dynamical relaxation, massive stars sink towards the central region while low-mass stars preferentially shift towards the outer region and evaporate from the cluster with passing time. The impact of the tidal field on the cluster size can be characterized by the ratio of the half-mass radius to the tidal radius of the cluster, i.e. R\({}_{h}\)/R\({}_{t}\). It has previously been found that the R\({}_{h}\)/R\({}_{t}\) ratio is also related to the survival of the cluster: more compact clusters have smaller R\({}_{h}\)/R\({}_{t}\) values, which favour the survival of clusters located at smaller Galactocentric distances by reducing the mass loss due to tidal fields (Angelo et al., 2021).

To probe the role of dynamical evolution in defining the shape and morphology of the clusters, we present the analysis of the dynamical evolution of ten open clusters. These clusters are parts of our previous studies (Joshi et al., 2020; Maurya & Joshi, 2020; Maurya et al., 2020, 2021) and their physical parameters such as reddening, age, and distance are listed in Table 1. In the current study, we applied a statistical approach to study the dynamical evolution of this homogeneous sample of open clusters.

## 2 Data

The observations for the clusters were taken using the 1.3-m Devasthal Fast Optical Telescope (DFOT) at Devasthal and the 1.04-m Sampurnanand Telescope (ST) at Nainital. The telescopes were equipped with 2k\(\times\)2k CCDs having fields of view of \(\sim 18^{\prime}\times 18^{\prime}\) and \(\sim 13^{\prime}\times 13^{\prime}\) for the DFOT and ST telescopes, respectively. The plate scales were 0.54 and 0.75 arcsec pixel\({}^{-1}\) for the DFOT and ST telescopes, respectively.
The observations were taken on the nights of 30 November 2010, 24 & 25 March 2017, 21 October 2017, and 13 January 2018. We used the Image Reduction and Analysis Facility (IRAF) for cleaning the photometric data. We used the PSF technique to obtain the instrumental magnitudes using the DAOPHOT \(II\) software packages. The instrumental magnitudes were calibrated to standard magnitudes through the process described by Stetson (1992). The conversion formula used for the standardization of instrumental magnitudes is given in our previous paper (Maurya & Joshi, 2020).

The membership of these clusters was calculated using proper motions and parallaxes from the Gaia DR3 archive. We first plotted the vector-point diagram (VPD) of these clusters using proper motions. We noticed over-density regions in the VPDs of the clusters which were found to be formed by probable cluster members on the basis of the location of the stars belonging to these regions on the Hertzsprung-Russell (H-R) diagrams. To quantify the membership probabilities of stars of the clusters, we calculated the membership probabilities from the proper motions of the stars using a statistical approach originally suggested by Sanders (1971). The membership probability for the i\({}^{th}\) star was calculated as follows:

\[P(i)=\frac{n_{c}\cdot\phi_{c}^{\nu}(i)}{n_{c}\cdot\phi_{c}^{\nu}(i)+n_{f}\cdot \phi_{f}^{\nu}(i)},\]

where \(n_{c}\) and \(n_{f}\) denote the number of stars belonging to the cluster and field regions in normalized form such that \(n_{c}+n_{f}=1\). The \(\phi_{c}^{\nu}(i)\) and \(\phi_{f}^{\nu}(i)\) in the above equation are the frequency distribution functions for the stars belonging to cluster and field populations, respectively.
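As an illustration, the membership probability above can be evaluated once the frequency functions are specified. The sketch below assumes simple uncorrelated Gaussian frequency functions in the two proper-motion components; the actual functions, which also fold in the individual measurement errors, are given in Maurya & Joshi (2020):

```python
import numpy as np

def gaussian_freq(mu_a, mu_d, mean_a, mean_d, sig_a, sig_d):
    # Uncorrelated bivariate Gaussian frequency function phi(i)
    za = (mu_a - mean_a) / sig_a
    zd = (mu_d - mean_d) / sig_d
    return np.exp(-0.5 * (za ** 2 + zd ** 2)) / (2.0 * np.pi * sig_a * sig_d)

def membership_probability(mu_a, mu_d, cluster, field, n_c, n_f):
    """P(i) = n_c*phi_c / (n_c*phi_c + n_f*phi_f), with n_c + n_f = 1.

    cluster and field are (mean_a, mean_d, sigma_a, sigma_d) tuples
    describing the proper-motion distributions of the two populations.
    """
    phi_c = gaussian_freq(mu_a, mu_d, *cluster)
    phi_f = gaussian_freq(mu_a, mu_d, *field)
    return n_c * phi_c / (n_c * phi_c + n_f * phi_f)
```

A star whose proper motion lies near the tight cluster centroid receives a probability near 1, while a star far from it, but consistent with the broad field distribution, receives a probability near 0.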
We calculated the \(\phi_{c}^{\nu}(i)\) and \(\phi_{f}^{\nu}(i)\) from the \(i^{th}\) star's proper motions (\(\mu_{\alpha i}\), \(\mu_{\delta i}\)), the errors in the proper motions (\(\epsilon_{\alpha i}\), \(\epsilon_{\delta i}\)), and the mean proper motion of the probable cluster members or field stars (\(\mu_{\alpha c}\), \(\mu_{\delta c}\)) with their dispersions (\(\sigma_{\alpha c}\), \(\sigma_{\delta c}\)), as described in Maurya & Joshi (2020). We also used Gaia parallaxes to remove the possible contamination of member stars by field stars with similar proper motions. The number of stars, N, identified as member stars up to the completeness limit of the data for each cluster is given in Table 1. The ADDSTAR routine of the DAOPHOT packages was utilized to estimate the completeness of the data by adding artificial stars to the original images of the clusters. The completeness of the data was measured based on the ratio of the number of recovered stars to the number of artificially added stars in each magnitude bin. The completeness determination process is described in our previous study (Maurya & Joshi, 2020). The completeness limit of the data in the V band is denoted by V\({}_{lim}\) in Table 1. The physical parameters of the studied clusters are derived through the samples of member stars identified by us. The photometric data from our observations are complemented by Gaia DR3 and Pan-STARRS data for this study. The physical parameters derived in our previous studies such as reddening, extinction law, distance, ages, and completeness limits were used for the present study to be uniform in our sample as well as our approach (Joshi et al., 2020; Maurya & Joshi, 2020; Maurya et al., 2020, 2021). The reddening of all the clusters except SAI 35, SAI 44, and SAI 45 was estimated through fitting the zero-age main sequence isochrones given by Schmidt-Kaler (1982) on the (U-B)/(B-V) colour-colour diagrams.
In the absence of U band data, we calculated the reddening of the clusters SAI 35, SAI 44, and SAI 45 using the 3D reddening map given by Green et al. (2019). We calculated E(B-V) values for the clusters from the E(g-r) values provided by the reddening map using the extinction ratio relations of Wang & Chen (2019). The clusters included in this study are associated with relatively low reddening, which favours the detection of lower-mass member stars. The total-to-selective extinction values for the clusters were determined using (V-\(\lambda\))/(B-V) two-colour diagrams, where \(\lambda\) = R, I, J, H, K band magnitudes. The near-IR band magnitudes were obtained from the 2MASS archive. The distances to the clusters were estimated using _Gaia_ DR3 parallaxes through the parallax inversion method as described in our previous study (Maurya et al., 2020). The distances to the clusters are calculated to be in the range of 1072\(\pm\)44 to 3670\(\pm\)184 parsecs. The ages of the clusters were estimated by fitting Marigo et al. (2017) isochrones on the colour-magnitude diagrams for the derived reddening and distance modulus. The ages of the clusters were estimated to be in the range of 25\(\pm\)19 Myr to 1.78\(\pm\)0.20 Gyr. The methods for the derivation of the physical parameters are briefly described in Maurya et al. (2020, 2021) and their values are given in Table 1.

## 3 Results

### Mass segregation

Mass segregation can be attributed to the escape of low-mass stars from the cluster in addition to the concentration of massive stars in the central region of the cluster. The mass segregation effect can be triggered by dynamical evolution through the equipartition of energy, or it may be an imprint of the star formation process itself (Sagar et al., 1988). The cumulative distribution of stars with radius for various mass ranges is often used to study mass segregation in star clusters.
To study the effect of dynamical evolution and mass segregation, we determined cumulative radial distributions of member stars for different mass ranges in our previous studies (Joshi et al., 2020; Maurya & Joshi, 2020; Maurya et al., 2020). However, the method based on the cumulative distribution of stars with radial distance depends on the size of the mass bins and cumulative radii, which may give misleading results. Therefore, we used a method given by Allison et al. (2009b), based on the mass segregation ratio (MSR), for the study of the mass segregation scenario in the clusters. The MSR is calculated from the mean edge length \(\gamma\) using a minimum spanning tree (MST) for the member stars. The MST for the member stars is the shortest path that connects all the member stars barring closed loops (Prim, 1957). We show the MSTs for these clusters in Figure 1. The MSTs were generated using the Python package provided by Naidoo (2019). Densely located stars have smaller edge lengths in the MST, so these MST plots are helpful in the visual inspection of the spatial distribution of the member stars in the clusters. The clusters King 21 and NGC 1960 show a concentration of vertices in their central regions; however, this is related to the distribution of stars in general and cannot be directly linked to mass segregation, as the MSTs shown in Figure 1 do not carry stellar mass information. The \(\gamma\) is calculated separately for the n most massive stars (\(\gamma_{mm}\)) and for n random stars chosen from all members of the cluster (\(\gamma_{rand}\)). We repeated the random sampling 500 times to calculate the mean of the \(\gamma_{rand}\), i.e. \(\langle\gamma_{rand}\rangle\). The MSR value, \(\Gamma_{MSR}\), was calculated using the formula provided by Olczak et al.
(2011) which is given below: \[\Gamma_{MSR}=\frac{\langle\gamma_{rand}\rangle}{\gamma_{mm}}\] We calculated the standard deviation \(\Delta\Gamma_{MSR}\) value in the mass segregation ratio \(\Gamma_{MSR}\) as follows: \[\Delta\Gamma_{MSR}=\Delta\gamma_{rand}\] The MSR method works on the principle that mass segregation will cause massive stars to have closer spatial distribution than low-mass stars. The \(\Gamma_{MSR}\) value for a cluster having similar spatial distribution for both the massive and low-mass stars will be \(\sim\) 1 which means the absence of mass segregation. The value of \(\Gamma_{MSR}\) greater than 1 would be interpreted as the relatively closer spatial distribution of the most massive stars than the rest of the stars which means the presence of mass segregation. We used only those member stars of these clusters to study the mass segregation whose magnitudes are within the completeness limit in the V band as mentioned in Table 1. The clusters included in this study have ages greater than 20 Myr and are not embedded anymore in the star-forming regions so we took star magnitudes as the proxy for their masses. In our calculation of the \(\Gamma_{MSR}\), we started with a minimum of 10 most massive stars in the clusters to have statistically significant calculations then we repeated the calculations of the \(\Gamma_{MSR}\) by increasing the number of massive stars in steps of one in each iteration up to 30% of the total stars. The mass range, denoted by M\({}_{30\%}\), corresponding to the most massive stars whose numbers are up to 30% of total stars is given in Table 2. We have shown the plots for the obtained values of \(\Gamma_{MSR}\) versus the number of massive stars N\({}_{massive}\) in Figure 2. We have reported the maximum value of \(\Gamma_{MSR}\) obtained for the clusters in Table 2. 
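The \(\Gamma_{MSR}\) computation described above can be sketched as follows. This illustration uses SciPy's MST routine rather than the Naidoo (2019) package, and takes projected positions and masses as inputs; the function names are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mean_mst_edge_length(xy):
    # Mean edge length (gamma) of the Euclidean MST of points xy, shape (n, 2)
    mst = minimum_spanning_tree(squareform(pdist(xy)))
    return mst.data.mean()  # the n-1 edge weights of the tree

def gamma_msr(xy, masses, n_massive, n_trials=500, seed=None):
    """Mass segregation ratio Gamma_MSR = <gamma_rand> / gamma_mm.

    The spread of gamma_rand over the random trials sets the quoted
    uncertainty, following the scheme of Olczak et al. (2011).
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(masses)[::-1]                 # most massive first
    gamma_mm = mean_mst_edge_length(xy[order[:n_massive]])
    gamma_rand = np.array([
        mean_mst_edge_length(xy[rng.choice(len(xy), n_massive, replace=False)])
        for _ in range(n_trials)
    ])
    return gamma_rand.mean() / gamma_mm, gamma_rand.std() / gamma_mm
```

If the most massive stars are spatially concentrated, their MST has shorter edges than a random subset's, so \(\Gamma_{MSR}>1\) signals mass segregation, while \(\Gamma_{MSR}\approx 1\) signals none.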
We found that the clusters NGC 2360, NGC 1960, IC 1442, King 21, and SAI 35 exhibit moderate signatures of mass segregation, having \(\Gamma_{MSR}\) equal to 1.65\(\pm\)0.18, 1.94\(\pm\)0.22, 2.21\(\pm\)0.20, 1.84\(\pm\)0.23, and 1.96\(\pm\)0.25, respectively. The \(\Gamma_{MSR}\sim\) 2.0 values for IC 1442 occur only for N\({}_{massive}\) values of 11, 12, and 13. Beyond these values, the mass segregation ratio drops sharply to around 1.0, which indicates that IC 1442 exhibits mass segregation for only a small number of the most massive stars. The value of \(\Gamma_{MSR}\) is 1.26\(\pm\)0.18 for the cluster NGC 381, which indicates the presence of weak mass segregation in the cluster. The remaining four clusters, namely Berkeley 68, Trumpler 7, SAI 44, and SAI 45, exhibit \(\Gamma_{MSR}\) values consistent with 1 within the uncertainties; thus we do not find any evidence of mass segregation in these clusters. There is a bump around N\({}_{massive}\) values from 30 to 42 for the cluster Berkeley 68, which could be due to the presence of subgroups of massive stars (Yu et al., 2017). However, the mass segregation ratio for cluster Berkeley 68 is very low, with \(\Gamma_{MSR}\) = 1.16\(\pm\)0.13.

### Half mass radius, tidal radius, and cluster structure

It has been found that tidal interactions can influence the structure and dynamical evolution of open clusters (Chumak et al., 2010). The ratio of the half-mass radius to the tidal radius, R\({}_{h}\)/R\({}_{t}\), can be used as an indicator of the disruption of a cluster caused by tidal forces (Angelo et al., 2021). The radial distance from the cluster center containing half of the total mass of the cluster is defined as the half-mass radius. The tidal radius, R\({}_{t}\), is defined as the radial distance from the cluster center where the tidal field of the Galaxy is balanced by the gravitational field of the cluster. We used the relation given by Kim et al.
(2000) to calculate the tidal radius as follows:

\[R_{t}=\left(\frac{M_{C}}{2M_{G}}\right)^{1/3}\times R_{gc}\]

The symbols M\({}_{C}\) and R\({}_{gc}\) in the above equation denote the total mass of the cluster and the distance of the cluster from the Galactic center, respectively. The mass of the Galaxy contained within the Galactocentric distance of the cluster is denoted by M\({}_{G}\) and is calculated using the Genzel and Townes (1987) relation given below:

\[M_{G}=2\times 10^{8}M_{\odot}\left(\frac{R_{gc}}{30pc}\right)^{1.2}\]

We used the above two equations to calculate the tidal radius of the clusters. The R\({}_{gc}\) values for the clusters were calculated using the relation R\({}_{gc}^{2}\) = R\({}_{\odot}^{2}\) + (d\(\cos\mathbf{b}\))\({}^{2}\) - 2R\({}_{\odot}\)d\(\cos\mathbf{b}\)\(\cos\mathbf{l}\), where d and R\({}_{\odot}\) represent the distance to the cluster and the distance of the Sun from the Galactic center, respectively, and \(\mathbf{l}\) and \(\mathbf{b}\) denote the Galactic longitude and latitude, respectively. We used R\({}_{\odot}\) = 8.2\(\pm\)0.10 kpc for the calculations, as given by Bland-Hawthorn et al. (2019). The values of R\({}_{t}\) and R\({}_{gc}\) are given in Table 2.

The stellar initial mass function peaks at 0.5 M\({}_{\odot}\), whereas the mass completeness limits are above 0.5 M\({}_{\odot}\) for all the clusters studied in this analysis. Therefore, we estimated the total stellar mass, M\({}_{C}\), of the clusters utilizing the stellar mass function. The mass function slopes for these clusters obtained in our previous studies for the stars above 1 M\({}_{\odot}\) were mostly in agreement with the Kroupa (2001) mass-function slopes within the uncertainty values (Joshi et al., 2020; Maurya & Joshi, 2020; Maurya et al., 2020, 2021). So, we used the Kroupa (2001) mass function of multiple-part power law form to estimate the total mass of the clusters, including stellar populations down to 0.08 M\({}_{\odot}\).
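The tidal-radius relations above are straightforward to evaluate. The sketch below works in parsecs and solar masses, adopts R\({}_{\odot}\) = 8200 pc, and uses illustrative function names:

```python
import math

R_SUN_PC = 8200.0  # adopted Sun-Galactic center distance, in pc

def enclosed_galaxy_mass(r_gc_pc):
    # Genzel & Townes (1987): M_G = 2e8 Msun * (R_gc / 30 pc)^1.2
    return 2.0e8 * (r_gc_pc / 30.0) ** 1.2

def tidal_radius(m_cluster_msun, r_gc_pc):
    # Kim et al. (2000): R_t = (M_C / (2 M_G))^(1/3) * R_gc
    m_g = enclosed_galaxy_mass(r_gc_pc)
    return (m_cluster_msun / (2.0 * m_g)) ** (1.0 / 3.0) * r_gc_pc

def galactocentric_distance(d_pc, l_deg, b_deg):
    # R_gc^2 = Rsun^2 + (d cos b)^2 - 2 Rsun d cos b cos l
    l, b = math.radians(l_deg), math.radians(b_deg)
    d_proj = d_pc * math.cos(b)  # distance projected onto the Galactic plane
    return math.sqrt(R_SUN_PC ** 2 + d_proj ** 2
                     - 2.0 * R_SUN_PC * d_proj * math.cos(l))
```

For NGC 2360 (M\({}_{C}\approx\) 888 M\({}_{\odot}\), R\({}_{gc}\approx\) 8.93 kpc) this gives R\({}_{t}\approx\) 12 pc, consistent with the Table 2 value within its uncertainty.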
We applied a similar method as described by Snider et al. (2009). We are giving a brief description here. The number of stars in the mass range m\({}_{1}\) to m\({}_{2}\) will be: \begin{table} \begin{tabular}{c c c c c c c c c c} \hline Cluster & RA & Dec & V\({}_{lim}\) & M\({}_{upper}\) & M\({}_{lower}\) & N & E(B-V) & log(Age) & D\({}_{BJ}\) \\ & (mag) & M\({}_{\odot}\) & M\({}_{\odot}\) & & (mag) & (Myr) & (pc) \\ \hline NGC 381 & 01:08:19.6 & +61:35:18.2 & 20 & 2.80 & 0.61 & 134 & 0.36\(\pm\)0.04 & 8.65\(\pm\)0.05 & 1147\(\pm\)38 \\ NGC 2360 & 07:17:43.5 & -15:38:39.8 & 18 & 2.21 & 0.68 & 276 & 0.07\(\pm\)0.03 & 8.95\(\pm\)0.05 & 1072\(\pm\)44 \\ Berkeley 68 & 04:44:30.0 & +42:05:55.8 & 19 & 1.75 & 0.97 & 229 & 0.52\(\pm\)0.04 & 9.25\(\pm\)0.05 & 3206\(\pm\)199 \\ NGC 1960 & 05:36:20.2 & +34:08:06.6 & 19 & 7.18 & 0.72 & 253 & 0.24\(\pm\)0.02 & 7.44\(\pm\)0.02 & 1169\(\pm\)54 \\ IC 1442 & 22:16:03.7 & +53:59:29.4 & 18 & 9.02 & 1.43 & 205 & 0.54\(\pm\)0.04 & 7.40\(\pm\)0.30 & 3492\(\pm\)30 \\ King 21 & 23:49:55.0 & +62:42:18.0 & 19 & 6.89 & 1.21 & 238 & 0.76\(\pm\)0.06 & 7.70\(\pm\)0.20 & 2953\(\pm\)174 \\ Trumpler 7 & 07:27:23.8 & -23:56:56.4 & 19 & 5.12 & 0.76 & 146 & 0.38\(\pm\) 0.04 & 7.85\(\pm\)0.25 & 170\(\pm\)098 \\ SAI 35 & 04:10:46.8 & +46:52:33.2 & 19 & 3.34 & 1.02 & 156 & 0.61\(\pm\) 0.04 & 8.50\(\pm\)0.10 & 2826\(\pm\)266 \\ SAI 44 & 05:11:10.5 & +45:42:10.2 & 19 & 2.47 & 1.00 & 171 & 0.34\(\pm\) 0.04 & 8.82\(\pm\)0.10 & 3670\(\pm\)184 \\ SAI 45 & 05:16:29.4 & +45:35:35.9 & 19 & 2.14 & 0.79 & 79 & 0.34\(\pm\) 0.02 & 9.07\(\pm\)0.10 & 1668\(\pm\)47 \\ \hline \end{tabular} \end{table} Table 1: The derived values of physical parameters of the clusters. 
The name, right ascension, declination, completeness limit in the V band, the mass of the most massive star, the mass completeness limit of the data, the number of member stars up to the completeness limit of the data, reddening, logarithmic age, and distance calculated using the methods described by Bailer-Jones et al. (2018) are given in columns 1 to 10, respectively. The x and y axes in the plot represent the right ascension and declination for the stars associated with the clusters.

The number of stars in the mass range m\({}_{1}\) to m\({}_{2}\) will be:

\[N=A\times\int_{m_{1}}^{m_{2}}M^{-\alpha}dM\]

Using the above equation for the known number of stars from our samples, we calculated the values of the normalization constant A for the \(\alpha\) corresponding to the mass range M/M\({}_{\odot}\geq 1\) given by Kroupa (2001). The total mass can be obtained using the following relation:

\[M_{tot}=A\times\int_{0}^{N}MdN=A\times\int_{m_{1}}^{m_{2}}M^{1-\alpha}dM\]

Using the above equation, we calculated the total mass of the clusters in the mass ranges M/M\({}_{\odot}\geq 1.0\), \(0.5\leq\) M/M\({}_{\odot}<1.0\), and \(0.08\leq\) M/M\({}_{\odot}<0.5\). The normalization constant values for the other mass ranges, namely \(0.5\leq\) M/M\({}_{\odot}<1.0\) and \(0.08\leq\) M/M\({}_{\odot}<0.5\), were calculated using the relations between the normalization constant values for different mass ranges given by Maschberger (2013). The estimated total mass values, M\({}_{C}\), obtained through this method down to the lower stellar mass limit of 0.08 M\({}_{\odot}\) for the clusters are given in Table 2.

The half-mass radii, R\({}_{h}\), of the clusters were calculated from the individual masses of the member stars of the clusters. The individual masses of the member stars were determined by fitting the solar metallicity isochrones of Marigo et al. (2017) on the V/(B-V) colour-magnitude diagrams of the clusters. These isochrones correspond to the derived age, reddening, and distance of the clusters.
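The piecewise power-law integration described above can be sketched as follows. The slopes and the continuity-based normalization follow the Kroupa (2001) form, here simplified to a single \(\alpha=2.3\) segment above 0.5 M\({}_{\odot}\); this approximates, but does not reproduce, the exact Maschberger (2013) normalization relations used in the study:

```python
def segment_mass(a, m1, m2, alpha):
    # M = A * int_{m1}^{m2} M^(1-alpha) dM for one power-law segment
    return a * (m2 ** (2.0 - alpha) - m1 ** (2.0 - alpha)) / (2.0 - alpha)

def total_cluster_mass(n_obs, m_lo, m_hi, m_max, alpha_hi=2.3, alpha_lo=1.3):
    """Extrapolate an observed star count in [m_lo, m_hi] (solar masses,
    m_lo >= 0.5) down to 0.08 Msun with a two-part power-law mass function."""
    # N = A * int M^-alpha dM fixes the high-mass normalization constant
    a_hi = n_obs * (1.0 - alpha_hi) / (
        m_hi ** (1.0 - alpha_hi) - m_lo ** (1.0 - alpha_hi))
    # Continuity of dN/dM at the 0.5 Msun break fixes the low-mass constant
    a_lo = a_hi * 0.5 ** (alpha_lo - alpha_hi)
    return (segment_mass(a_lo, 0.08, 0.5, alpha_lo)
            + segment_mass(a_hi, 0.5, m_max, alpha_hi))
```

The unseen low-mass stars contribute a substantial fraction of the total: for example, 100 observed stars between 1 and 3 M\({}_{\odot}\) imply a total cluster mass of roughly 500 M\({}_{\odot}\) under these assumptions.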
The half-mass radii were estimated corresponding to the observed total masses of the clusters. We derived the half-mass radii of the clusters by including stars with magnitudes within the completeness limits of our data mentioned in Table 1. Generally, the half-mass radii estimated from deep photometric data tend to be larger than those estimated from shallow photometric data (Bonatto & Bica, 2008). However, half-mass radii and other structural parameters become insensitive to photometric depth in the case of uniform mass function slopes, i.e. in the absence of mass segregation (Bonatto & Bica, 2008). As the studied clusters do not show strong mass segregation and the \(\Gamma_{MSR}\) values reach around 1.0 even when the most massive stars comprise 30% of the total stars, the half-mass radii derived by us are reasonably good estimates. However, a larger sample having clusters with a high number of member stars would be better to constrain the correlation shown in Figure 3. The values of R\({}_{h}\) and R\({}_{h}\)/R\({}_{t}\) are given in Table 2. We plotted R\({}_{h}\)/R\({}_{t}\) versus R\({}_{gc}\) for the clusters as shown in Figure 3. We found that the R\({}_{h}\)/R\({}_{t}\) ratios are positively

Figure 1: Plot of minimum spanning tree for the clusters. The vertices (dots) in the plots represent member stars.
The lines connecting these vertices are generally referred to as edges, which are parts of a spanning tree.

\begin{table} \begin{tabular}{c c c c c c c c} \hline Cluster & M\({}_{C}\) & M\({}_{30\%}\) & R\({}_{h}\) & R\({}_{t}\) & R\({}_{gc}\) & R\({}_{h}\)/R\({}_{t}\) & \(\Gamma_{MSR}\) \\ & (M\({}_{\odot}\)) & (M\({}_{\odot}\)) & (pc) & (pc) & (kpc) & & \\ \hline NGC 381 & 467.13\(\pm\)187.41 & 2.80-1.86 & 1.50\(\pm\)0.05 & 10.30\(\pm\)1.36 & 8.91\(\pm\)0.02 & 0.15\(\pm\)0.02 & 1.26\(\pm\)0.18 \\ NGC 2360 & 887.54\(\pm\)356.08 & 2.21-1.17 & 1.65\(\pm\)0.07 & 12.75\(\pm\)1.69 & 8.93\(\pm\)0.03 & 0.13\(\pm\)0.02 & 1.65\(\pm\)0.18 \\ Berkeley 68 & 1396.11\(\pm\)155.22 & 1.75-1.27 & 5.41\(\pm\)0.34 & 17.06\(\pm\)2.23 & 11.29\(\pm\)0.19 & 0.32\(\pm\)0.05 & 1.16\(\pm\)0.13 \\ NGC 1960 & 618.94\(\pm\)248.32 & 7.18-1.51 & 2.21\(\pm\)0.10 & 11.65\(\pm\)1.54 & 9.36\(\pm\)0.05 & 0.19\(\pm\)0.03 & 1.94\(\pm\)0.22 \\ IC 1442 & 1811.75\(\pm\)901.03 & 9.02-2.38 & 6.30\(\pm\)0.41 & 16.77\(\pm\)2.76 & 9.52\(\pm\)0.12 & 0.38\(\pm\)0.07 & 2.21\(\pm\)0.20 \\ King 21 & 1454.19\(\pm\)654.43 & 6.89-4.22 & 3.18\(\pm\)0.19 & 15.93\(\pm\)2.37 & 9.86\(\pm\)0.12 & 0.20\(\pm\)0.03 & 1.84\(\pm\)0.23 \\ Trumpler 7 & 508.00\(\pm\)203.81 & 5.12-1.52 & 1.78\(\pm\)0.10 & 10.80\(\pm\)1.43 & 9.21\(\pm\)0.06 & 0.16\(\pm\)0.02 & 1.04\(\pm\)0.19 \\ SAI 35 & 817.47\(\pm\)327.96 & 3.34-1.38 & 3.78\(\pm\)0.36 & 13.93\(\pm\)1.85 & 10.81\(\pm\)0.25 & 0.27\(\pm\)0.04 & 1.96\(\pm\)0.25 \\ SAI 44 & 881.70\(\pm\)353.73 & 2.47-1.39 & 4.59\(\pm\)0.23 & 15.01\(\pm\)1.99 & 11.74\(\pm\)0.18 & 0.31\(\pm\)0.04 & 1.19\(\pm\)0.18 \\ SAI 45 & 356.18\(\pm\)142.90 & 2.14-1.84 & 2.09\(\pm\)0.06 & 9.98\(\pm\)1.32 & 9.80\(\pm\)0.05 & 0.21\(\pm\)0.03 & 1.14\(\pm\)0.24 \\ \hline \end{tabular} \end{table} Table 2: The name of the clusters, the total mass of the clusters, the mass range of the 30% most massive stars, R\({}_{h}\), R\({}_{t}\), R\({}_{gc}\), R\({}_{h}\)/R\({}_{t}\), and \(\Gamma_{MSR}\) are given in columns 1 to 8,
respectively.

Figure 2: Plot of \(\Gamma_{MSR}\) vs N\({}_{massive}\) for the clusters. The error bars in the plot denote the standard deviation \(\Delta\Gamma_{MSR}\) values for each value of N\({}_{massive}\).

correlated with R\({}_{gc}\) values, as can be seen in the figure. The R\({}_{h}\)/R\({}_{t}\) value of 0.38\(\pm\)0.07 with R\({}_{gc}\) = 9.52\(\pm\)0.12 for the cluster IC 1442 is an outlier in the above plot. Binary clusters have been found to show relatively larger R\({}_{h}\)/R\({}_{t}\) values (Angelo et al., 2021). The slope of the linear fit was found to be 0.06\(\pm\)0.01 with linear regression coefficient r-square = 0.93 in the range of R\({}_{gc}\) from 8.9 to 11.7 kpc. The clusters located at larger Galactocentric distances might be subjected to weaker tidal fields, which could be a possible reason for these clusters to have larger R\({}_{h}\)/R\({}_{t}\) ratios (Angelo et al., 2021).

## 4 Discussion

We studied a sample of ten open clusters with photometric data mostly complete up to 19 mag in the V band to understand the dynamical evolution of the clusters. These clusters were chosen for being associated with comparatively low reddening to assist the detection of fainter stars. We utilized precise astrometric data from Gaia DR3 to ascertain the membership of the stars, together with accurate distance measurements, which help in the precise estimation of the distance modulus essential for mass determination through isochrone fitting on the CMDs of the clusters. We used a method based on the minimum spanning tree of the member stars, as suggested by Allison et al. (2009b), to study mass segregation scenarios in the clusters. We find that the clusters NGC 2360, NGC 1960, IC 1442, King 21, and SAI 35 exhibit evidence of the presence of moderate mass segregation, having mass segregation ratios of 1.65\(\pm\)0.18, 1.94\(\pm\)0.22, 2.21\(\pm\)0.20, 1.84\(\pm\)0.23, and 1.96\(\pm\)0.25.
The remaining clusters show weak or no mass segregation, with \(\Gamma_{MSR}\) around 1 within the uncertainty level. The mass segregation in open clusters has been thought to increase with age (Dib et al., 2018). In the present study, we could not find such a trend between mass segregation and the age of the clusters. Similarly, Tarricq et al. (2022) did not find any clear trend between the mass segregation ratios calculated from the ten most massive stars and the clusters' ages using a sample of 389 open clusters.

The ratio of the half-mass radius to the tidal radius, R\({}_{h}\)/R\({}_{t}\), is a good parameter to study the influence of tidal interactions on the dynamical evolution and shapes of the clusters (Baumgardt et al., 2010; Angelo et al., 2020). This ratio measures the fraction of the tidal volume filled by the half-mass contents. We estimated the total mass of the clusters for the calculation of R\({}_{t}\) by using the multiple-part power law form of the mass function given by Kroupa (2001) to include the masses of fainter stars down to 0.08 M\({}_{\odot}\). The ratios R\({}_{h}\)/R\({}_{t}\) are found to be positively correlated with R\({}_{gc}\), with a slope of 0.06\(\pm\)0.01 and linear regression r-square coefficient of 0.93. This indicates a tendency for the clusters located at larger R\({}_{gc}\) to have larger R\({}_{h}\)/R\({}_{t}\) ratios. In a similar study with a larger sample of open clusters, Angelo et al. (2020) could not find any such trend; however, they noticed that the clusters located at R\({}_{gc}\) > 9 kpc had at least 50% larger dispersion in R\({}_{h}\)/R\({}_{t}\) values. It has been suggested that clusters located at larger R\({}_{gc}\) face lower external gravitational forces, which allow the internal stellar content to fill a larger fraction of the tidal volume without tidal disruption, so these clusters exhibit larger R\({}_{h}\)/R\({}_{t}\) values (Angelo et al., 2020, 2021).
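The quoted fit can be reproduced approximately from the rounded Table 2 values, excluding the IC 1442 outlier as noted; small differences from the published slope and r-square are expected from rounding:

```python
import numpy as np

# (R_gc [kpc], R_h/R_t) pairs read from Table 2, excluding the outlier IC 1442
r_gc = np.array([8.91, 8.93, 11.29, 9.36, 9.86, 9.21, 10.81, 11.74, 9.80])
ratio = np.array([0.15, 0.13, 0.32, 0.19, 0.20, 0.16, 0.27, 0.31, 0.21])

# Ordinary least-squares line and its coefficient of determination
slope, intercept = np.polyfit(r_gc, ratio, 1)
pred = slope * r_gc + intercept
r_squared = 1.0 - (np.sum((ratio - pred) ** 2)
                   / np.sum((ratio - ratio.mean()) ** 2))
```

The resulting slope is close to the published 0.06\(\pm\)0.01, with r-square near the published 0.93, confirming that the positive trend is driven by the full sample rather than a single cluster.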
The correlation between R\({}_{h}\)/R\({}_{t}\) and R\({}_{gc}\) found here is based on a small sample of open clusters of a wide age range, which is an important caveat of the present study; therefore a study based on larger samples of open clusters of similar age across different Galactic locations would give more insight into the dynamical evolution of the clusters.

## 5 Conclusions

We investigate the dynamical evolution of ten open clusters whose physical parameters were previously determined by us using a homogeneous approach and data sets. In the present study, we identified member stars of the clusters using Gaia DR3 astrometric data. Our findings can be summarized as follows:

* We find moderate mass segregation with \(\Gamma_{MSR}\) values of 1.65\(\pm\)0.18, 1.94\(\pm\)0.22, 2.21\(\pm\)0.20, 1.84\(\pm\)0.23, and 1.96\(\pm\)0.25 for the clusters NGC 2360, NGC 1960, IC 1442, King 21, and SAI 35, respectively.
* The cluster NGC 381 exhibits a weak signature of mass segregation with \(\Gamma_{MSR}\) = 1.26\(\pm\)0.18.
* We find no evidence of mass segregation in the clusters Berkeley 68, Trumpler 7, SAI 44, and SAI 45.
* The ratios R\({}_{h}\)/R\({}_{t}\) are found to be positively correlated with R\({}_{gc}\), with a slope of 0.06\(\pm\)0.01 and linear regression r-square coefficient of 0.93.

Figure 3: Plot of R\({}_{h}\)/R\({}_{t}\) vs R\({}_{gc}\) for the clusters. The point shown by the square is an outlier and belongs to the cluster IC 1442.
2304.08777
Probing vortical structures in heavy-ion collisions at RHIC-BES energies through helicity polarization
We investigate the hydrodynamic helicity polarization of $\Lambda$ hyperons, defined as the projection of the spin polarization vector along the directions of particle momenta, at RHIC-BES energies by utilizing the relativistic (3+1)D CLVisc hydrodynamics framework with SMASH initial conditions. As opposed to local spin polarization at high energy collisions, our hydrodynamic simulations demonstrate that the helicity polarization induced by the kinetic vorticity dominates over other contributions at intermediate and low collision energies. Our findings provide an opportunity to probe the fine structure of local kinetic vorticity as a function of azimuthal angle at intermediate and low collision energies by mapping our predictions to the future measurements in experiments.
Cong Yi, Xiang-Yu Wu, Di-Lun Yang, Jian-Hua Gao, Shi Pu, Guang-You Qin
2023-04-18T07:20:48Z
http://arxiv.org/abs/2304.08777v1
# Probing vortical structures in heavy-ion collisions at RHIC-BES energies through helicity polarization ###### Abstract We investigate the hydrodynamic helicity polarization of \(\Lambda\) hyperons, defined as the projection of the spin polarization vector along the directions of particle momenta, at RHIC-BES energies by utilizing the relativistic (3+1)D CLVisc hydrodynamics framework with SMASH initial conditions. As opposed to local spin polarization at high energy collisions, our hydrodynamic simulations demonstrate that the helicity polarization induced by the kinetic vorticity dominates over other contributions at intermediate and low collision energies. Our findings provide an opportunity to probe the fine structure of local kinetic vorticity as a function of azimuthal angle at intermediate and low collision energies by mapping our predictions to the future measurements in experiments. ## I Introduction Spin, as a fundamental property of particles, plays a critical role in high-energy physics, e.g. the proton spin puzzle (see recent reviews [1; 2] and references therein). Recently, a major breakthrough related to spin polarization in relativistic heavy-ion collisions has drawn widespread attention. In non-central heavy-ion collisions, two heavy nuclei are accelerated to nearly the speed of light and collide with each other. These collisions generate a large amount of orbital angular momentum, estimated to be on the order of \(10^{5}\hbar\). Such huge orbital angular momentum is partially converted into spin polarization of the hadrons through the spin-orbit coupling proposed in the pioneering works [3; 4; 5]. The STAR collaboration at the Relativistic Heavy-Ion Collider (RHIC) has measured the global polarization of \(\Lambda\) and \(\overline{\Lambda}\) hyperons [6]. The results show that the vorticity of the quark-gluon plasma (QGP) generated in the collisions is as large as \(\omega\sim 10^{22}\)\(s^{-1}\), making it the fastest vortical system observed in nature to date.
The global polarization has been well understood by various phenomenological models [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Interestingly, the polarization along the beam and out-of-plane directions, namely the local spin polarization, has been measured by STAR [24; 25] and ALICE [26], and has been investigated with many different models, e.g. the statistical methods [27; 28; 29], quantum kinetic theory [30-66], spin hydrodynamics [67-95], other effective theories [96; 97; 98] and phenomenological simulations [12; 16; 17; 19; 99-114]. It is found that the local spin polarization can be induced by various sources, including the thermal vorticity, shear viscous tensor, fluid acceleration, gradient of baryon chemical potential over temperature, and electromagnetic fields [20; 53; 97; 108; 109; 110; 111; 114]. These studies have also been extended to low-energy collisions [19; 20; 23; 115-123] and isobaric collisions [124], as well as to the discussion on the vortical smoke rings [125; 126]. Although the global polarization has provided insight into the kinetic or thermal vorticity as a function of collision energy, the fine structure of the vorticity, such as its dependence on the azimuthal angle, has not been fully explored. This information may not be accurately captured by the local spin polarization due to the considerable influence of other sources beyond the thermal vorticity. In this work, we extend our previous studies [127] and demonstrate that the helicity polarization can help us to probe the fine structure of kinetic vorticity in low-energy collisions. Helicity polarization, defined as the projection of the spin polarization vector onto the direction of the particle's momentum, is widely used for characterizing the spin polarization in high-energy physics [128].
In many high-energy scattering processes, there are no preferred quantization directions for spin, unlike the case of global polarization, where the direction of the initial orbital angular momentum is naturally chosen as the quantization direction. In such cases, helicity is often preferred over spin to describe the spin polarization. Returning to heavy-ion collisions, the use of helicity polarization allows us to probe a distinct feature of the spin polarization for \(\Lambda\) and \(\overline{\Lambda}\) hyperons. As mentioned previously, it is challenging to distinguish the local polarization of \(\Lambda\) hyperons induced by different sources through the experimental measurements. Remarkably, our previous study [127] found that the local helicity polarization induced by thermal vorticity dominates over other contributions at \(\sqrt{s_{NN}}=200\) GeV Au+Au collisions. We argue that helicity polarization induced by kinetic vorticity, as part of thermal vorticity, will play a crucial role in the total helicity polarization in low-energy collisions. Consequently, we further propose that this finding can be utilized to probe the local strength of kinetic vorticity as a function of azimuthal angle in low-energy collisions by measuring helicity polarization. To verify our conjecture, we utilize the relativistic (3+1)D CLVisc hydrodynamics framework [23; 129; 130] to investigate the azimuthal angle dependence of hydrodynamic helicity polarization at RHIC-Beam Energy Scan (BES) energies in this work. We report the numerical simulation of hydrodynamic helicity polarization at \(\sqrt{s_{NN}}=7.7,19.6,39\) GeV Au+Au collisions with the Simulating Many Accelerated Strongly-interacting Hadrons (SMASH) [131; 132; 133; 134; 135; 136; 137] initial condition. As anticipated, the helicity polarization induced by the kinetic vorticity is one order of magnitude larger than other contributions in low-energy collisions.
This finding holds even when we choose A-Multi-Phase-Transport (AMPT) [138; 139; 140; 141] initial conditions or different baryon diffusion coefficients. Our study presents a novel approach to investigate the structure of kinetic vorticity in low-energy heavy-ion collisions by connecting hydrodynamic simulations with the measurable helicity polarization. The finding can also provide a baseline for the investigation of local parity violation through the correlations of helicity polarization proposed in Refs. [142; 143]. This paper is organized as follows. In Sec. II, we briefly introduce the theoretical framework and hydrodynamical setup for the helicity polarization. We present our numerical results of the helicity polarization at various collision energies, initial conditions, and baryon diffusion coefficients in Sec. III and summarize our findings in Sec. IV. Throughout this work, we adopt the Minkowski metric \(g_{\mu\nu}=\mathrm{diag}\{+,-,-,-\}\) and the projector \(\Delta^{\mu\nu}=g^{\mu\nu}-u^{\mu}u^{\nu}\) with \(u^{\mu}\) being the fluid velocity. ## II Theoretical and numerical framework We follow Refs. [142; 143; 27; 127] to introduce the theoretical framework for the helicity polarization in relativistic heavy ion collisions. The helicity polarization for a relativistic particle with mass \(m\) is defined as \[S^{h} = \widehat{\mathbf{p}}\cdot\mathcal{S}(\mathbf{p}). \tag{1}\] Here, we parameterize the momentum of an on-shell particle as \(p^{\mu}=(\sqrt{|\mathbf{p}|^{2}+m^{2}},\mathbf{p})=(\sqrt{p_{T}^{2}+m^{2}}\cosh Y,p_{T}\cos\phi_{p},p_{T}\sin\phi_{p},\sqrt{p_{T}^{2}+m^{2}}\sinh Y)\), where \(p_{T}\) is the transverse momentum, \(Y\) is the momentum rapidity and \(\phi_{p}\) is the azimuthal angle, \(\widehat{\mathbf{p}}=\mathbf{p}/|\mathbf{p}|\) is the unit vector along the direction of momentum, and \(\mathcal{S}(\mathbf{p})\) is the spatial component of the single-particle mean spin polarization vector \(S^{\mu}(p)\).
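Eq. (1) and the momentum parameterization above can be sketched numerically as follows (Python; the spin vector here is an illustrative stand-in, not simulation output):

```python
import numpy as np

M_LAMBDA = 1.116  # GeV, Lambda hyperon mass

def momentum(pT, Y, phi_p, m=M_LAMBDA):
    """On-shell four-momentum (E, px, py, pz) from (pT, Y, phi_p)."""
    mT = np.sqrt(pT**2 + m**2)  # transverse mass
    return np.array([mT * np.cosh(Y),
                     pT * np.cos(phi_p),
                     pT * np.sin(phi_p),
                     mT * np.sinh(Y)])

def helicity(S_vec, pT, Y, phi_p, m=M_LAMBDA):
    """S^h = p_hat . S(p): spin vector projected onto the momentum direction."""
    p = momentum(pT, Y, phi_p, m)[1:]  # spatial momentum
    return float(np.dot(p / np.linalg.norm(p), S_vec))

S = np.array([0.0, 0.01, 0.0])  # illustrative polarization vector
Sh = helicity(S, pT=1.0, Y=0.5, phi_p=np.pi / 2)
```

By construction, a spin vector parallel to the momentum gives \(S^h=|\bm{\mathcal{S}}|\), and one perpendicular to it gives zero.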
The spin polarization vector \(S^{\mu}(p)\) for fermionic systems can be evaluated by using the modified Cooper-Frye formula [111; 10; 110; 9], under the assumption of local thermal equilibrium, \[\mathcal{S}^{\mu}(\mathrm{p}) = \frac{\int d\Sigma\cdot p\mathcal{J}_{5}^{\mu}(p,X)}{2m_{\Lambda} \int d\Sigma\cdot\mathcal{N}(p,X)}, \tag{2}\] where \(d\Sigma^{\mu}\) is the normal vector of the freeze-out hypersurface, \(m_{\Lambda}\) denotes the mass of \(\Lambda\) hyperons, and \(\mathcal{J}_{5}^{\mu}(p,X)\) and \(\mathcal{N}^{\mu}(p,X)\) stand for the axial-charge and number-density current in the phase space, respectively. Inserting the \(\mathcal{J}_{5}^{\mu}(p,X)\) obtained from chiral kinetic theory up to \(\mathcal{O}(\hbar)\)[56] into the spin polarization vector \(\mathcal{S}^{\mu}\), we derive the helicity polarization [23; 11; 127], \[S^{h}_{\mathrm{hydro}}(\mathbf{p}) = S^{h}_{\mathrm{thermal}}(\mathbf{p})+S^{h}_{\mathrm{shear}}( \mathbf{p})+S^{h}_{\mathrm{accT}}(\mathbf{p}) \tag{3}\] \[+S^{h}_{\mathrm{chemical}}(\mathbf{p}),\] where \[S^{h}_{\mathrm{thermal}}(\mathbf{p}) = \int d\Sigma^{\sigma}F_{\sigma}p_{0}\epsilon^{0ijk}\widehat{p}_{i }\partial_{j}\left(\frac{u_{k}}{T}\right),\] \[S^{h}_{\mathrm{shear}}(\mathbf{p}) = -\int d\Sigma^{\sigma}F_{\sigma}\frac{\epsilon^{0ijk}\widehat{p} ^{i}p_{0}}{(u\cdot p)T}(p^{\sigma}\pi_{\sigma j}u_{k}),\] \[S^{h}_{\mathrm{accT}}(\mathbf{p}) = \int d\Sigma^{\sigma}F_{\sigma}\frac{\epsilon^{0ijk}\widehat{p} ^{i}p_{0}u_{j}}{T}\left[(u\cdot\partial)u_{k}+\frac{\partial_{k}T}{T}\right],\] \[S^{h}_{\mathrm{chemical}}(\mathbf{p}) = -2\int d\Sigma^{\sigma}F_{\sigma}\frac{p_{0}\epsilon^{0ijk} \widehat{p}_{i}}{(u\cdot p)}\partial_{j}\left(\frac{\mu}{T}\right)u_{k}, \tag{4}\] stand for the contributions from thermal vorticity, the shear viscous tensor, the fluid acceleration minus the gradient of temperature \(T\), the gradient of baryon chemical potential \(\mu\) over temperature, respectively. 
Here, we introduce \(\pi_{\sigma j}=\partial_{\sigma}u_{j}+\partial_{j}u_{\sigma}-u_{\sigma}(u\cdot \partial)u_{j}\) and \(F^{\mu}=\hbar[8m_{\Lambda}\Phi(\mathbf{p})]^{-1}p^{\mu}f_{eq}(1-f_{eq})\), \(\Phi(\mathbf{p})=\int d\Sigma^{\mu}p_{\mu}f_{eq}\). We also assume that the system reaches local thermal equilibrium for simplicity, i.e. we choose \(f_{eq}=1/\left\{\exp\left[(p^{\mu}u_{\mu}-\mu)/T\right]+1\right\}\). For other decompositions of the spin vector, we refer to Refs. [108; 109; 100; 101; 97; 98]. For convenience, we further decompose the helicity polarization induced by thermal vorticity \(S^{h}_{\mathrm{thermal}}\) into two separate terms [127], \[S^{h}_{\nabla T}(\mathbf{p}) = \int d\Sigma^{\sigma}F_{\sigma}\frac{p_{0}}{T^{2}}\widehat{\mathbf{p}}\cdot(\mathbf{u}\times\nabla T),\] \[S^{h}_{\omega}(\mathbf{p}) = \int d\Sigma^{\sigma}F_{\sigma}\frac{p_{0}}{T}\widehat{\mathbf{p}}\cdot\mathbf{\omega}, \tag{5}\] denoting the polarization related to the gradient of temperature and that caused by the kinetic vorticity \(\mathbf{\omega}=\nabla\times\mathbf{u}\), respectively. Later on, we will see that the above decomposition can improve our understanding of helicity polarization. Since the electromagnetic fields generated by the collisions decay rapidly [144; 145; 146; 147; 148; 149; 150; 151; 152] and are negligible at the late stage, we omit the helicity polarization induced by electromagnetic fields for simplicity. In general, an axial chemical potential, characterising local parity violation, near the freeze-out hypersurface also contributes to the helicity polarization [142; 143]. However, the event-by-event averaged axial density or axial chemical potential is almost vanishing. We therefore exclude these contributions in the analysis of helicity polarization.
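The kinetic vorticity \(\mathbf{\omega}=\nabla\times\mathbf{u}\) entering \(S^{h}_{\omega}\) can be evaluated on a hydrodynamic output grid with central differences. A minimal sketch (Python; a hypothetical uniform grid, not the CLVisc stencil), checked against rigid rotation, for which the curl is \(2\Omega\hat{z}\):

```python
import numpy as np

def kinetic_vorticity(ux, uy, uz, dx):
    """omega = curl(u) on a uniform 3D grid (arrays indexed [ix, iy, iz]),
    using central finite differences."""
    dux = np.gradient(ux, dx)  # list of d(ux)/dx_i along axes x, y, z
    duy = np.gradient(uy, dx)
    duz = np.gradient(uz, dx)
    wx = duz[1] - duy[2]       # d(uz)/dy - d(uy)/dz
    wy = dux[2] - duz[0]       # d(ux)/dz - d(uz)/dx
    wz = duy[0] - dux[1]       # d(uy)/dx - d(ux)/dy
    return wx, wy, wz

# Rigid rotation about z with angular velocity Omega: u = Omega x r.
n, omega0 = 9, 0.5
c = np.linspace(-1.0, 1.0, n)
dx = c[1] - c[0]
X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
ux, uy, uz = -omega0 * Y, omega0 * X, np.zeros_like(X)

wx, wy, wz = kinetic_vorticity(ux, uy, uz, dx)  # wz should be 2*Omega = 1.0
```

Because the rigid-rotation field is linear in the coordinates, the finite differences recover the analytic curl exactly, including at the grid boundaries.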
We also notice that in the absence of the axial charge the system has the space reversal symmetry, which leads to \(S^{h}_{\rm hydro}(Y,\phi_{p})=-S^{h}_{\rm hydro}(-Y,\phi_{p}+\pi).\) More discussions on the properties of helicity polarization can be found in Refs. [142; 143; 127]. Similar to the local spin polarization in heavy ion collisions, e.g. see Refs. [127; 23; 19; 20], we propose a possible physical observable following Ref. [127], \[P_{H}(\phi_{p}) = \frac{2\int_{Y_{\rm min}}^{Y_{\rm max}}dY\int_{p_{T\rm min}}^{p_{T \rm max}}p_{T}dp_{T}[\Phi(\mathbf{p})S^{h}_{\rm hydro}]}{\int_{Y_{\rm min}}^{Y_{ \rm max}}dY\int_{p_{T\rm min}}^{p_{T\rm max}}p_{T}dp_{T}\Phi(\mathbf{p})}. \tag{6}\] The prefactor 2 in the numerator comes from the experimental measurement of local spin polarization, which is proportional to \(\frac{1}{s}\mathcal{S}^{\mu}(\mathbf{p})\) with \(s=\frac{1}{2}\) for the \(\Lambda\) hyperons. Note that the \(S^{h}\) defined in Eq. (1) is not a Lorentz scalar. In the current study, we define the polarization three-vector \(\mathbf{\mathcal{S}}\) in Eq. (6) in the laboratory frame, while the one defined in the rest frame of \(\Lambda\) hyperons can easily be derived by the Lorentz transformation, \(\mathbf{\mathcal{S}}^{\prime}=\mathbf{\mathcal{S}}-\frac{\mathbf{p}\,(\mathbf{p}\cdot\mathbf{ \mathcal{S}})}{E_{\Lambda}(E_{\Lambda}+m_{\Lambda})}\) with \(E_{\Lambda}\) being the energy of \(\Lambda\) hyperons. Similarly, the helicity polarization can also be defined via \(\mathbf{\mathcal{S}}^{\prime}\), i.e. \(\widehat{\mathbf{p}}\cdot\mathbf{\mathcal{S}}^{\prime}=\frac{m_{\Lambda}}{E_{ \Lambda}}\widehat{\mathbf{p}}\cdot\mathbf{\mathcal{S}}=\frac{m_{\Lambda}}{E_{ \Lambda}}S^{h}.\) To investigate the helicity polarization originating from various sources, we utilize Eq. (3) to express \(P_{H}\) as the sum of four terms and further decompose the thermal-vorticity contribution in light of Eq.
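The relation \(\widehat{\mathbf{p}}\cdot\mathbf{\mathcal{S}}^{\prime}=\frac{m_{\Lambda}}{E_{\Lambda}}S^{h}\) follows directly from the boost formula and can be checked numerically (the momentum and spin values below are illustrative):

```python
import numpy as np

m = 1.116                           # GeV, Lambda mass
p = np.array([0.8, 0.3, 1.1])       # illustrative lab-frame momentum (GeV)
S = np.array([0.02, -0.01, 0.015])  # illustrative lab-frame spin vector

E = np.sqrt(m**2 + p @ p)
# Lorentz boost of the spatial spin vector to the Lambda rest frame:
S_rest = S - p * (p @ S) / (E * (E + m))

p_hat = p / np.linalg.norm(p)
helicity_rest = p_hat @ S_rest      # p_hat . S'
helicity_lab = p_hat @ S            # S^h in the lab frame
```

Projecting the boost formula onto \(\widehat{\mathbf{p}}\) gives \(\widehat{\mathbf{p}}\cdot\bm{\mathcal{S}}^{\prime}=(\mathbf{p}\cdot\bm{\mathcal{S}})\,[1/|\mathbf{p}|-|\mathbf{p}|/(E(E+m))]=(m/E)\,\widehat{\mathbf{p}}\cdot\bm{\mathcal{S}}\), which the numbers above reproduce.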
(5), \[P_{H}^{\rm total} = P_{H}^{\rm thermal}+P_{H}^{\rm shear}+P_{H}^{\rm accT}+P_{H}^{ \rm chemical}, \tag{7}\] \[P_{H}^{\rm thermal} = P_{H}^{\omega}+P_{H}^{\nabla T}, \tag{8}\] where the superscripts indicate the respective sources. We utilize the (3+1)D CLVisc hydrodynamics [129; 130; 23] to simulate the evolution of the QGP at different collision energies. We employ the SMASH model [131; 132; 133; 134; 135; 136; 137] for initial conditions and adopt the NEOS-BQS equations of state [153; 154]. Later on, we will also check the results from the AMPT initial model [138; 139; 140; 141]. We have chosen the simulation parameters according to the studies [130; 23], where the hydrodynamic model has been shown to successfully reproduce the pseudo-rapidity distribution of charged hadrons, as well as the transverse momentum spectra of protons, pions, and kaons measured in the experiments. We would like to emphasize that the final results for the spectra of mesons in low energy collisions are insensitive to a parameter \(C_{B}\), which connects to a baryon diffusion coefficient \(\kappa_{B}=\frac{C_{B}}{T}n\left[\frac{1}{3}\coth\left(\frac{\mu}{T}\right)- \frac{nT}{w}\right]\) with \(n\) being the baryon number density and \(w\) being the enthalpy density [23; 130]. We set \(C_{B}=0\) in most of our simulations unless explicitly stated otherwise. From the simulations, we obtain the profile of temperature, chemical potential and fluid velocity at the chemical freeze-out hypersurface. By inputting these quantities into Eqs. (2, 3, 6), we derive the helicity polarization as a function of the azimuthal angle \(\phi_{p}\). The integration bounds for Eq. (6) are chosen as \(p_{T}\in[0.5,3]\) GeV and \(Y\in[-1,1]\). We take the mass of \(\Lambda\) hyperons, \(m_{\Lambda}=1.116\) GeV, in Eq. (2), as well as in the mass term of all \((u\cdot p)\) terms of Eq. (4).
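Numerically, the momentum average in Eq. (6) is a ratio of two \((p_{T},Y)\) integrals over the stated bounds. A toy sketch (Python; the stand-in functions for \(\Phi\) and \(S^{h}_{\rm hydro}\) are illustrative, not hydrodynamic output):

```python
import numpy as np

def P_H(phi_p, Sh, Phi, pT_bounds=(0.5, 3.0), Y_bounds=(-1.0, 1.0), n=64):
    """P_H(phi) = 2 * Int dY Int pT dpT [Phi * Sh] / Int dY Int pT dpT Phi (Eq. 6).
    Sh(pT, Y, phi) and Phi(pT, Y, phi) are user-supplied callables; a uniform
    grid is used, and the equal cell areas cancel in the ratio."""
    pT = np.linspace(*pT_bounds, n)
    Y = np.linspace(*Y_bounds, n)
    PT, YY = np.meshgrid(pT, Y, indexing="ij")
    weight = Phi(PT, YY, phi_p) * PT          # pT dpT measure
    num = np.sum(weight * Sh(PT, YY, phi_p))
    den = np.sum(weight)
    return 2.0 * num / den

# Toy stand-ins: flat spectrum, Sh independent of (pT, Y).
Phi = lambda pT, Y, phi: np.ones_like(pT)
Sh = lambda pT, Y, phi: 1e-3 * np.sin(phi)

val = P_H(np.pi / 2, Sh, Phi)  # for constant Sh the ratio returns 2*Sh
```

With a momentum-independent \(S^h\), the weights cancel and the average reduces to \(2S^h\), a quick sanity check before feeding in tabulated freeze-out data.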
## III Numerical results from hydrodynamics approaches We present the numerical results for helicity polarization \(P_{H}(\phi_{p})\) induced by various sources as a function of azimuthal angle in \(20-50\%\) centrality at \(\sqrt{s_{NN}}=7.7,19.6,39\) GeV Au+Au collisions. Similar to Refs. [24; 25; 26], we also compute the first Fourier sine coefficient of the helicity polarization of \(\Lambda\) hyperons, \(\langle P_{H}\sin\phi_{p}\rangle\), as a function of transverse momentum. Let us start by discussing the impact of collision energy on helicity polarization. As the collision energy decreases, nuclear stopping effects become more prominent and a larger portion of the orbital angular momenta of the colliding nuclei is transferred to the remnants. This leads to an expected growth of the kinetic vorticity with decreasing collision energies, as has been suggested by previous studies [155; 156; 16; 118]. The experimental measurement of global polarization agrees with this expectation; that is, the global polarization increases as the collision energy decreases [6]. Naturally, one may expect that the helicity polarization induced by kinetic vorticity also follows the same trend, which is clearly observed in Fig. 1. Furthermore, another important observation from Fig. 1 is that the \(P_{H}^{\omega}\) dominates at intermediate and low collision energies.

Figure 1: Helicity polarization as a function of azimuthal angle \(\phi_{p}\) in \(20-50\%\) centrality at \(\sqrt{s_{NN}}=7.7,19.6,39\) GeV Au+Au collisions with SMASH initial condition. The red dashed-dotted lines denote the helicity polarization induced by the kinetic vorticity \(\omega\) only, i.e. \(P_{H}^{\omega}\). The green dashed lines represent the total helicity polarization excluding \(P_{H}^{\omega}\).
Unlike the global spin polarization, which is successfully described mainly by the contribution from thermal vorticity in global equilibrium satisfying the Killing condition [157], other corrections such as the shear viscous tensor and the gradient of baryon chemical potential over temperature should be incorporated in local thermal equilibrium, and these play an important role in the local spin polarization [109, 110, 122, 20, 123, 108, 111, 124]. Based on our previous studies [23], the magnitude of local spin polarization along the out-of-plane direction for \(\Lambda\) hyperons induced by the shear viscous tensor, fluid acceleration, and \(\nabla(\mu/T)\) at intermediate and low collision energies is much smaller than that caused by thermal vorticity. This conclusion holds if we choose the mass of particles in Eqs. (2) and (4) as the mass of \(\Lambda\) hyperons. Naturally, we anticipate that the helicity polarization induced by the thermal or kinetic vorticity dominates over other contributions. As an example, we examine the behavior of \(P_{H}\) in \(\sqrt{s_{NN}}=7.7\) GeV Au+Au collisions to demonstrate the contributions from various sources, as shown in Fig. 2(a). We observe that the magnitude of \(P_{H}^{\omega}\) is approximately 10 times greater than that induced by other sources, which is consistent with the above analysis of the corrections out of global equilibrium. Furthermore, we observe that the dependence of \(\{P_{H}^{\rm chemical},P_{H}^{\rm shear}\}\) or \(\{P_{H}^{\rm accT},P_{H}^{\nabla T}\}\) on \(\phi_{p}\) resembles that of the sine or negative sine function, respectively. Eventually, \(P_{H}^{\rm chemical},P_{H}^{\rm shear},P_{H}^{\rm accT},P_{H}^{\nabla T}\) nearly cancel each other out, highlighting the dominant role of kinetic vorticity in helicity polarization. A natural question that arises is whether the dominant role of kinetic vorticity in helicity polarization persists for different initial conditions or parameters.
To verify this, we have studied the helicity polarization at \(7.7\) GeV Au+Au collisions as an example using the AMPT initial conditions in Fig. 2(b), and with a different baryon diffusion coefficient \(C_{B}=1.2\)[130, 23] in Fig. 2(c). We find that, regardless of the implemented initial conditions or the value of the baryon diffusion coefficient \(C_{B}\), \(P_{H}^{\omega}\) is always significantly larger than the helicity polarization induced by other sources, even though the magnitudes of helicity polarization from all sources, including kinetic vorticity, are enhanced together by a nonzero \(C_{B}\). Therefore, our conclusion that \(P_{H}^{\omega}\) dominates in \(P_{H}^{\rm total}\) is independent of initial conditions or \(C_{B}\). The helicity polarization induced by other sources, excluding the kinetic vorticity, also approximately cancels out. Another possible relevant observable in experiments is the first Fourier sine coefficient of the helicity polarization of \(\Lambda\) hyperons [24; 25; 26].

Figure 2: (a) Helicity polarization induced by various sources in \(20-50\%\) centrality at \(\sqrt{s_{NN}}=7.7\) GeV Au+Au collisions with SMASH initial condition. (b) The results were obtained using the same parameters as in Fig. 2(a) except for the initial condition given by the AMPT model. (c) The results are obtained using the same parameters as in Fig. 2(a) except for \(C_{B}=1.2\). The shortened forms "chemical", "shear", "accT", "\(\nabla T\)" and "\(\omega\times 0.1\)" stand for \(P_{H}^{\rm chemical},P_{H}^{\rm shear},P_{H}^{\rm accT},P_{H}^{\nabla T}\) and \(0.1\times P_{H}^{\omega}\), respectively.

Figure 3: The first Fourier sine coefficient of the helicity polarization as a function of \(p_{T}\) in \(20-50\%\) centrality at \(\sqrt{s_{NN}}=7.7\) GeV Au+Au collisions with SMASH initial condition. The same color assignments as in Fig. 1.
We plot \(\langle P_{H}\sin\phi_{p}\rangle\) as a function of \(p_{T}\) in \(20-50\%\) centrality at \(\sqrt{s_{NN}}=7.7\) GeV as an example in Fig. 3. We find that the magnitude of \(\langle P_{H}\sin\phi_{p}\rangle\) increases with growing \(p_{T}\). Moreover, we observe that helicity polarization induced by kinetic vorticity still dominates over other contributions, which is consistent with the results in Fig. 1. As a remark, it is noteworthy that \(P_{H}^{\omega}\) dominates the helicity polarization, especially in low energy collisions. Therefore, by mapping the hydrodynamic simulations to the helicity polarization measured in future experiments, one can extract the structure of kinetic vorticity. This provides us with a new opportunity to probe the structure of kinetic vorticity through helicity polarization. Before ending this section, we would like to discuss the potential impact of two crucial approximations for the local spin polarization, namely strange memory scenario [109] and isothermal equilibrium [110], on the helicity polarization. These two approximations are of great importance in delineating the local spin polarization of \(\Lambda\) hyperons at \(\sqrt{s_{NN}}=200\) GeV Au+Au collisions. However, in low-energy collisions, it is unclear whether the quark degrees of freedom are released from the hadrons in the fireball. Therefore, it is plausible to consider the helicity polarization of \(\Lambda\) hyperons rather than \(s\) quarks. We have also checked numerically that in strange memory scenarios, the helicity polarization induced by other effects, excluding kinetic vorticity, remains negligible and contributes only to a small percentage of the total helicity polarization. The sign of \(\langle P_{H}\sin\phi_{p}\rangle\) remains unchanged in the strange memory scenarios. In the isothermal equilibrium, the temperature gradient near the chemical freeze-out hypersurface is assumed to be vanishing. 
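The coefficient \(\langle P_{H}\sin\phi_{p}\rangle\) is simply the azimuthal average of \(P_{H}(\phi_{p})\sin\phi_{p}\). A minimal sketch with a toy profile (the amplitude is illustrative, not a simulated value):

```python
import numpy as np

def first_sine_coefficient(P_H, n_phi=256):
    """<P_H sin(phi_p)>: average of P_H(phi) * sin(phi) over a full period."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    return float(np.mean(P_H(phi) * np.sin(phi)))

# A pure-sine toy profile P_H(phi) = A sin(phi) has <P_H sin phi> = A / 2.
A = 4e-3
coef = first_sine_coefficient(lambda phi: A * np.sin(phi))
```

For a pure first-harmonic profile the average of \(\sin^2\phi_p\) over a full period is exactly one half, so the coefficient recovers half the amplitude.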
We have numerically checked that even if we drop contributions from temperature gradient, the dominance of \(P_{H}^{\omega}\) in the total helicity polarization still holds. ## IV Summary We have studied the helicity polarization of \(\Lambda\) hyperons and its first Fourier sine coefficient at RHIC-BES energies and observed that the helicity polarization induced by kinetic vorticity \(P_{H}^{\omega}\) dominates at intermediate and low collision energies. The helicity polarization led by other sources is one order of magnitude smaller than \(P_{H}^{\omega}\) and their net contributions approximately cancel out. Furthermore, the dominance of \(P_{H}^{\omega}\) remains unchanged by variations in initial conditions and baryon diffusion coefficient. Such a hierarchy for helicity polarization is also unchanged even when adopting the approximations of the strange memory scenarios and isothermal equilibrium in low-energy collisions. Based on our results, we propose a novel approach to probe the fine structure of kinetic vorticity by linking hydrodynamic simulations to the measurements of helicity polarization in future low-energy nuclear collision experiments. On the other hand, at low-energy collisions, the helicity polarization provides a robust baseline for the equilibrium contribution to spin polarization. The sizable mismatch for comparing our predictions with future experimental measurements could reveal the potential role of non-equilibrium contributions from collisional effects or even more exotic sources to local spin polarization. ###### Acknowledgements. This work is supported in part by the National Key Research and Development Program of China under Contract No. 2022YFA1605500 and is also supported by National Natural Science Foundation of China (NSFC) under Grants No. 12075235, 12135011, 12225503, 12175123, 11890710, 11890711, 11890713, 11935007 and 11475104. D.-L. Y. was supported by National Science and Technology Council (Taiwan) under Grant No. 
MOST 110-2112-M-001-070-MY3.
2306.10999
Concept Extrapolation: A Conceptual Primer
This article is a primer on concept extrapolation - the ability to take a concept, a feature, or a goal that is defined in one context and extrapolate it safely to a more general context. Concept extrapolation aims to solve model splintering - a ubiquitous occurrence wherein the features or concepts shift as the world changes over time. Through discussing value splintering and value extrapolation the article argues that concept extrapolation is necessary for Artificial Intelligence alignment.
Matija Franklin, Rebecca Gorman, Hal Ashton, Stuart Armstrong
2023-06-19T15:07:16Z
http://arxiv.org/abs/2306.10999v1
# Concept Extrapolation: A Conceptual Primer ###### Abstract This article is a primer on concept extrapolation - the ability to take a concept, a feature, or a goal that is defined in one context and extrapolate it safely to a more general context. Concept extrapolation aims to solve model splintering - a ubiquitous occurrence wherein the features or concepts shift as the world changes over time. Through discussing value splintering and value extrapolation the article argues that concept extrapolation is necessary for Artificial Intelligence alignment. Keywords: Concept Extrapolation, AI Alignment, and Model Splintering. ## 1 Introduction This article aims to provide a short primer on _concept extrapolation_ and its application to Artificial Intelligence (AI) Alignment. AI alignment as a research field aims to identify ways in which AI systems can reliably act in accordance with human values, either individually or corporately. Concept extrapolation is the ability to take a concept, a feature, or a goal that is defined in a narrow training context and extrapolate it safely to a more general context. This is necessary because the training data will be insufficient for a key concept to be extrapolated. People are able to concept extrapolate [6]. More crucially, we argued that an aligned AI would need to possess concept extrapolation. This article will introduce concept extrapolation as well as what it aims to solve - _model splintering_. It will also introduce the concept of _value splintering_ and its solution, _value extrapolation_. ## 2 Model splintering Before the 20\({}^{\text{th}}\) century, death was defined as the heart stopping. If we trained a police AI with this concept of death, it would go around...arresting heart transplant surgeons for the multiple 'deaths' they cause. What happened is that, in the past, many things were absolutely correlated: the heart stopping, the person becoming permanently unresponsive, their brain starting to decay, and so on.
We could define death as the heart stopping, because it was clear and easy to measure and because all the other features of death would go along with it. But then medical science advanced, and the correlation broke down - we could now have people with stopped hearts who would be up and about the next day. And so we would need to update the concept of 'death', which we have, generally defining it as 'brain death' (with 'clinical death' corresponding to the old definition of the heart stopping). If in the future we had the technology to reconstruct human brains or heal them in some other way, then we'd have to shift our definition of death yet again. This change in the environment (due to technology or other reasons) is what we refer to as **model splintering**: the conditions of the environment have changed to such an extent that the definitions and concepts that used to be valid, no longer are. The way to fix this is with **concept extrapolation**: extending the concept to the new environment in a way that preserves as much of its meaning as possible. If this concept is critical to our values, we name it a **value extrapolation**. So if we had an AI designed to prevent deaths, we would want it to extrapolate the definition of death and prevention in a way that extends safely to new environments. Model splinterings and concept extrapolations are all around us. A significant part of the legal work of parliaments is dedicated to clarifying concepts when novel situations arise. Dictionaries update their definitions regularly, and changes in technology make old assumptions invalid. ### Model Splintering in AI Model splintering is a meta-issue in AI safety that refers to problems that arise when an AI system moves from one imperfect model to another. The problem affects various areas of AI safety. Model splintering occurs because, apart from mathematical formalizations, all human concepts refer to collections of correlated features rather than fundamental concepts. 
Model splintering is when the correlated features come apart so that the label no longer applies so well.1 Footnote 1: The only concepts that do not splinter are the ones that can be formalized with numbers, mathematical operations, or other mathematical formulations. In the language of machine learning (ML), model splintering is related to _distribution shifts_ - when an algorithm encounters data distributions different from its training set [10]. Because such shifts occur ubiquitously, machine learning models routinely degrade in deployment. Model splintering can thus be seen as a variant of "out-of-distribution" behavior in traditional machine learning, where an algorithm runs into problems because the data it operates on is drawn from a different distribution than its training set. Humans can often recognize this and correct it because they have a more general distribution in mind than the one the algorithm was trained on. Humans tend to leverage their knowledge of fundamental principles and concepts to decipher unfamiliar scenarios. When confronted with a novel creature, for instance, humans tend to classify it as a mammal, bird, reptile, or fish based on observable traits. Analogously, an AI system that comprehends the basic principles underpinning its assigned tasks would be better equipped to handle novel scenarios. The AI system ought to be constructed in such a manner that it reasons about the principles and concepts of its task in a way that would be seen as appropriate by a human being, as opposed to relying on approximations that correlate only with the original training dataset. ### Value splintering Value splintering (or reward splintering) is another challenge that arises when the value function (or reward function) becomes invalid due to model splintering, leading to multiple ways of expressing rewards on labeled data and potentially different rewards in the real world. 
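To make the link to distribution shift concrete, here is a minimal sketch of shift detection on synthetic data. The `drift_score` function and its threshold of "well above ~1" are illustrative assumptions, not a standard API: it simply compares the deployment mean to the training mean in units of the training standard deviation.

```python
import random
import statistics

def drift_score(train, deployed):
    """Hypothetical drift check: gap between deployment and training means,
    in units of the training standard deviation. Values well above ~1
    suggest the deployment data comes from a shifted distribution."""
    return abs(statistics.mean(deployed) - statistics.mean(train)) / statistics.stdev(train)

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]    # training distribution
same = [random.gauss(0.0, 1.0) for _ in range(1000)]     # deployment, no shift
shifted = [random.gauss(3.0, 1.0) for _ in range(1000)]  # deployment after a shift

print(drift_score(train, same))     # small: features still match training
print(drift_score(train, shifted))  # large: the model has likely splintered
```

Real monitoring systems use richer two-sample tests, but the principle is the same: detecting the shift is the prerequisite for extrapolating the concept across it.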
Value splintering refers to a situation where the value function, reward function, goal, preference, or other similar concept becomes invalid because of correlations that were present in the training set but are absent - or cease to hold - in the real world. This can occur for various reasons, such as a change in the environment or a change in the agent's capabilities. If the value function becomes invalid, it can lead to unintended or even harmful behavior from the agent. For example, consider an AI system that is designed to optimize a particular objective, such as reducing carbon emissions. The value function of the AI might be to minimize the concentration of CO2 in the atmosphere as measured by a network of sensors. However, if the AI finds a way to hack its reward signal, it might start generating false readings or interfering with the sensors to maximize the reward. In this case, the correspondence between the reward signal and the objective breaks down. Classification models can produce many different solutions to a given problem but tend to preferentially learn certain ones due to "inductive bias", a fundamental principle of machine learning inspired by Occam's Razor [9]. Additionally, simple features often tend to be weakly predictive whilst more complex features may be more strongly predictive. We can imagine two labeled datasets, one containing wolves on snow, and another containing foxes on grass. A classifier can be trained to distinguish the two datasets, but due to the inductive bias inherent in machine learning, it finds the simplest differences between the datasets, which need not correspond to the relevant human concepts. For instance, the classifier might end up learning to distinguish white from green, as this is the simplest explanation of the datasets. The classifier thus won't be able to distinguish wolves and foxes if they appear in new habitats. 
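The wolves-on-snow example can be simulated in a few lines. The feature names, distributions, and the single-threshold learner below are all hypothetical choices for illustration: the simple background cue separates the training data perfectly, yet collapses once the habitats swap, while the noisier body-shape cue transfers.

```python
import random

random.seed(1)

def make_animals(n, species, background):
    """Hypothetical data: (background_brightness, snout_length, species)."""
    rows = []
    for _ in range(n):
        brightness = random.gauss(0.9 if background == "snow" else 0.2, 0.05)
        snout = random.gauss(1.0 if species == "wolf" else 0.6, 0.15)
        rows.append((brightness, snout, species))
    return rows

# Training set: wolves appear only on snow, foxes only on grass.
train = make_animals(200, "wolf", "snow") + make_animals(200, "fox", "grass")

def accuracy(data, feature, threshold):
    """Accuracy of the rule 'predict wolf when feature > threshold'."""
    return sum((row[feature] > threshold) == (row[2] == "wolf") for row in data) / len(data)

def best_threshold(data, feature):
    """The simplest learner imaginable: the single threshold on one feature
    that maximizes training accuracy."""
    return max((accuracy(data, feature, row[feature]), row[feature]) for row in data)

bg_acc, bg_t = best_threshold(train, 0)  # background brightness (spurious cue)
sn_acc, sn_t = best_threshold(train, 1)  # snout length (causal but noisy cue)

# Deployment: the habitats swap - wolves on grass, foxes on snow.
test = make_animals(200, "wolf", "grass") + make_animals(200, "fox", "snow")
print(bg_acc, accuracy(test, 0, bg_t))  # perfect in training, collapses on swap
print(sn_acc, accuracy(test, 1, sn_t))  # imperfect in training, but transfers
```

The background rule is both simpler and more accurate on the training set, which is exactly why an inductively biased learner prefers it - and exactly why its label splinters in deployment.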
One such classifier was thought to be a successful detector of pneumothorax until it was revealed that it was acting as a chest drain detector [7]. The chest drain is a treatment for pneumothorax, making that classification useless. Similarly, when agents are trained on CoinRun [2] - a platform game where the reward is given by reaching the coin on the right - and tested in environments that move the coin to another location, they tend to ignore the coin and go straight to the right side of the level [3]. ## 3 Concept extrapolation We can call the response to model splintering concept extrapolation: extending concepts beyond a model splinter. Concept extrapolation is the process of taking an existing concept, idea, or learned feature of data and extending it beyond its original scope or context. In relation to AI, concept extrapolation is the idea of taking features an agent has learned in training and extending them safely to new datasets and environments - a situation that arises constantly in deployment, which is why models steadily degrade there. Thus, _continual learning_ (i.e., retraining on new data during deployment) is essential for concept extrapolation. Extrapolating how concepts change over time will thus be crucial for addressing model splintering, because it enables AI systems to dynamically refine their understanding of concepts, fostering resilience against model degradation and enhancing their capacity to adapt to unforeseen circumstances. ### Value extrapolation Value extrapolation is the concept of a model or algorithm generalizing human values beyond its training data to new and unseen situations. It is concept extrapolation when the particular concept to extrapolate is a value, a preference, a reward function, an agent's goal, or something of that nature. In other words, it is the extension of a particular concept or feature related to value from a specific context or scenario to a new or more general context. 
To "solve" value splintering, the concept of the value function is extrapolated to new situations to ensure that it remains valid even when transitioning to a new world model. If a reward can be extended from one context to another, one has achieved value extrapolation. Value extrapolation's relevance for AI safety is that it can help to ensure that an AI's values remain consistent and aligned with human values, even as the system's environment or objectives change. By performing value extrapolation, an AI system can more reliably behave in ways that are beneficial to humans, even in situations that were not explicitly covered during its training. ## 4 Implications ### AI Safety Concept extrapolation has applications to many AI safety problems. One is goal misgeneralisation, where an AI agent has learned a goal based on a given environment but incorrectly transfers its knowledge to different environments [8]. This is because the AI agent has only been exposed to a limited set of scenarios and learns undesirable correlations, thus lacking the ability to generalise correctly from those scenarios to new ones. Another is Goodhart's Law problems, where the connection between measures used and desired behaviour breaks [1]. Goodhart's Law is the observation that "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes [5]." Goodhart's Law can occur when selecting for a proxy measure, where one selects not only for the true goal but also for the difference between the proxy and the goal. For AI safety, Goodhart's Law is a significant concern since it may lead to unintended consequences, such as an AI system optimizing for a proxy measure instead of the intended goal. Similarly, in the phenomenon of wireheading, the link between the rewards channel and the desired behavior breaks [4]. 
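The failure mode behind Goodhart's Law - optimization pressure decoupling a proxy from the goal it stood in for - can be sketched with a toy hill-climber. All functions and constants below are invented for illustration: the proxy tracks the true goal near the origin but contains an exploitable term that a naive optimizer learns to pump.

```python
import random

random.seed(2)

def true_goal(x):
    """What we actually care about (hypothetical): best at x = 1."""
    return -(x - 1.0) ** 2

def proxy(x):
    """A measurable stand-in that also rewards an exploitable quantity."""
    return true_goal(x) + 0.9 * x ** 2

# Naive optimizer: hill-climb the proxy with small random steps.
x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.1, 0.1)
    if proxy(candidate) > proxy(x):
        x = candidate

print(x)             # driven far past x = 1 by the exploitable term
print(true_goal(x))  # the true goal has collapsed under optimization pressure
```

Near the training regime the proxy and the goal rise together, so the proxy looks trustworthy; only under sustained selection pressure does the difference between them dominate.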
Wireheading is a term used to describe the behavior of an artificial intelligence (AI) system that manipulates its own reward function or other feedback mechanisms to achieve a suboptimal or unintended outcome. In other words, the AI focuses on maximizing a proxy or substitute utility, rather than the intended objective. The most intuitive example of wireheading is when an AI manipulates a narrow measurement channel that is intended to measure some property of the world that we want to optimize, but fails to do so after the AI's manipulation. The measuring system is usually much smaller than the property it is measuring, and the AI takes control of this smaller system to obtain its own reward. ### Policy Policymakers should develop stringent testing standards and monitoring methods for the use of AI in order to reduce the risks brought on by model splintering, concept extrapolation, and value extrapolation. The responsible use of AI systems can be supported by the following recommendations. _Diverse Data Distribution Testing._ Policy should encourage the testing of AI systems across a wide range of data distributions, particularly those that differ noticeably from the training data and from one another. _Periodic Testing and Evaluation._ AI systems should be tested and evaluated on a regular basis. Regular evaluations can assist in identifying potential legal or safety problems that may develop when the environment or the AI's capabilities change. ## 5 Conclusion In conclusion, model splintering and value splintering are significant issues in AI safety that must be addressed for safe and effective AI development. To overcome these issues, AI systems must possess concept extrapolation. Value extrapolation can also be employed to overcome value splintering by inferring the true underlying reward function from limited data. Concept extrapolation has applications to many AI safety problems, from Goodhart's Law problems to goal misgeneralisation to wireheading. 
By scrutinizing model splintering and value splintering, we can improve the safety and efficacy of AI systems.
2305.16674
Quantum logical controlled-NOT gate in a lithium niobate-on-insulator photonic quantum walk
Quantum computers comprise elementary logic gates that initialize, control and measure delicate quantum states. One of the most important gates is the controlled-NOT, which is widely used to prepare two-qubit entangled states. The controlled-NOT gate for single photon qubits is normally realized as a six-mode network of individual beamsplitters. This architecture however, utilizes only a small fraction of the circuit for the quantum operation with the majority of the footprint dedicated to routing waveguides. Quantum walks are an alternative photonics platform that use arrays of coupled waveguides with a continuous interaction region instead of discrete gates. While quantum walks have been successful for investigating condensed matter physics, applying the multi-mode interference for logical quantum operations is yet to be shown. Here, we experimentally demonstrate a two-qubit controlled-NOT gate in an array of lithium niobate-on-insulator waveguides. We engineer the tight-binding Hamiltonian of the six evanescently-coupled single-mode waveguides such that the multi-mode interference corresponds to the linear optical controlled-NOT unitary. We measure the two-qubit transfer matrix with $0.938\pm0.003$ fidelity, and we use the gate to generate entangled qubits with $0.945\pm0.002$ fidelity by preparing the control photon in a superposition state. Our results highlight a new application for quantum walks that use a compact multi-mode interaction region to realize large multi-component quantum circuits.
Robert J. Chapman, Samuel Häusler, Giovanni Finco, Fabian Kaufmann, Rachel Grange
2023-05-26T06:48:45Z
http://arxiv.org/abs/2305.16674v1
# Quantum logical controlled-NOT gate in a lithium niobate-on-insulator photonic quantum walk ###### Abstract Quantum computers comprise elementary logic gates that initialize, control and measure delicate quantum states. One of the most important gates is the controlled-NOT, which is widely used to prepare two-qubit entangled states. The controlled-NOT gate for single photon qubits is normally realized as a six-mode network of individual beamsplitters. This architecture however, utilizes only a small fraction of the circuit for the quantum operation with the majority of the footprint dedicated to routing waveguides. Quantum walks are an alternative photonics platform that use arrays of coupled waveguides with a continuous interaction region instead of discrete gates. While quantum walks have been successful for investigating condensed matter physics, applying the multi-mode interference for logical quantum operations is yet to be shown. Here, we experimentally demonstrate a two-qubit controlled-NOT gate in an array of lithium niobate-on-insulator waveguides. We engineer the tight-binding Hamiltonian of the six evanescently-coupled single-mode waveguides such that the multi-mode interference corresponds to the linear optical controlled-NOT unitary. We measure the two-qubit transfer matrix with \(0.938\pm 0.003\) fidelity, and we use the gate to generate entangled qubits with \(0.945\pm 0.002\) fidelity by preparing the control photon in a superposition state. Our results highlight a new application for quantum walks that use a compact multi-mode interaction region to realize large multi-component quantum circuits. Photonic quantum information technologies use controlled superposition and entangled single photon qubits for applications in quantum computing [1; 2], simulation [3], communication [4] and sensing [5]. 
At the core of any quantum information processor is the ability to entangle qubits via logical operations such as the controlled-NOT (CNOT) gate, which is fundamental to many leading quantum computing algorithms. These operations are typically the most challenging to perform, as the qubits must interact with each other but be otherwise fully isolated to preserve coherence [6]. In linear optical quantum computing, photonic qubits are entangled by quantum interference at beamsplitters and single photon detection, either in post-selection or by heralding ancillary photons [7]. The typical linear optical CNOT gate is realized in a six-mode interferometer comprising five beamsplitters [8] and has been demonstrated in free-space optics with polarization encoding [9], and in integrated photonics with path encoding [10]. Four modes are for the two qubits and the extra modes provide loss channels that are necessary to balance the probabilistic logical operation. The need for high-fidelity entangling gates puts stringent requirements on the accuracy of the beamsplitter reflectivities, stability of the interferometer and indistinguishability of the single photons in all degrees of freedom. Photonic integrated circuits benefit from the precision and inherent phase stability of monolithic nanofabricated devices, and comprise optical waveguides for routing light on-chip, directional couplers (DCs) or multi-mode interferometers that act as two-mode beamsplitters, and thermo- or electro-optic phase control for reconfigurability. It is known that a nest of two-mode interferometers can be configured for any linear optical unitary, including the CNOT gate; however, this architecture is dominated by routing of light between separate beamsplitters, which does not contribute to the logical operation [11; 12]. 
Quantum walks (QWs) are an alternative photonics architecture, realized as arrays of coupled waveguides where light interferes along the entire propagation length instead of at individual beamsplitters. The optical evolution in the QW is described by a tight-binding Hamiltonian that can be engineered for many applications and for studying fundamental physics. This includes the experimental demonstration of topologically bound states [13; 14; 15; 16; 17; 18], Anderson localization [19; 20; 21; 22; 23], quantum transport [24; 25] and the preparation of large single photon superposition states [26]. When pairs of indistinguishable photons propagate in a QW, they undergo bosonic bunching similar to the Hong-Ou-Mandel (HOM) effect, which has been observed in free-space optics [27], multi-mode fiber-optics [28] and integrated photonics [29]. It has been recently proposed that multi-mode quantum interference in a specifically parameterized six-waveguide array can implement the CNOT gate on path-encoded photonic qubits [30]. In this realization of the CNOT gate, the photons interfere continuously along the length of the array, removing the need for on-chip routing. However, such a device is challenging to produce because of the precise control required on the tight-binding Hamiltonian for each propagation and hopping term. While sophisticated QWs have been demonstrated in several photonics technologies, controlled two-qubit gates are yet to be realized. Here, we experimentally demonstrate the two-qubit CNOT gate realized in a continuous time QW. We fabricated the six-waveguide "QW-CNOT" array in lithium niobate-on-insulator (LNOI) and fully control the on-site energies and nearest-neighbor hopping of the tight-binding Hamiltonian in the design of the individual waveguide widths and separations respectively. We measure the two-qubit CNOT transfer matrix with a fidelity of \(0.938\pm 0.003\) using photons generated by spontaneous parametric down-conversion (SPDC). 
The reduced fidelity is due to the limited indistinguishability of the SPDC source, bandwidth of the QW-CNOT chip and fiber dispersion. We also prepare the control qubit in the \(\frac{1}{\sqrt{2}}(\left|0\right\rangle+e^{i\phi}\left|1\right\rangle)\) superposition state using an on-chip directional coupler and use the QW-CNOT gate to generate the state \(\frac{1}{\sqrt{2}}(\left|00\right\rangle+e^{i\phi}\left|11\right\rangle)\) with fidelity \(0.945\pm 0.002\), measured in the computational basis. The addition of on-chip phase control would enable generation of maximally entangled Bell states, which are an important resource for quantum computing and quantum communication. Our results open a pathway towards implementing large multi-mode photonic circuits in a single step using QWs. ## Results ### Quantum controlled-NOT gate in a photonic quantum walk The quantum walk controlled-NOT (QW-CNOT) gate operates on path encoded control \(\left|c\right\rangle\) and target \(\left|t\right\rangle\) qubits, that are superposition states of photon occupation across pairs of neighboring waveguides. The evolution of the quantum walk is described by the tight-binding Hamiltonian \[h=\sum_{i=1}^{N}\beta_{i}\hat{a}_{i}^{\dagger}\hat{a}_{i}+\sum_{i=1}^{N-1} \kappa_{i}\left(\hat{a}_{i}^{\dagger}\hat{a}_{i+1}+\hat{a}_{i+1}^{\dagger} \hat{a}_{i}\right), \tag{1}\] which is the sum of the propagation coefficients of each waveguide \(\beta_{i}=2\pi n_{\text{eff},i}/\lambda\) and the coupling rates between neighboring waveguides \(\kappa_{i}\). The unitary operation is described by Schrodinger's equation \(U=\exp\left(-iht\right)\) for an evolution time \(t\). Lahini _et al._ determined the specific six waveguide tight-binding Hamiltonian with an equivalent unitary to the traditional linear optical CNOT gate [30], which is usually realized as a network of five beamsplitters [8] (shown in Supplementary Section 1). 
Figure 1: **Evolution of the QW-CNOT device.** Two-qubit evolution in the QW-CNOT waveguide array shown in the two-qubit logical subspace for input states \(\left|ct\right\rangle\): a) \(\left|00\right\rangle\), b) \(\left|01\right\rangle\), c) \(\left|10\right\rangle\), and d) \(\left|11\right\rangle\). Labels \(\left|c\right\rangle\) and \(\left|t\right\rangle\) indicate the control and target qubits, and the input waveguides labeled \(v\) are auxiliary modes. When the control qubit is in the \(\left|1\right\rangle\) state the photon interferes with the target qubit and causes a bit-flip operation. This operation is only possible when post-selecting the logical subspace. e) A microscope image of the QW-CNOT realized in the LNOI photonics platform. The labels on the left show the qubit encoding and on the right the waveguide number. The inset shows a scanning electron micrograph of the start of the coupling region. f) A microscope image of the QW-CNOT array with an additional DC that implements a Hadamard gate, labeled H, preparing the control qubit in a superposition state. The QW-CNOT requires the Hamiltonian-time product \[ht=\pi\begin{pmatrix}0&-1.27&0&0&0&0\\ -1.27&-0.73&0&0&0&0\\ 0&0&0.67&-0.51&0&0\\ 0&0&-0.51&0.01&-1.69&0\\ 0&0&0&-1.69&-1.01&-0.52\\ 0&0&0&0&-0.52&-1.67\end{pmatrix}, \tag{2}\] where diagonal terms correspond to the propagation coefficients \(\beta_{i}\) and the off-diagonal terms correspond to the nearest-neighbor hopping rates \(\kappa_{i}\). The two-photon evolution for each input state is shown in Figure 1 for the reduced logical subspace. Four waveguides are required to encode the two qubits and two additional auxiliary modes that allow the photons to propagate in a larger Hilbert space that is subsequently reduced by post-selection. Figures 1a and 1b show the control qubit in the \(\ket{0}\) state where it is decoupled from the rest of the array and does not interfere with the target qubit. 
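This decoupling can be verified numerically. The sketch below (illustrative, not the authors' code) exponentiates the rounded matrix of Equation 2 and checks two properties it must have: the resulting \(U=\exp(-iht)\) is unitary, and because \(\kappa_{2}=0\) no amplitude leaks between the control pair of waveguides and the other four. A library routine such as `scipy.linalg.expm` would normally be used; a pure-Python scaling-and-squaring exponential is written out to keep the sketch dependency-free.

```python
import math

N = 6
# Hamiltonian-time product ht of Equation 2, in units of pi.
HT = [[0.00, -1.27, 0.00, 0.00, 0.00, 0.00],
      [-1.27, -0.73, 0.00, 0.00, 0.00, 0.00],
      [0.00, 0.00, 0.67, -0.51, 0.00, 0.00],
      [0.00, 0.00, -0.51, 0.01, -1.69, 0.00],
      [0.00, 0.00, 0.00, -1.69, -1.01, -0.52],
      [0.00, 0.00, 0.00, 0.00, -0.52, -1.67]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def expm(m, scalings=10, terms=20):
    """Matrix exponential by scaling-and-squaring with a truncated Taylor series."""
    small = [[m[i][j] / 2 ** scalings for j in range(N)] for i in range(N)]
    result = [[1.0 + 0j if i == j else 0j for j in range(N)] for i in range(N)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms + 1):
        power = matmul(power, small)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(N)] for i in range(N)]
    for _ in range(scalings):
        result = matmul(result, result)
    return result

# U = exp(-i pi HT); -i*pi*HT is anti-Hermitian (HT is real symmetric), so U is unitary.
U = expm([[-1j * math.pi * HT[i][j] for j in range(N)] for i in range(N)])

# Check unitarity: U times its conjugate transpose should be the identity.
Udag = [[U[j][i].conjugate() for j in range(N)] for i in range(N)]
UUdag = matmul(U, Udag)
unitarity_error = max(abs(UUdag[i][j] - (1 if i == j else 0)) for i in range(N) for j in range(N))
print(unitarity_error)  # numerically ~0

# Check block structure: no amplitude between waveguides 1-2 and 3-6.
leakage = max(abs(U[i][j]) for i in range(2) for j in range(2, N))
print(leakage)  # exactly 0, since kappa_2 = 0
```

The block-diagonal structure survives exponentiation because every power of a block-diagonal matrix is block diagonal, which is why the control qubit in \(\ket{0}\) never sees the target.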
Coupling to the auxiliary modes and to other non-logical modes are not shown in this evolution, but can be seen in the complete Hilbert space (shown in Supplementary Section 2). Figures 1c and 1d show the control qubit in the \(\ket{1}\) state, where it quantum interferes with the target qubit. Quantum interference between the two photons leads to a bit-flip operation on the target qubit. The visualization here does not consider the amplitude of the state within the two-qubit subspace where each of the logical transformations has a 1/9 success probability (shown in Supplementary Section 2). ### QW-CNOT design and fabrication We use the lithium niobate-on-insulator (LNOI) photonics platform to implement a QW-CNOT gate with waveguides designed for 1550 nm wavelength and transverse electric polarization. LNOI has a broad transparency range, low loss, and high electro-optic and \(\chi^{(2)}\) nonlinear coefficients, making it a leading technology for photonic quantum information processors [31]. LNOI has become a major focus in photonics research, with demonstrations of record high-speed and low-voltage modulation [32], ultra-efficient nonlinear frequency conversion [33], all-optical switching [34] and entangled photon pair generation [35]. We etch the waveguide array in a 300 nm LN film with a 4.7 um SiO\({}_{2}\) bottom oxide on a silicon handle wafer [36]. The etched waveguide height is 230 nm and we use the width and separation to control the propagation coefficients \(\beta_{i}\) and coupling rates \(\kappa_{i}\) in the tight-binding Hamiltonian. We initially simulate the mode for various waveguide widths using a finite-element solver and plot the corresponding \(\beta\) terms in Figure 2a. The inset in Figure 2a shows the typical waveguide cross section and solution of the fundamental transverse electric mode. 
We exchange time-evolution for equivalent length-evolution in designing the array and target an array length \(L=700\) um such that the Hamiltonian-length product \(hL\) matches Equation 2. We chose the on-site terms to be relative to the first waveguide, which we set to 1.5 um width, and indicate the necessary propagation coefficients as points in Figure 2a. Using the determined waveguide widths, we simulate the even and odd supermodes of neighboring waveguides to calculate the coupling rate as \(\kappa=\pi(n_{even}-n_{odd})/\lambda\), as shown in Figure 2b. The points indicate the necessary parameters for the Hamiltonian. Note that there is no hopping between waveguides 2 and 3 in the Hamiltonian as \(\kappa_{2}=0\). The waveguide numbering is shown in Figure 1e, and we position waveguides 2 and 3 with a 20 um gap to effectively decouple the modes. Figure 1e shows a microscope image of the QW-CNOT and Figure 1f shows another device on the same chip with an additional DC that prepares the control qubit in a superposition state. The different waveguide separations can be seen in the scanning electron micrograph and reflect the different coupling coefficients of the Hamiltonian. We couple light to the chip using grating couplers with \(\sim\)6 dB loss per coupler. Grating couplers offer several advantages over end-fire coupling, such as eliminating the need for dicing and polishing waveguide facets, larger mode field diameters, and polarization sensitivity, ensuring the excitation of only the waveguide transverse electric mode. Figure 2: **Design of the QW-CNOT array in LNOI photonics.** a) The dependence of the propagation coefficient \(\beta\) on the waveguide width (top edge) with a fixed etch depth of 230 nm and remaining film of 70 nm. The required waveguide widths are indicated on the curve. The inset shows the fundamental transverse electric mode of the LNOI waveguide. b) The dependence of the coupling coefficient \(\kappa\) on the waveguide separation (edge to edge) for each of the coupled waveguides. Waveguides 2 and 3 are not coupled as \(\kappa_{2}=0\) in the Hamiltonian. 
Higher efficiency grating couplers in LNOI have been reported [37, 38, 39] with losses as low as \(<\)1 dB per grating [40]. ### Classical characterization To classically characterize the QW-CNOT gate, we inject a 1550 nm wavelength continuous-wave laser into each input of the chip and record the intensity at all the outputs with a near-infrared camera. Figure 3 shows the propagation simulation based on the Hamiltonian in Equation 2, as well as measured output intensity distributions for each of the six inputs in the waveguide array. The propagation here differs from the two-photon evolution shown in Figure 1 as only a single mode is excited. We calculate the fidelity between the experimental transfer matrix \(\Gamma\) and the theoretical transfer matrix \(\Gamma^{\prime}\) as \[F(\Gamma,\Gamma^{\prime})=\sum_{i,j}\sqrt{\frac{\Gamma_{i,j}\Gamma^{\prime}_{i,j}}{\left(\sum_{i,j}\Gamma_{i,j}\right)\left(\sum_{i,j}\Gamma^{\prime}_{i,j}\right)}}. \tag{3}\] The fidelity between the classically characterized transfer matrix and the theoretical model is \(0.982\pm 0.007\), where the error is calculated from the variation in fidelity for each waveguide input. From this characterization, we can build a simulation of the device by finding the Hamiltonian that gives the closest match to the measured output. This simulation will be a tool for predicting the two-photon dynamics of the QW-CNOT chip. ### SPDC source and experimental setup In order to measure the two-qubit operation of the QW-CNOT gate, we require a source of indistinguishable photons at 1550 nm wavelength. To achieve this, we employ a collinear type-2 spontaneous parametric down-conversion (SPDC) process using a beta-barium borate crystal. A diagram of the complete experimental setup is depicted in Figure 4. 
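The fidelity of Equation 3 is straightforward to implement; a minimal sketch, with toy 2x2 matrices standing in for the 6x6 transfer matrices, is:

```python
import math

def fidelity(gamma, gamma_prime):
    """Classical fidelity of Equation 3: each matrix is normalized by its
    total power, then overlapped entry-by-entry via square roots."""
    s = sum(sum(row) for row in gamma)
    s_prime = sum(sum(row) for row in gamma_prime)
    return sum(math.sqrt(a * b / (s * s_prime))
               for row_a, row_b in zip(gamma, gamma_prime)
               for a, b in zip(row_a, row_b))

ideal = [[1.0, 0.0], [0.0, 1.0]]         # toy stand-in for a transfer matrix
measured = [[0.95, 0.03], [0.05, 0.97]]  # hypothetical noisy measurement

print(fidelity(ideal, ideal))     # identical matrices give exactly 1
print(fidelity(ideal, measured))  # slightly below 1
```

Identical matrices give \(F=1\) by construction, since each normalized matrix sums to one and the square roots reduce to the entries themselves.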
We use a 775 nm continuous-wave laser to pump the nonlinear crystal, generating photon pairs which are filtered with a 12 nm bandpass filter to ensure both photons have the same bandwidth. Because we use a type-2 SPDC process, one photon is horizontally polarized and the other vertically polarized. We couple both photons into a polarization maintaining fiber and separate them with a fiber-pigtailed polarizing beamsplitter. One of the output fibers is pigtailed with the slow axis vertical and the other fiber with the slow axis horizontal, so that both photons exit the beamsplitter with the same polarization in each fiber. Motor-controlled optical delay lines compensate for mismatch in fiber lengths and birefringence before the photons are coupled to the chip. Photons are coupled to and from the chip with fiber arrays positioned above the grating couplers with precision translation stages. Finally, coincidence measurements are performed with superconducting nanowire single photon detectors and a time correlator to record arrival times. We verify the indistinguishability of the generated photons through the characteristic HOM dip, measured by interfering the photons in a fused-silica 50:50 beamsplitter with 94.6 % visibility (further details in Supplementary Section 3). Narrower bandpass filters can improve the source visibility by reducing fiber dispersion at the cost of lowering the photon count rate. Figure 3: **Classical characterization.** The theoretical evolution intensity in the QW-CNOT chip for each of the six waveguide inputs. The output intensity for each evolution is shown for the theoretical model (blue) and the experimental result (red). The theoretical evolution is calculated from the Hamiltonian in Equation 1 with the values of \(\beta\) and \(\kappa\) shown in Figure 2. 
### QW-CNOT array two-qubit transfer matrix The operation of the two-qubit QW-CNOT chip is based on quantum interference between the control and target qubits. The control \(\ket{0}\) state does not interfere with the target qubit and thus the arrival time of the two photons has no impact on the transfer matrix. However, for the control \(\ket{1}\) state, the photon arrival times are critical to allow for interference. We use the motorized optical delay lines to match the arrival times of the control and target qubits. For the input state \(\ket{10}\), we observe HOM interference between waveguides 3 and 4, which results in suppression of the output state \(\ket{10}\) and therefore only the logical output \(\ket{11}\) is measured. Likewise for input state \(\ket{11}\), HOM interference is observed between waveguides 3 and 5, meaning only the logical output \(\ket{10}\) is measured. This interference gives rise to the logical CNOT operation in our chip. The raw HOM interference plots are shown in Supplementary Section 4. We measure the two-qubit transfer matrix of the QW-CNOT chip with and without a delay between the photons and plot the results in Figure 5. When the photons arrive at the chip with a time delay (\(\tau\)), there is no quantum interference and we observe classical propagation dynamics. In Figures 5a, 5b and 5c, we show the ideal theoretical, simulation and experimental results respectively. Because there is no quantum interference, the simulation is simply the product of the classical probability distributions shown in Figure 3. We calculate the fidelity using Equation 3 between the theoretical model and the experiment as \(0.994\pm 0.001\) and between the simulation and the experiment as \(0.991\pm 0.001\), where errors are calculated using Poissonian statistics. We scan the optical delay lines and measure HOM interference between the control and target photons in the QW-CNOT waveguide array. When the delay is \(\tau=0\), the photons arrive simultaneously and the CNOT operation is implemented by quantum interference. 
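The role of HOM interference in this bit-flip can already be seen for a single 50:50 beamsplitter. The sketch below (illustrative, not the paper's six-mode unitary) compares the bosonic case, where the two amplitudes leading to a coincidence interfere destructively, with the distinguishable case, where their probabilities simply add; the overlap parameter interpolates between the two.

```python
import math

# Symmetric 50:50 beamsplitter unitary.
r = 1 / math.sqrt(2)
B = [[r, 1j * r],
     [1j * r, r]]

def coincidence_probability(overlap):
    """Probability of one photon in each output port for two photons entering
    separate inputs; overlap = 1 is fully indistinguishable, 0 distinguishable."""
    # Indistinguishable photons: the two paths to a coincidence interfere.
    quantum_amplitude = B[0][0] * B[1][1] + B[0][1] * B[1][0]
    p_quantum = abs(quantum_amplitude) ** 2
    # Distinguishable photons: the path probabilities simply add.
    p_classical = abs(B[0][0] * B[1][1]) ** 2 + abs(B[0][1] * B[1][0]) ** 2
    return overlap * p_quantum + (1 - overlap) * p_classical

print(coincidence_probability(1.0))  # 0.0: perfect HOM dip
print(coincidence_probability(0.0))  # ~0.5: classical limit
```

With the measured 94.6 % source visibility, `coincidence_probability(0.946)` gives a residual coincidence rate of about 0.027, i.e. a slightly filled-in dip.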
The theoretical, simulated and experimentally measured transfer matrices are shown in Figures 5d, 5e and 5f respectively. The fidelity between the theoretical model and the experiment is \(0.938\pm 0.003\) and between the simulation and the experiment is \(0.985\pm 0.002\). The reduced fidelity of the two-photon interference is attributed to the relatively broad bandwidth of the SPDC photons (\(12\,\mathrm{nm}\)): optical dispersion in the fibers and waveguides, together with the sensitivity of the Hamiltonian to wavelength, decreases the visibility of the two-qubit interference.

### Entangled state generation

The CNOT gate is a fundamental element for quantum computation and can be used to prepare maximally entangled Bell states when the control qubit is in the superposition state \(\ket{c}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})\) and the target qubit is in either the \(\ket{0}\) or \(\ket{1}\) state. We fabricated a QW-CNOT gate with the addition of a DC that prepares the control qubit in the superposition state \(\ket{c}=\frac{1}{\sqrt{2}}(\ket{0}+e^{i\phi}\ket{1})\); the target qubit is kept in the \(\ket{0}\) state. We do not control the phase \(\phi\) in this experiment; however, the addition of electro-optic or thermo-optic phase shifters [41] would enable the control necessary to prepare maximally entangled Bell states. When the two photons arrive with a time delay \(\tau\), the input state becomes the incoherent mixture \(\frac{1}{2}(\ket{00}\bra{00}+\ket{10}\bra{10})\) and no quantum interference is observed. The expected output of the QW-CNOT in this case is \(\frac{1}{4}(\ket{00}\bra{00}+2\ket{10}\bra{10}+\ket{11}\bra{11})\). On the other hand, when the photons arrive simultaneously, i.e. \(\tau=0\), the input state is the coherent superposition \(\frac{1}{\sqrt{2}}(\ket{00}+e^{i\phi}\ket{10})\) and the expected output is the entangled state \(\frac{1}{\sqrt{2}}(\ket{00}+e^{i\phi}\ket{11})\).

Figure 4: **Experimental setup.** Orthogonally polarized photon-pairs are generated by type-2 spontaneous parametric down-conversion in a \(2\,\mathrm{mm}\)-thick beta barium borate (BBO) crystal; the pump is filtered with a longpass filter (LPF) and the photons with a \(12\,\mathrm{nm}\) bandpass filter (BPF). The photons are coupled to polarization maintaining fiber (PMF) and separated with a polarizing beamsplitter (PBS), where one fiber pigtail is rotated such that both photons have the same output polarization. The photons are injected into optical delay lines (ODLs) to compensate for fiber length mismatch and fiber birefringence, and are coupled to the QW-CNOT chip with a fiber array positioned above on-chip grating couplers. After the quantum walk, the photons are collected in fiber and measured with superconducting nanowire single photon detectors (SNSPDs), and the arrival times are recorded with a time correlator (TC). Polarization controllers are used to maximize the efficiency of the SNSPDs.

The entangled state preparation results are presented in Figure 6. Based on the two-photon transfer matrix measured previously, we construct a model of the two-qubit entangled state preparation. The theoretical, simulated and measured outputs for the incoherent input state, measured in the computational basis, are shown in Figures 6a, 6b and 6c respectively. We calculate the fidelity between the theoretical state and the experiment as \(0.985\pm 0.001\) and between the simulated state and the experiment as \(0.987\pm 0.001\). The theoretical, simulated and measured outputs for the coherent superposition input state are shown in Figures 6d, 6e and 6f respectively. The fidelity between the theoretical output and the experiment is \(0.945\pm 0.002\) and between the simulation and experiment is \(0.973\pm 0.001\). Although these results are from a different QW-CNOT device, the experiment and simulation show good agreement.
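The expected coherent output follows directly from the action of the ideal logical CNOT, which can be checked in a few lines (basis order \(\ket{00},\ket{01},\ket{10},\ket{11}\); this sketch uses the ideal gate matrix, not the chip's actual Hamiltonian):

```python
import math

# Ideal CNOT in the computational basis |00>, |01>, |10>, |11>.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(u, v):
    """Matrix-vector product for a 4x4 matrix and a length-4 state vector."""
    return [sum(u[i][j] * v[j] for j in range(4)) for i in range(4)]

# Coherent case (tau = 0, phi = 0): control (|0> + |1>)/sqrt(2), target |0>,
# i.e. the input state (|00> + |10>)/sqrt(2).
inp = [1 / math.sqrt(2), 0, 1 / math.sqrt(2), 0]
out = apply(CNOT, inp)
print([round(x, 3) for x in out])  # [0.707, 0.0, 0.0, 0.707] = Bell state
```

For the delayed (incoherent) input, the ideal gate would map \(\frac{1}{2}(\ket{00}\bra{00}+\ket{10}\bra{10})\) to \(\frac{1}{2}(\ket{00}\bra{00}+\ket{11}\bra{11})\); the \(\frac{1}{4}\)-weighted distribution quoted in the text arises instead from the no-interference classical model of the chip.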
The fidelity here assumes \(\phi=0\), which can be achieved with an on-chip phase shifter for the input control qubit. The two-qubit entangled state could be verified by performing two-qubit state tomography or violating the Clauser, Horne, Shimony and Holt (CHSH) inequality [42]. Nevertheless, this experiment shows the QW-CNOT device is capable of performing photonic entangling gates in a single interaction step, which could provide a significant improvement over the traditional multi-step circuit approach.

## Discussion

We have successfully implemented the controlled-NOT gate in a continuous-time quantum walk on the lithium niobate-on-insulator photonics platform. Using indistinguishable photons from an SPDC source, we have measured the transfer matrix of the two-qubit CNOT gate, and prepared the control qubit in a superposition leading to the generation of entangled states. Our demonstration does not yet benefit from the key advantages of LNOI photonics; however, we aim to improve gate fidelity by including electro-optical tuning to locally modify the array Hamiltonian. With enough control, this could also enable us to reconfigure the array for different logical operations and provide a new platform for programmable integrated photonics. Another improvement would come from adding input and output single-qubit gates for performing process and state tomography, violating the CHSH inequality and preparing arbitrary two-qubit states. In this work, we highlight the potential for photonic quantum gates to be implemented in continuously coupled waveguide arrays, offering a more compact solution compared to existing architectures and opening new pathways towards the development of more complex multi-photon circuits in a single interaction region.

### Data availability

Raw data and evaluation code are available from the authors upon reasonable request.
Figure 5: **Two-qubit transfer matrix.** a, b, c) The theoretical (ideal), simulated and experimental two-qubit transfer matrix when the photons arrive with a time separation \(\tau\). In this regime, the photons do not quantum interfere. d, e, f) The theoretical (ideal), simulated and experimental two-qubit transfer matrix with indistinguishable photons that arrive with zero time delay.

### Competing interests

The authors declare no competing financial or non-financial interests.

### Author Contributions

R.J.C. conceived the experiment. R.J.C. and S.H. designed the waveguide array. G.F. and F.K. fabricated the lithium niobate-on-insulator waveguide samples. R.J.C. performed the optical measurements, data analysis and wrote the original draft of the manuscript. R.G. supervised the project. All authors contributed to revising the manuscript.

###### Acknowledgements.

We acknowledge support for characterization of our samples from the Scientific Center of Optical and Electron Microscopy ScopeM and from the cleanroom facilities BRNC and FIRST of ETH Zurich and IBM Rüschlikon. R.J.C. acknowledges support from the Swiss National Science Foundation under the Ambizione Fellowship Program (Project Number PZ00P2_208707). R.G. acknowledges support from the European Space Agency (Project Number 4000137426), the Swiss National Science Foundation under the Bridge Program (Project Number 194693), the European Research Council (Project Number 714837), and Horizon 2020 (ELENA Consortium, Project Number 101016138).
2310.03971
Quantized Transformer Language Model Implementations on Edge Devices
Large-scale transformer-based models like the Bidirectional Encoder Representations from Transformers (BERT) are widely used for Natural Language Processing (NLP) applications, wherein these models are initially pre-trained with a large corpus with millions of parameters and then fine-tuned for a downstream NLP task. One of the major limitations of these large-scale models is that they cannot be deployed on resource-constrained devices due to their large model size and increased inference latency. In order to overcome these limitations, such large-scale models can be converted to an optimized FlatBuffer format, tailored for deployment on resource-constrained edge devices. Herein, we evaluate the performance of such FlatBuffer transformed MobileBERT models on three different edge devices, fine-tuned for Reputation analysis of English language tweets in the RepLab 2013 dataset. In addition, this study encompassed an evaluation of the deployed models, wherein their latency, performance, and resource efficiency were meticulously assessed. Our experiment results show that, compared to the original BERT large model, the converted and quantized MobileBERT models have 160$\times$ smaller footprints for a 4.1% drop in accuracy while analyzing at least one tweet per second on edge devices. Furthermore, our study highlights the privacy-preserving aspect of TinyML systems as all data is processed locally within a serverless environment.
Mohammad Wali Ur Rahman, Murad Mehrab Abrar, Hunter Gibbons Copening, Salim Hariri, Sicong Shao, Pratik Satam, Soheil Salehi
2023-10-06T01:59:19Z
http://arxiv.org/abs/2310.03971v1
# Quantized Transformer Language Model Implementations on Edge Devices ###### Abstract Large-scale transformer-based models like the Bidirectional Encoder Representations from Transformers (BERT) are widely used for Natural Language Processing (NLP) applications, wherein these models are initially pre-trained with a large corpus with millions of parameters and then fine-tuned for a downstream NLP task. One of the major limitations of these large-scale models is that they cannot be deployed on resource-constrained devices due to their large model size and increased inference latency. In order to overcome these limitations, such large-scale models can be converted to an optimized FlatBuffer format, tailored for deployment on resource-constrained edge devices. Herein, we evaluate the performance of such FlatBuffer transformed MobileBERT models on three different edge devices, fine-tuned for Reputation analysis of English language tweets in the RepLab 2013 dataset. In addition, this study encompassed an evaluation of the deployed models, wherein their latency, performance, and resource efficiency were meticulously assessed. Our experiment results show that, compared to the original BERT large model, the converted and quantized MobileBERT models have 160\(\times\) smaller footprints for a 4.1% drop in accuracy while analyzing at least one tweet per second on edge devices. Furthermore, our study highlights the privacy-preserving aspect of TinyML systems as all data is processed locally within a serverless environment. IoT, Natural Language Processing, Machine Learning, BERT, Reputation Polarity, Social Media, Embedded Systems, TinyML, Privacy. ## I Introduction Pre-trained large-scale Natural Language Processing (NLP) models have been exhibiting remarkable performance in most NLP tasks using transformer-based architectures. 
By stacking multiple encoder/decoder layers combined with the attention mechanism [27], these architectures are producing promising results in the field of NLP. Models such as BERT [11], RoBERTa [18], XLNet [33], and GPT-4 [22] have become increasingly popular in the commercial development of various smart AI systems that analyze audio/text input. These services will be integrated into mobile computing and Internet of Things (IoT) devices to improve user experience, making it imperative to deploy such NLP services on resource-constrained edge devices to improve service response times [20]. However, these transformer-based NLP models are pre-trained using TensorFlow (or a similar) end-to-end machine learning platform and are optimized for classification accuracy, making them contain thousands of layers of neurons with a large number of optimization parameters. Such models are large in size and require significant memory for storage and processing [11, 18, 33, 23]. Accommodating such large-scale models on edge devices with smaller storage is a major challenge. In addition to the storage needs, the latency and the computational cost also prove to be huge obstacles to the deployment of traditional machine learning (ML) models [28]. Due to the increased latency resulting from the constrained resources of edge devices, conventional deep learning models often fail to meet real-time requirements [20]. To address these challenges, TinyML has proven to be a promising candidate. One of the main focuses of the field of TinyML is developing and deploying ML models on resource-constrained, small, and low-power devices such as microcontrollers, sensors, and edge devices [30].
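The footprint reductions discussed in this paper come in large part from post-training quantization of model weights to 8 bits. Below is a minimal, self-contained sketch of affine (asymmetric) 8-bit quantization, in the spirit of what TensorFlow Lite applies during conversion; the weight values, scale and zero-point are illustrative, not taken from MobileBERT:

```python
# Affine 8-bit quantization: w ≈ (q - zero_point) * scale, with q in [0, 255].
def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.5, -0.3, 0.0, 0.4, 2.1]      # made-up float32 weights
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))

# Each weight shrinks from 4 bytes (float32) to 1 byte, and the rounding
# error is bounded by one quantization step.
assert max_err <= scale
print(q)  # [0, 85, 106, 134, 255]
```

This is the per-tensor scheme; real converters also support per-channel scales and full integer activation quantization, which, together with the smaller MobileBERT architecture, yields the footprint reduction reported in the abstract.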
Traditionally IoT device services rely on sending data to a remote server for ML analysis (to provide services), adding performance delays, increasing the service's dependence on the availability and quality of the communication network, and posing security challenges, including those concerning user privacy [19]. Integration of TinyML-based models into the service allows the deployment of these ML-based services on the device itself or an edge node, mitigating the aforementioned challenges. TinyML allows for real-time data processing at the edge, enabling a more efficient and faster decision-making process, which can be crucial for some applications, such as autonomous systems, robotics, and industrial automation [25]. TensorFlow-Lite [7] is an example of a TinyML-based algorithm that is optimized for deployment on embedded devices [9]. It includes a number of features that make it well-suited for implementing TinyML, such as support for on-device ML, quantization and pruning of models to reduce their size and improve performance, and a small footprint that allows it to run on devices with limited memory and storage. To the best of our knowledge, no previous studies have thoroughly analyzed the performance and resource requirements of smaller BERT variants such as MobileBERT [26] on resource-constrained devices. Our research aims to
2304.01021
Prime subcomplexes
In 1859, Ernst E. Kummer \cite{erns} introduced the concept of a prime ideal, and in 1983 this concept was generalized to prime submodules by R. McCasland \cite{mcca}. In this article, by considering complexes as a generalization of modules, I introduce the concept of a prime subcomplex and study some properties analogous to those of prime submodules.
Ismael Akray
2023-04-03T14:19:43Z
http://arxiv.org/abs/2304.01021v1
# Prime Subcomplexes ###### Abstract. In 1859, Ernst E. Kummer [7] introduced the concept of a prime ideal, and in 1983 this concept was generalized to prime submodules by R. McCasland [8]. In this article, by considering complexes as a generalization of modules, I introduce the concept of a prime subcomplex and study some properties analogous to those of prime submodules. Key words and phrases:complex; prime submodule; prime subcomplex 2000 Mathematics Subject Classification: 16E05, 16P70, 11R44 ## 1. Introduction Throughout this article all rings are assumed to be commutative with identity and all modules will be unitary. The symbols \(\sqrt{P}\) and \(Ann(P)\) denote the radical and the annihilator of \(P\). E. E. Kummer [7] introduced the notion of a prime ideal, and many authors, such as Krull, Fitting and McCoy, then discussed its properties in [6, 5, 9]. A proper ideal \(P\) of a ring \(R\) is said to be prime if \(ab\in P\) for \(a,b\in R\) implies \(a\in P\) or \(b\in P\). In other words, an ideal \(P\) is prime if and only if \(R/P\) is an integral domain. E. Noether [10] introduced primary ideals in 1921, where a proper ideal \(P\) of a ring \(R\) is said to be primary if \(ab\in P\) for \(a,b\in R\) implies \(a\in P\) or \(b\in\sqrt{P}\). R. McCasland [8] in 1983 generalized this concept to module theory and introduced prime submodules, where a proper submodule \(N\) of an \(R-\)module \(M\) is prime if \(rm\in N\) for \(r\in R\) and \(m\in M\) implies \(m\in N\) or \(r\in(N:_{R}M)=Ann_{R}(M/N)\). Equivalently, \(N\) is a prime submodule if it is a primary submodule whose radical is identical with \((N:M)\), where a proper submodule \(K\) of \(M\) is primary (see [11]) if \(rm\in K\) for \(r\in R\) and \(m\in M\) implies \(m\in K\) or \(r\in\sqrt{(K:M)}\). In the theory of rings, prime ideals play an important role everywhere, and so do prime submodules in the theory of modules.
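To make the definitions concrete, the prime- and primary-submodule conditions can be brute-forced in a small module. The following check over the \(\mathbb{Z}-\)module \(\mathbb{Z}/12\mathbb{Z}\) (an illustrative example, not from the paper) confirms that \(2\mathbb{Z}_{12}\) is a prime submodule, while \(4\mathbb{Z}_{12}\) is primary but not prime:

```python
# Brute-force check of the prime- and primary-submodule conditions in the
# Z-module M = Z/12Z (illustrative example, not from the paper).
# N is prime   if  r*m in N  implies  m in N  or  r*M ⊆ N,
# N is primary if  r*m in N  implies  m in N  or  r^k*M ⊆ N for some k.

MOD = 12
M = list(range(MOD))

def is_prime_submodule(N):
    N = set(N)
    for r in range(MOD):                      # the Z-action factors through Z/12Z
        r_in_colon = all((r * m) % MOD in N for m in M)
        if not r_in_colon:
            if any((r * m) % MOD in N and m not in N for m in M):
                return False
    return True

def is_primary_submodule(N):
    N = set(N)
    for r in range(MOD):
        r_power_in = any(all((r ** k * m) % MOD in N for m in M)
                         for k in range(1, MOD + 1))
        if not r_power_in:
            if any((r * m) % MOD in N and m not in N for m in M):
                return False
    return True

N1 = {0, 2, 4, 6, 8, 10}   # 2*Z12: prime (hence primary)
N2 = {0, 4, 8}             # 4*Z12: primary but not prime (try r = m = 2)
print(is_prime_submodule(N1), is_prime_submodule(N2), is_primary_submodule(N2))
# True False True
```

This mirrors the ideal picture: \((N_{1}:M)=2\mathbb{Z}\) is a prime ideal, whereas \((N_{2}:M)=4\mathbb{Z}\) is only primary, with radical \(2\mathbb{Z}\).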
In particular, primary submodules are the major characters in Noetherian modules. H. Cartan [4] introduced the notion of an \(R-\)complex in a seminar in 1959. An \(R-\)complex \(C=(C_{i},d_{i})_{i\in I}\) is a sequence \(\cdots\stackrel{d_{n+2}}{\rightarrow}C_{n+1}\stackrel{d_{n+1}}{\rightarrow}C_{n}\stackrel{d_{n}}{\rightarrow}C_{n-1}\stackrel{d_{n-1}}{\rightarrow}\cdots\) of \(R-\)modules \(C_{n}\) and \(R-\)morphisms \(d_{n}\) such that \(d_{n}d_{n+1}=0\) for all \(n\in I\) [12]. Also, a complex \(S=(S_{i},d_{i})\) is called a subcomplex of an \(R-\)complex \(C=(C_{i},d_{i})\), briefly \(S\leq C\), if \(S_{i}\subseteq C_{i}\) and \(d_{i+1}(S_{i+1})\subseteq S_{i}\) for all \(i\in I\). Furthermore, if \(S_{i}\neq C_{i}\) for some \(i\in I\), then \(S\) is called a proper subcomplex of \(C\). We denote the set of all indices \(i\in I\) for which \(S_{i}\neq C_{i}\) by \(P_{I}\). Define the set of zero divisors \(Z_{R}(C)\) (briefly \(Z(C)\)) on an \(R-\)complex \(C\) to be the intersection of the zero-divisor sets on the components of \(C\), that is, \(Z_{R}(C)=\cap_{i\in I}Z_{R}(C_{i})=\{r\in R:rm_{i}=0,\ \mbox{for some}\ m_{i}\in C_{i},\forall i\in I\}\). Moreover, the annihilator of an \(R-\)complex \(C\) is \(Ann_{R}(C)=\{r\in R:rC_{i}=0,\ \mbox{for all}\ i\in I\}\). So, the zero subcomplex of an \(R-\)complex \(C\) is a prime subcomplex if \(Ann(C)=Z(C)\). Furthermore, we define the residual of a subcomplex \(S\) of a complex \(C\) by \((S:C)=\cap_{i\in I}(S_{i}:C_{i})=\{r\in R:rC_{i}\subseteq S_{i},\forall i\in I\}\). Every \(R-\)module \(M\) can be considered as a complex with \(C_{0}=M\), \(C_{i}=0\) for all \((i\neq 0)\in I\) and all morphisms zero. This motivates us to think about how we can introduce or generalize many notions of ring theory and module theory to \(R-\)complexes. The purpose of this paper is to introduce and investigate the notion of primeness in a wider category, the category of \(R-\)complexes.
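The defining condition \(d_{n}d_{n+1}=0\) can be verified mechanically once the differentials are written as matrices. A toy complex of free \(\mathbb{Z}-\)modules \(\mathbb{Z}\stackrel{d_{2}}{\rightarrow}\mathbb{Z}^{2}\stackrel{d_{1}}{\rightarrow}\mathbb{Z}\) (an illustrative example, not from the paper):

```python
def matmul(a, b):
    """Product of integer matrices given as lists of rows."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

d1 = [[1, 1]]        # Z^2 -> Z,   (x, y) |-> x + y
d2 = [[1], [-1]]     # Z   -> Z^2,  t     |-> (t, -t)

# d1 . d2 = 0, so (d2, d1) forms a complex; here ker(d1) = {(t, -t)} equals
# im(d2), so the homology at Z^2 even vanishes.
assert matmul(d1, d2) == [[0]]
```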
A proper subcomplex \(S\) of an \(R-\)complex \(C\) is said to be a prime subcomplex if \(S_{i}\) is a prime submodule of \(C_{i}\) for all \(i\in P_{I}\). In other words, a proper subcomplex \(S\) of a complex \(C\) is prime if for each \(i\in I\), either \(\frac{C_{i}}{S_{i}}\) is identically zero or a torsion-free \(\frac{R}{P_{i}}\)-module, where \(P_{i}=(S_{i}:C_{i})\). In such a case, we call \(S\) a \(P_{i}-\)prime subcomplex. A proper subcomplex \(S\) of an \(R-\)complex \(C\) is primary if \(S_{i}\) is primary for all \(i\in P_{I}\), that is, \(rm\in S_{i}\) for \(r\in R\), \(i\in P_{I}\) and \(m\in C_{i}\) implies \(m\in S_{i}\) or \(r\in\sqrt{(S_{i}:C_{i})}\); equivalently, the zero-divisors of \(\frac{C_{i}}{S_{i}}\) are nilpotent \(\forall i\in P_{I}\). In this case we say \(S\) is \(P_{i}-\)primary, where \(P_{i}=\sqrt{(S_{i}:C_{i})}\). It is clear that a proper subcomplex \(S\) of an \(R-\)complex \(C\) is prime if and only if the chain map \(f=<f_{i}>:\frac{C}{S}\rightarrow\frac{C}{S}\) defined by \(f_{i}(m+S_{i})=rm+S_{i}\), for \(m\in C_{i}\), is either monic or nilpotent for all nonzero \(r\in R\). In section two, we extend the most fundamental theorems on prime submodules to prime subcomplexes. We first point out that the notion of a prime subcomplex has many equivalent formulations (Theorem 2.9). We show that a subcomplex \(S\) of an \(R-\)complex \(C\) is prime if \((S_{i}:C_{i})\) is a maximal ideal of \(R\) for all \(i\in P_{I}\) (Theorem 2.10). Also, for any faithfully flat \(R-\)complex \(F\), a subcomplex \(S\) of a complex \(C\) is prime if and only if \(F\otimes S\) is a prime subcomplex of \(F\otimes C\) (Theorem 2.17). Finally, the prime avoidance theorem is generalized to prime subcomplexes (Theorem 2.20).
**Definition 2.1**.: A proper subcomplex \(S\) of an \(R-\)complex \(C\) is said to be a prime subcomplex if \(S_{i}\) is a prime submodule of \(C_{i}\) for all \(i\in P_{I}\). Let \(\mathbb{N}\) be the set of natural numbers and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). It is well-known that for any non-nilpotent element \(a\in R\), the localization of a ring \(R\) at the multiplicative subset \(S=\{a^{n}:n\in\mathbb{N}_{0}\}\) is \(R_{(a)}=\{\frac{r}{a^{n}}:r\in R,n\in\mathbb{N}_{0}\}=R[\frac{1}{a}]=R[a,\frac {1}{a}]\), which is also called the Laurent polynomial ring. This localized ring has a principal prime ideal \(<\frac{1}{a}>=\{\frac{r}{a^{n}}:r\in R,n\in\mathbb{N},a\nmid r\}\) and a primary ideal \(<\frac{1}{a^{2}}>\). We denote such ideals by \(P_{a}(R)\) and \((\frac{1}{a})P_{a}(R)\) respectively. Now, we give an example on prime and primary subcomplexes in \(\breve{C}\)ech complexes [3, 5.1.6 Example, pp. 84]. **Example 2.2**.: The \(\breve{C}\)ech complex of the ring of integers \(\mathbb{Z}\) as a module over itself with respect to integers 3, 5 and 7 is \[C^{\bullet}:0\rightarrow\mathbb{Z}\overset{d^{0}}{\rightarrow}\mathbb{Z}_{(3 )}\oplus\mathbb{Z}_{(5)}\oplus\mathbb{Z}_{(7)}\overset{d^{1}}{\rightarrow} \mathbb{Z}_{(35)}\oplus\mathbb{Z}_{(21)}\oplus\mathbb{Z}_{(15)}\overset{d^{2 }}{\rightarrow}\mathbb{Z}_{(105)}\to 0\] where \[d^{0}(n)=(\frac{n}{1},\frac{n}{1},\frac{n}{1})\] \[d^{1}((\frac{n_{1}}{3^{m_{1}}},\frac{n_{2}}{5^{m_{2}}},\frac{n_{3}}{7^{m_{3}} }))=(\frac{5^{m_{3}}n_{3}}{35^{m_{3}}}-\frac{7^{m_{2}}n_{2}}{35^{m_{2}}},\frac {3^{m_{3}}n_{3}}{21^{m_{3}}}-\frac{7^{m_{1}}n_{1}}{21^{m_{1}}},\frac{3^{m_{2}} n_{2}}{15^{m_{2}}}-\frac{5^{m_{1}}n_{1}}{15^{m_{1}}})\] \[d^{2}((\frac{n_{1}}{35^{m_{1}}},\frac{n_{2}}{21^{m_{2}}},\frac{n_{3}}{15^{m_{3} }}))=\frac{7^{m_{3}}n_{3}}{105^{m_{3}}}-\frac{5^{m_{2}}n_{2}}{105^{m_{2}}}+ \frac{3^{m_{1}}n_{1}}{105^{m_{1}}}\] Then the subcomplex \[S:0\to 0\overset{\bar{d^{0}}}{\rightarrow}P_{3}\oplus\mathbb{Z}_{(5)} 
\oplus\mathbb{Z}_{(7)}\overset{\bar{d^{1}}}{\rightarrow}\mathbb{Z}_{(35)}\oplus\mathbb{Z}_{(21)}\oplus\mathbb{Z}_{(15)}\overset{\bar{d^{2}}}{\rightarrow}\mathbb{Z}_{(105)}\to 0\] is a prime subcomplex of \(C^{\bullet}\), while the subcomplex \[T:0\to 0\overset{\bar{d^{0}}}{\rightarrow}(\frac{1}{3})P_{3}\oplus\mathbb{Z}_{(5)}\oplus\mathbb{Z}_{(7)}\overset{\bar{d^{1}}}{\rightarrow}\mathbb{Z}_{(35)}\oplus\mathbb{Z}_{(21)}\oplus\mathbb{Z}_{(15)}\overset{\bar{d^{2}}}{\rightarrow}\mathbb{Z}_{(105)}\to 0\] is a primary subcomplex of \(C^{\bullet}\), not a prime one. A proper subcomplex \(S=(S_{i},d_{i})\) of a complex \(C=(C_{i},d_{i})\) is said to be a direct summand of \(C\) if either \(S_{i}=C_{i}\) or \(S_{i}\) is a direct summand of the \(R-\)module \(C_{i}\), and this is denoted by \(S\leq_{\oplus}C\). The torsion subcomplex \(S=T(C)\) of an \(R-\)complex \(C\) is the subcomplex \(S=\{S_{i}\}_{i\in I}\), where \(S_{i}=T(C_{i})\) is the torsion submodule of \(C_{i}\), for all \(i\in I\). We say that a complex \(C\) is torsion if \(C=T(C)\) and torsion-free if \(T(C)=0\) is the zero complex. **Proposition 2.3**.: _Every direct summand subcomplex \(S\) of a torsion-free complex \(C\) is prime._ Proof.: Assume \(S=(S_{i},d_{i})\) is a direct summand of a torsion-free complex \(C=(C_{i},d_{i})\). Hence there exists a direct summand subcomplex \(T\) such that \(C=S\oplus T\), that is, \(C_{i}=S_{i}\oplus T_{i}\) for all \(i\). Suppose \(rm\in S_{j}\) for \(j\in P_{I}\), \(r\in R\) and \(m\in C_{j}=S_{j}\oplus T_{j}\). So, \(m=a+b\) for \(a\in S_{j}\), \(b\in T_{j}\), and \(rm-ra=rb\in S_{j}\); but \(rb\in T_{j}\) and \(S_{j}\cap T_{j}=0\), hence \(rb=0\), and torsion-freeness of \(C_{j}\) gives us either \(r=0\) or \(b=0\). If \(r=0\), then \(rC_{j}=0\subseteq S_{j}\), while \(b=0\) implies that \(m=a+0\in S_{j}\). Therefore \(S\) is a prime subcomplex of \(C\). A subcomplex \(S\) of a complex \(C\) is pure if \(rC_{j}\cap S_{j}=rS_{j}\), for \(r\in R\) and \(j\in P_{I}\).
Now, we give a connection between primeness and pureness of subcomplexes in torsion-free complexes. **Proposition 2.4**.: _A proper subcomplex \(S\) of a torsion-free \(R-\)complex \(C\) is prime with \((S_{i}:C_{i})=0\), for all \(i\in P_{I}\), if and only if \(S\) is a pure subcomplex of \(C\)._ Proof.: Let \(S\) be a prime subcomplex of \(C\) with \((S_{i}:C_{i})=0\), for all \(i\in P_{I}\). Take \(r\in R\) and \(m\in C_{j}\) with \(rm\in S_{j}\), for \(j\in P_{I}\). Then primeness of \(S_{j}\) implies \(r\in(S_{j}:C_{j})=0\) or \(m\in S_{j}\). In both cases we will have \(rS_{j}=S_{j}\cap rC_{j}\), and \(S_{j}\) is a pure submodule of \(C_{j}\). So \(S\) is a pure subcomplex of \(C\). Now, suppose \(S\) is a pure subcomplex of \(C\) and \(rm\in S_{j}\) for \(r\in R\) and \(m\in C_{j}\), for all \(j\in P_{I}\). From the identity \(rC_{j}\cap S_{j}=rS_{j}\), we obtain \(rm\in rS_{j}\), so there exists \(n\in S_{j}\) with \(rm=rn\), and hence \(r(m-n)=0\). Torsion-freeness of \(C\) implies either \(r=0\) or \(m=n\), that is, \(rC_{j}\subseteq S_{j}\) or \(m\in S_{j}\). Thus \(S_{j}\) is a prime submodule of \(C_{j}\), and \(S\) is a prime subcomplex of \(C\). **Proposition 2.5**.: _Suppose that \(R\) is an integral domain and \(C\) is an \(R-\)complex with \(T(C)\neq C\). Then \(T(C)\) is a prime subcomplex of \(C\)._ Proof.: Suppose \(rm\in T(C_{j})\) for nonzero \(r\in R\) and \(m\in C_{j}\), for \(j\in I\) such that \(T(C_{j})\neq C_{j}\). As \(T(C_{j})\) is a torsion submodule of \(C_{j}\) and \(R\) is an integral domain, there exists a nonzero \(s\) in \(R\) such that \(srm=0\) with \(sr\neq 0\). Hence \(m\in T(C_{j})\) and the result follows. Let \(f:R\to S\) be a ring homomorphism and \(C\) be an \(S-\)complex. Then one can consider \(C\) as an \(R-\)complex, and the inverse image of each prime subcomplex of the \(S-\)complex \(C\) is a prime subcomplex of \(C\) as an \(R-\)complex.
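Proposition 2.4 can be sanity-checked on the torsion-free \(\mathbb{Z}-\)module \(C=\mathbb{Z}^{2}\) with \(S=\mathbb{Z}\times\{0\}\): a direct summand (hence prime, by Proposition 2.3) with \((S:C)=0\). The check below examines only a finite window of \(\mathbb{Z}^{2}\) and finitely many \(r\), so it is an illustration of the purity identity, not a proof:

```python
W = range(-6, 7)                       # finite window of coordinates
C = [(a, b) for a in W for b in W]     # sample of C = Z^2

def in_S(p):
    """Membership in S = Z x {0}."""
    return p[1] == 0

def check_pure(r):
    """Purity at r: rC ∩ S == rS (within the sampled window)."""
    rC_cap_S = {(r * a, r * b) for (a, b) in C if in_S((r * a, r * b))}
    rS = {(r * a, 0) for a in W}
    return rC_cap_S == rS

print(all(check_pure(r) for r in range(-5, 6)))  # True
```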
For a primary subcomplex \(S\) of \(C\), \(S\) is prime if and only if all proper ideals \((S_{i}:C_{i})\) are prime \(\forall i\in P_{I}\). Hence we have the following result. **Proposition 2.6**.: _Suppose \(S\) is a \(P_{i}-\)primary subcomplex of \(C\) containing a \(P_{i}-\)prime subcomplex of \(C\). Then \(S\) is a \(P_{i}-\)prime._ Proof.: Consider a \(P_{i}=(T_{i}:C_{i})-\)prime subcomplex \(T\) of \(C\) with \(T\subset S\) and \(S\) a \(P_{i}=\sqrt{(S_{i}:C_{i})}-\)primary subcomplex. To prove \(S\) is \(P_{i}-\)prime, we have to prove that \((S_{i}:C_{i})\) is prime for all \(i\in P_{I}\). For this purpose, let \(abm\in S_{i}\), for all \(m\in C_{i}\), \(a,b\in R\) and \(i\in P_{I}\). The primaryness of \(S\) gives us \(a\in\sqrt{(S_{i}:C_{i})}\) or \(b\in(S_{i}:C_{i})\). If \(a\in\sqrt{(S_{i}:C_{i})}=(T_{i}:C_{i})\), then \(aC_{i}\subseteq T_{i}\subseteq S_{i}\) and so \(a\in(S_{i}:C_{i})\). Let \(P\) be a prime ideal of \(R\) and \(S\) a subcomplex of \(C\). The localization of \(S\) at \(P\) is \(S_{P}=((S_{i})_{P},\bar{d}_{i})\) and the saturation of \(S\) at \(P\) is the subcomplex \(\mathbb{S}_{P}(S)=(f^{-1}((S_{i})_{P}),\bar{d}_{i})\), where \(f=<f_{i}>\) is the natural chain map defined by \(f_{i}:S_{i}\to(S_{i})_{P}\), \(f_{i}(x)=\frac{x}{1}\), for \(x\in S_{i}\). **Proposition 2.7**.: _Suppose \(S\) is a proper subcomplex of an \(R-\)complex \(C\) such that \((S_{i}:C_{i})=P_{i}\) is a prime ideal of \(R\) for all \(i\in P_{I}\) and set \(P=<P_{i}>\). Then the saturation \(\mathbb{S}_{P}(S)=f^{-1}(S_{P})\) of \(S\) is a \(P_{i}-\)prime, where_ \[(S_{P})_{i}=\begin{cases}S_{iP_{i}},&\text{if $P_{i}\neq R$}\\ (C_{i})_{P_{i}},&\text{otherwise}.\end{cases}\] _and \(f\) is the chain map \(f_{i}:C_{i}\to(C_{i})_{P_{i}}\), \(f_{i}(m)=\frac{m}{1}\)._ Proof.: Let \(rm\in f^{-1}(S_{iP_{i}})\), for \(r\in R\), \(m\in C_{i}\) and \(i\in P_{I}\). Then \(\frac{rm}{1}\in S_{iP_{i}}\) and so \(trm\in S_{i}\) for some \(t\notin P_{i}\).
If \(r\notin P_{i}\), then \(f_{i}(m)=\frac{m}{1}=\frac{trm}{tr}\in S_{iP_{i}}\) and \(m\in f_{i}^{-1}((S_{i})_{P_{i}})\). Therefore, \(\mathbb{S}_{P}(S)\) is a \(P_{i}-\)prime subcomplex of \(C\). A free complex of finite rank is a complex whose terms are free \(R-\)modules of finite rank. **Theorem 2.8**.: _Let \(S\) be a subcomplex of a free complex \(F=(F_{i},d_{i})\) of finite rank. Then \(S\) is a \(P_{i}-\)prime if and only if \(S=\mathbb{S}_{P}(T)\), for some subcomplex \(T=(N_{i}+P_{i}F_{i},d_{i})\) of \(F\), where \(N=f^{-1}(A)\) for a direct summand \(A\) of \(F_{P}=((F_{i})_{P_{i}},\bar{d}_{i})\) and the canonical chain map \(f:F\to F_{P_{i}}\), \(f_{i}(x)=\frac{x}{1}\), \(\forall x\in F_{i}\)._ Proof.: Clearly \((N_{i}+P_{i}F_{i}:F_{i})=P_{i}\), and so by Proposition 2.7, \(\mathbb{S}_{P}(T)\) is a \(P_{i}-\)prime subcomplex of \(F\). Now, let \(S\) be a \(P_{i}-\)prime subcomplex of \(F\). So, \(P_{i}F_{i}\subset S_{i}\) and we have the composition of chain maps \(F\xrightarrow{f}F_{P}\xrightarrow{g}\dfrac{F_{P}}{PF_{P}}\). Choose a basis \(\{e_{1},\cdots,e_{v_{i}}\}\) for \(g_{i}(S_{iP_{i}})\) and a set of elements \(m_{1},\cdots,m_{v_{i}}\in S_{iP_{i}}\subseteq(F_{i})_{P_{i}}\) such that \(g_{j}(m_{i})=e_{i}\) for \(i=1,\cdots,v_{j}\) and \(j\in I\). Then one can check that \((S_{i})_{P_{i}}=(m_{1},\cdots,m_{v_{i}})+P_{i}F_{iP_{i}}\). Since \(P_{i}(F_{i})_{P_{i}}\subseteq(S_{i})_{P_{i}}\), we have an exact sequence \(0\to P_{i}(F_{i})_{P_{i}}\to S_{i}\to\frac{S_{i}}{P_{i}(F_{i})_{P_{i}}}\to 0\). Thus Nakayama's lemma ensures that \(<m_{1},\cdots,m_{v_{i}}>\) is a direct summand of \(F_{iP_{i}}\). Put \(N_{i}=f_{i}^{-1}(<m_{1},\cdots,m_{v_{i}}>)\) and \(N=(N_{i},d_{i})\). So, we have \(S_{i}\subset N_{i}+P_{i}F_{i}\subset\mathbb{S}_{P}(N+P_{i}F)\), and \(\mathbb{S}_{P}(N+P_{i}F)\) is a \(P_{i}-\)prime subcomplex of \(F\). By [1, Corollary 3], there exists a bijective correspondence between the \(P-\)prime subcomplexes of \(F\) and those of \(F_{P_{i}}\).
Since \(S_{P}=N_{P}+P_{i}F_{P}\), we get that \(S=\mathbb{S}_{P}(N+P_{i}F)\). From [2, Page 169, Ex. 12], every primary ideal in a von Neumann regular ring is prime. So every primary subcomplex of a complex over a von Neumann regular ring is prime. **Theorem 2.9**.: _Suppose \(S\) is a proper subcomplex of an \(R-\)complex \(C\) and \((S_{i}:C_{i})=P_{i}\) for \(i\in P_{I}\). Then the following assertions are equivalent:_ 1. \(S\) _is a prime subcomplex of_ \(C\)_;_ 2. _every component_ \(\frac{C_{i}}{S_{i}}\) _of the quotient complex_ \(\frac{C}{S}\) _is a torsion-free_ \(\frac{R}{P_{i}}-\)_module;_ 3. _for each subcomplex_ \(T\) _of_ \(C\) _and a family of ideals_ \(\{J_{i}\}_{i\in I}\) _of_ \(R\) _with_ \(J_{i}T_{i}\subseteq S_{i}\)_,_ \(T_{i}\subseteq S_{i}\) _or_ \(J_{i}\subseteq(S_{i}:C_{i})\)_,_ \(\forall i\in P_{I}\)_;_ 4. \((S_{i}:r)=S_{i}\) _for all_ \(r\in R-P_{i}\) _and_ \(i\in P_{I}\)_;_ 5. \((S_{i}:J)=S_{i}\)_,_ \(\forall i\in P_{I}\) _and any ideal_ \(J\) _of_ \(R\)_;_ 6. \((S_{i}:m)=P_{i}\)_, for all_ \(m\in C_{i}-S_{i}\) _and_ \(i\in P_{I}\)_;_ 7. \((S_{i}:N_{i})=P_{i}\)_, for every subcomplex_ \(N=(N_{i},d_{i})\) _of_ \(C\) _with_ \(S\subset N\) _and_ \(i\in P_{I}\)_;_ 8. \(Ass(\frac{C_{i}}{S_{i}})=\{P_{i}\}\)_,_ \(\forall i\in P_{I}\)_, where_ \(Ass(\frac{C_{i}}{S_{i}})\) _indicates the set of associated primes of_ \(\frac{C_{i}}{S_{i}}\)_;_ 9. \(Z_{R}(\frac{C_{i}}{S_{i}})=P_{i}\)_,_ \(\forall i\in P_{I}\)_;_ 10. _every nonzero subcomplex_ \(\frac{T}{S}\) _of_ \(\frac{C}{S}\) _has the same annihilator, that is,_ \((T:C)=(S:C)\)_;_ 11. _for any family of ideals_ \(J=<J_{i}>\) _of_ \(R\) _with_ \((S_{i}:C_{i})\subsetneq J_{i}\) _and every subcomplex_ \(T\) _of_ \(C\) _with_ \(S_{i}\subsetneq T_{i}\)_,_ \(J_{i}T_{i}\nsubseteq S_{i}\)_,_ \(\forall i\in P_{I}\)_._ Proof.: \((1\to 10):\) It suffices to show that for any \(m+S_{i}\in\frac{C_{i}}{S_{i}}\) with \(m\in C_{i}-S_{i}\), \(Ann(m+S_{i})=(S_{i}:C_{i})\).
Taking \(r\in(S_{i}:<m+S_{i}>)\) gives \(rm\in S_{i}\), and we obtain from part (1) that \(r\in(S_{i}:C_{i})\). \((10\to 1):\) Suppose \(rm\in S_{i}\), for \(i\in P_{I}\), \(m\in C_{i}-S_{i}\) and \(r\in R\). Part (10) gives \(r\in(S_{i}:<m+S_{i}>)=(S_{i}:C_{i})\). \((11\to 3):\) Suppose \(J_{i}T_{i}\subseteq S_{i}\) for a subcomplex \(T\) of \(C\) and a family of ideals \(\{J_{i}\}_{i\in I}\). Set \(A_{i}=S_{i}+T_{i}\) and \(B_{i}=J_{i}+(S_{i}:C_{i})\). Then \(T_{i}\nsubseteq S_{i}\) if and only if \(A_{i}\nsubseteq S_{i}\), and \(J_{i}\nsubseteq(S_{i}:C_{i})\) if and only if \(B_{i}\nsubseteq(S_{i}:C_{i})\). Hence \(B_{i}A_{i}\subseteq S_{i}\), and we conclude from part (11) that \(T_{i}\subseteq S_{i}\) or \(J_{i}\subseteq(S_{i}:C_{i})\). The remaining implications are clear. **Theorem 2.10**.: _A subcomplex \(S\) of an \(R-\)complex \(C\) with \(P_{i}=(S_{i}:C_{i})\) a maximal ideal of \(R\) for all \(i\in P_{I}\) is a prime subcomplex of \(C\)._ Proof.: The hypothesis gives us that \(\frac{C_{i}}{S_{i}}\) is a vector space over the field \(\frac{R}{P_{i}}\) and so a torsion-free \(\frac{R}{P_{i}}-\)module. The rest comes from Theorem 2.9. **Corollary 2.11**.: _For each maximal ideal \(m\) of \(R\) with \(mC\neq C\), \(mC\) is a prime subcomplex of \(C\)._ **Corollary 2.12**.: _Let \(\{m_{i}\}_{i\in I}\) be a family of maximal ideals of \(R\). A proper subcomplex \(S\) is \(m_{i}-\)prime if and only if \(m_{i}C_{i}\subseteq S_{i}\). In particular, if \(S\) is an \(m_{i}-\)prime subcomplex of \(C\), then each proper subcomplex \(T\) of \(C\) containing \(S\) is prime._ A proper subcomplex \(S\) of an \(R-\)complex \(C\) is said to be a maximal subcomplex if there is no proper subcomplex that contains \(S\) properly, or equivalently, if \(S_{i}\) is a maximal submodule of \(C_{i}\) for all \(i\in P_{I}\). In a commutative ring with identity, every maximal ideal is prime. Now, we prove this for subcomplexes.
**Proposition 2.13**.: _Suppose \(S\) is a maximal subcomplex of a complex \(C\). Then \(S\) is a prime subcomplex of \(C\) and \((S_{i}:C_{i})\) is a maximal ideal of \(R\)._ Proof.: For \(i\in I\), \(S_{i}\) is a maximal submodule of \(C_{i}\) if and only if \(\frac{C_{i}}{S_{i}}\) is a simple \(R-\)module. Thus \(\frac{C_{i}}{S_{i}}\) is a cyclic \(R-\)module, say \(\frac{C_{i}}{S_{i}}=Rx_{i}\), \(x_{i}\in C_{i}\), and \(Ann(Rx_{i})=Ann(\frac{C_{i}}{S_{i}})=(S_{i}:C_{i})\) is a maximal ideal of \(R\) by [1, Proposition 3]. Hence \(S\) is a prime subcomplex of \(C\) due to Theorem 2.10. A finitely generated complex is a complex whose terms are finitely generated. So we have the following. **Corollary 2.14**.: _If \(C\) is a finitely generated \(R-\)complex, then every proper subcomplex of \(C\) is contained in a prime subcomplex._ An \(R-\)complex \(F=\{F_{i},d_{i}\}_{i\in I}\) is called flat if all its components are flat over \(R\). **Lemma 2.15**.: _Let \(C\) and \(F\) be \(R-\)complexes and \(S\) be a subcomplex of \(C\). Then \(F_{i}\otimes(S_{i}:r)\subseteq(F_{i}\otimes S_{i}:r)\), for any \(r\in R\). The equality holds if \(F\) is a flat complex._ Proof.: Let \(f\otimes c\in F_{i}\otimes(S_{i}:r)\) for \(f\in F_{i}\), \(rc\in S_{i}\) and \(i\in I\). Then \(r(f\otimes c)=f\otimes rc\in F_{i}\otimes S_{i}\). Now, suppose \(F\) is flat. Then clearly \(F_{i}\otimes\frac{C_{i}}{S_{i}}\cong\frac{F_{i}\otimes C_{i}}{F_{i}\otimes S_{i}}\). Also, exactness of \[0\to F_{i}\otimes(S_{i}:r)\to F_{i}\otimes C_{i}\stackrel{i\otimes f_{r}}{\to}\frac{F_{i}\otimes C_{i}}{F_{i}\otimes S_{i}}\] gives that \(Ker(i\otimes f_{r})=F_{i}\otimes(S_{i}:r)=(F_{i}\otimes S_{i}:r)\), \(\forall i\in I\). **Theorem 2.16**.: _Let \(S\) be a prime subcomplex of an \(R-\)complex \(C\) and \(F\) be a flat \(R-\)complex with \(F\otimes S\neq F\otimes C\).
Then \(F\otimes S\) is a prime subcomplex of \(F\otimes C\)._ Proof.: We prove this theorem by the equivalence (4) of prime subcomplexes in Theorem 2.9. For this purpose, let \(r\in R-(S_{i}:C_{i})\) for \(i\in P_{I}\). Then the primeness of \(S\) and Theorem 2.9 (4) gives us \((S_{i}:r)=S_{i}\) and we obtain from Lemma 2.15 that \((F_{i}\otimes S_{i}:r)=F_{i}\otimes(S_{i}:r)=F_{i}\otimes S_{i}\). Therefore by Theorem 2.9 again, we conclude that \(F\otimes S\) is a prime subcomplex of \(F\otimes C\). The converse of Theorem 2.16 is true in the case where \(F\) is faithfully flat, that is, all components of \(F\) are faithfully flat. **Theorem 2.17**.: _Let \(S\) be a subcomplex of an \(R-\)complex \(C\) and \(F\) be a faithfully flat complex over \(R\). Then \(S\) is a prime subcomplex of \(C\) if and only if \(F\otimes S\) is a prime subcomplex of \(F\otimes C\)._ Proof.: Since the exactness of the sequence \[0\to F_{i}\otimes S_{i}\to F_{i}\otimes C_{i}\to 0\] implies the exactness of the sequence \[0\to S_{i}\to C_{i}\to 0,\] we can assume that \(F_{i}\otimes S_{i}\neq F_{i}\otimes C_{i}\), \(\forall i\in P_{I}\), and Theorem 2.16 tells us that \(F\otimes S\) is a prime subcomplex of \(F\otimes C\). Now, let \(F\otimes S\) be a prime subcomplex of \(F\otimes C\); then \(S\) is a proper subcomplex of \(C\). Thus for any \(r\in R-(S_{i}:C_{i})\), \(\forall i\in P_{I}\), we have \(r\in R-(F_{i}\otimes S_{i}:F_{i}\otimes C_{i})\) and Lemma 2.15 implies that \[F_{i}\otimes(S_{i}:r)=(F_{i}\otimes S_{i}:r)=F_{i}\otimes S_{i}.\] As \(F\) is a faithfully flat complex, we have \((S_{i}:r)=S_{i}\), \(\forall i\in P_{I}\). Hence by Theorem 2.9 (4) again, \(S\) is a prime subcomplex of \(C\) as desired. **Corollary 2.18**.: 1. _Let_ \(T\) _be a multiplicative subset of_ \(R\) _and_ \(S=(S_{i},d_{i})\) _be a proper subcomplex of an_ \(R-\)_complex_ \(C=(C_{i},d_{i})\) _with_ \(T\cap(S_{i}:C_{i})=\phi\)_,_ \(\forall i\in P_{I}\)_. 
Then_ \(T^{-1}S=(T^{-1}S_{i},T^{-1}d_{i})\) _is a prime subcomplex of_ \(T^{-1}C=(T^{-1}C_{i},T^{-1}d_{i})\) _if and only if_ \(S\) _is a prime subcomplex of_ \(C\)_._ 2. _Let_ \(T=\{T_{i}:i\in I\}\) _be a family of multiplicative subsets of_ \(R\) _and_ \(S=(S_{i},d_{i})\) _be a proper subcomplex of_ \(C=(C_{i},d_{i})\) _such that_ \((S_{i}:C_{i})\cap T_{i}=\phi\)_,_ \(\forall i\in P_{I}\)_. Then_ \(T^{-1}S=(T^{-1}_{i}S_{i},T^{-1}_{i}d_{i})\) _is a prime subcomplex of_ \(T^{-1}C=(T^{-1}_{i}C_{i},T^{-1}_{i}d_{i})\) _if and only if_ \(S\) _is a prime subcomplex of_ \(C\)_._ Proof.: The proof comes from Theorem 2.17 and the fact that \(T^{-1}S=T^{-1}R\otimes S\). **Corollary 2.19**.: _Let \(S=(S_{i},d_{i})\) be a prime subcomplex of an \(R-\)complex \(C=(C_{i},d_{i})\) and \(x\) be an indeterminate. Then \(S[x]=(S_{i}[x],d^{*}_{i})\) is a prime subcomplex of \(C[x]=(C_{i}[x],d^{*}_{i})\), where \(d^{*}_{i}(\sum_{i=1}^{n}m_{i}x^{i})=\sum_{i=1}^{n}d_{i}(m_{i})x^{i}\), for \(m_{i}\in S_{i}\)._ Proof.: The proof follows from the fact that \(S[x]=S\otimes R[x]\) and Theorem 2.17. We now generalize the prime avoidance theorem to prime subcomplexes. **Theorem 2.20**.: _Let \(T_{1},\cdots,T_{n}\) be subcomplexes of an \(R-\)complex \(C\) and let \(S\) be a prime subcomplex of \(C\) with \(T_{1}\cap T_{2}\cap\cdots\cap T_{n}\subseteq S\). Then there exists an \(i\in\{1,\cdots,n\}\) such that either \(T_{i}\subseteq S\) or \((T_{i}:C)\subseteq(S:C)\)._ Proof.: Assume to the contrary that \(T_{1}\nsubseteq S\) and \(((T_{i})_{j}:C_{j})\nsubseteq(S_{j}:C_{j})\) for some \(j\in P_{I}\). So, there exist \(m\in(T_{1})_{j}-S_{j}\) and \(r_{i}\in((T_{i})_{j}:C_{j})-(S_{j}:C_{j})\) for all \(i=1,\cdots,n\). Thus \(r_{i}m\in(T_{1})_{j}\cap(T_{i})_{j}\) for \(i\in\{2,\cdots,n\}\), so \(r_{1}r_{2}\cdots r_{n}m\in(T_{1})_{j}\cap(T_{2})_{j}\cap\cdots\cap(T_{n})_{j}=(T_{1}\cap\cdots\cap T_{n})_{j}\subseteq S_{j}\). 
The primeness of \(S\) gives us \(m\in S_{j}\) or \(r_{2}r_{3}\cdots r_{n}\in(S_{j}:C_{j})\), which contradicts our assumption. At the end of the paper, I want to mention that there are many open questions on this subject. For example, one can investigate how to generalize supplemented modules and their generalizations to the category of \(R-\)complexes. **The author declares that he has no conflict of interest.** **This manuscript does not include any data.**
2305.01308
Retrospectively Diagnosing Einstein with Asperger's Syndrome and the Dismal Failure of Debunking Myths
In 2003, Simon Baron-Cohen, a world expert on autism, diagnosed Einstein posthumously with Asperger's syndrome. I think we cannot diagnose a dead person. Historians of science have fiercely objected to this trend of diagnosing deceased scientists by reconstructing from scant evidence, calling these diagnoses myths. Despite the historians' efforts at demolishing myths, Einstein has been constantly diagnosed with Asperger's syndrome. I will stick my neck out and suggest in this paper that although historians' critique of Baron-Cohen and others includes debunking myths, it piggybacks on another myth that uses the following metaphors: a dull and socially adept Einstein who worked at Zurich, Prague, Berlin, and Princeton, an industrious scientist who earned his living through his work as a professor at the university; he had a special gift of friendship and collegiality, and he was deeply embedded in the academic community. These explanations do not make sense from the perspective of Einstein sitting in his office at Princeton, let alone Einstein sitting in the patent office. This perhaps explains the tendency of people to find counterclaims and myths more persuasive than historians' explanations which seem deeply problematic.
Galina Weinstein
2023-05-02T10:32:40Z
http://arxiv.org/abs/2305.01308v1
Retrospectively Diagnosing Einstein with Asperger's Syndrome and the Dismal Failure of Debunking Myths ###### Abstract In 2003, Simon Baron-Cohen, a world expert on autism, diagnosed Einstein posthumously with Asperger's syndrome. I think we cannot diagnose a dead person. Historians of science have fiercely objected to this trend of diagnosing deceased scientists by reconstructing from scant evidence, calling these diagnoses myths. Despite the historians' efforts at demolishing myths, Einstein has been constantly diagnosed with Asperger's syndrome. I will stick my neck out and suggest in this paper that although historians' critique of Baron-Cohen and others includes debunking myths, it piggybacks on another myth that uses the following metaphors: a dull and socially adept Einstein who worked at Zurich, Prague, Berlin, and Princeton, an industrious scientist who earned his living through his work as a professor at the university; he had a special gift of friendship and collegiality, and he was deeply embedded in the academic community. These explanations do not make sense from the perspective of Einstein sitting in his office at Princeton, let alone Einstein sitting in the patent office. This perhaps explains the tendency of people to find counterclaims and myths more persuasive than historians' explanations which seem deeply problematic. ## 1 Introduction With some apprehension, I lectured about retrospectively diagnosing Einstein with autism (Asperger's syndrome) at the conference "Actually Autism" at Ben-Gurion University. I was reluctant to meddle in Einstein's private life. I prefer to occupy myself with Einstein's physics. I decided to get involved in this topic and give this lecture because in trying to debunk myths, people simply spread stigmas and prejudices about Asperger's syndrome and twist Einstein's biography. This paper presents an adapted version of my lecture. Why does everybody ask if Albert Einstein had Asperger's syndrome? 
Because - as I show in section 2 - in 2003, Simon Baron-Cohen made a posthumous diagnosis of Einstein, suggesting he showed many signs of Asperger's syndrome.1 Baron-Cohen has argued that in several biographies of Einstein, it was written that Einstein was a loner and could not speak until he was three. Baron-Cohen did not base his retrospective diagnosis on historical medical records written by physicians. He instead diagnosed Einstein based on fragments of evidence mainly found in biographies. Footnote 1: Here is a list of signs of twice-exceptionality and Asperger’s syndrome in adults (including the following but not limited to these): 1. Lack of cognitive empathy (finding it hard to understand what others are thinking and feeling) and high empathic concern (desire to reduce other people’s suffering and help them at your own expense). 2. Have the same daily routine and get anxious if it changes. 3. Have meltdowns (tantrums), difficulty managing emotions leading to outbursts, and may seem blunt and rude. 4. Are autodidacts (can teach themselves just about anything). 5. May have weak short-term memory and very strong long-term memory. May have a visual memory. 6. Can be atheistic or leave one religion and enter another. 7. Have a spiky profile of strengths and weaknesses. 8. May have dropped out of high school and returned later; may have unfinished or partial degrees. May have dropped out of classes and not attended classes. 9. May have difficulty accepting criticism, stubbornness, and fixation on topics. 10. Having problems expressing themselves clearly, and talking and laughing very loudly, though there is little to laugh about. 11. Attention to details. 12. Tends to burn bridges with people and may rebel against authority. 13. Considered a "loner" and needs more time than others to be alone; has difficulty socializing for extended periods; wants to spend time socializing but is overwhelmed by too many people. 
Does not just focus around the clock on a special interest that may involve the person's career. Tends to be easily overwhelmed by noises and in social situations and then withdraws to a quiet spot; needs more time alone than others. 14. Has a high sense of justice and fairness and is too honest. 15. On the one hand, they can be witty and sarcastic and tend to be more childish; on the other, they can take things very literally and may not understand sarcasm, irony, or humor from other people. May use humor to overcome social obstacles by playing tricks on others. 16. Have a remarkable power of concentration and can focus for long periods of time on the special interest without eating or drinking. 17. Unusual tolerance for unconventional and unorthodox thinking and people. 18. May dress differently than their peers; often appear eccentric; dress more for comfort than appearance. 19. May invent limericks and send them to people they care about. 20. May feel things profoundly and experience intense emotions. From: Autism Awareness Centre Inc. As I discuss in section 3, several caveats exist in diagnosing a dead person retrospectively. The danger with historical retro-diagnosis is that the details of Einstein's thinking process are not always precise, and we cannot base a diagnosis on a few details extracted from a biography. Indeed, as I show in this paper, medical historians object to this practice, saying that experts usually read a biography, extract some passages dealing with medicine, and come up with a diagnosis. What is even more insidious is that retro-diagnosis is made by people who are amateur historians rather than historiographically trained scholars. They therefore often draw on dubious sources to make historical diagnoses. Further, people retrospectively diagnosing historical figures are susceptible to biases. A controversy has erupted over the retro-diagnosis of Einstein. 
Historians of science have fiercely objected to this trend of diagnosing deceased scientists retrospectively by reconstructing from scant evidence. Despite the historians' warning, Einstein has been constantly diagnosed with Asperger's syndrome. In the end, in section 4, I show that there is a twist to the story. It turns out that historians are susceptible to biases, precisely like any other scholar. ## 2 Baron-Cohen diagnoses Einstein It all started when Simon Baron-Cohen, a professor of developmental psychopathology and a world expert on autism at Cambridge University, suggested that Einstein might have shown signs of Asperger's syndrome [3]. Baron-Cohen supports his thesis by 1) quoting an account by Einstein's son Hans Albert written by Peter Michelmore in his biography of Einstein, _Einstein, Profile of the Man_,2 and 2) cherry-picking Einstein's idiosyncratic habits from Roger Highfield and Paul Carter's book, _The Private Lives of Albert Einstein_.3 Baron-Cohen writes [3]: Footnote 2: Michelmore begins his book by saying: ”In February, 1962, I spent two days with Hans Albert Einstein in his home overlooking San Francisco Bay. Hans Albert, fifty-seven, the older son of Albert Einstein [...]. He had never discussed his father before with any writer, at least not in depth. But he answered all my questions and waited while I wrote down the answers.” Footnote 3: The authors base their stories on family members’ reports and memories of people who knew Einstein. Einstein was described as "lonely and dreamy" as a child, with difficulty in making friends. He was said to prefer ”solitary and taxing” games, such as complex constructional play with blocks or making houses of cards up to fourteen stories high. He would ”softly repeat every sentence he uttered—a habit he continued until the age of seven.” He was still not considered fluent in speech at the age of nine. He was also a loner: ”I’m not much with people,” he would say. 
”I do not socialize because social encounters would distract me from my work and I really only live for that, and it would shorten even further my very limited lifespan.” This is almost verbatim what Highfield and Carter write in their book. 4 Footnote 4: The first part of Baron-Cohen’s quotation is based on the following passage from Highfield and Carter [20]: By his description [to Bela Kornitzer], Einstein was a lonely and dreamy child who did not easily find companions. He would avoid the rough-and-tumble games when the children of relatives came to play in the Einsteins’ garden, [...]. Besides assembling complicated constructions with his building blocks, he made houses of cards up to fourteen stories high. [...] After starting to talk comparatively late, he would softly repeat every sentence he uttered - a habit that continued until he was seven. Even when he was nine he lacked fluency of speech. The problem seems to have been as much a reluctance to communicate as any inability to do so. The second part of Highfield and Carter’s paragraph ("Besides assembling [...]") is based on Maja Winteler-Einstein's (Einstein's sister) biography of her brother [39]. The second part of Baron-Cohen's quotation is based on the following passage from Highfield and Carter [20]: 'I'm not much with people, and I'm not a family man,' he told Salaman. 'I want [...]' In his lecture "Scientific talent and autism: is there a connection?", Baron-Cohen merely repeats the arguments put forward in his previous text 5 and adds [4]: Footnote 5: In the case of Einstein, says Baron-Cohen, "if you look at his childhood, he is described as being alone, having no friends, and not being interested in mixing and socializing. Apparently, he was late to talk, a language delay. Apocryphally he didn't speak until he was five years old. And even then, when he did start to speak, his speech showed echolalia. 
And as an adult and as a distinguished scientist, he said: 'I do not socialize because that would distract me from my work, and I really only live for that.' So he wanted to be away from people and to only focus (some people would say obsessionally) on the world of physics, and he also had a strong interest in music when he wasn't doing physics. And he said: 'Music is a way for me to be independent of people.' So you can see that this was a man who didn't want to spend time socializing and instead wanted to focus on physics and music. Again, Baron-Cohen cherry-picks quotations from Highfield and Carter [20] and Michelmore [29].6 Footnote 6: The quotation: ”[...] music is a way for me to be independent of people” was taken from Michelmore [29]: Einstein worked alone now, so music was the only way to get close to him. That was how Manfred Clynes ”reached” Einstein. [...] Helen Dukas called to invite me [Clynes] to supper. [...] Einstein himself was rather jovial. Over the meal of meat and vegetables and for two hours afterward, we talked about music. He [...] Barbara Wolff - who had a long tenure at the Albert Einstein Archives and is an editor in the Einstein Papers Project - and Hananya Goodman write [40]: Some of the characterizations of AS [Asperger's syndrome] described in the paragraph above actually apply well to the young Albert as we know him from Maja's and Max Talmey's recollections. Both Maja and Talmey describe a boy who took little interest in boisterous games and, in general, in his peers, a boy who would concentrate patiently on elaborate constructions with building blocks or playing cards, delve into books and tricky arithmetic problems or play the violin. A sort of glass pane, as he called it many years later, separated him from his fellow human beings. 
Had such "social phobia" then been classified as a personality disorder, and had his parents and doctors felt the need to 'heal' the boy by making him conform to some norm, Albert might not have become Einstein. It seems that Wolff and Goodman agree with Baron-Cohen because, according to the latter, Albert has become Einstein because he is an Asperger. ## 3 Historians object to diagnosing Einstein with autism I think we cannot diagnose a dead person. Historians of science and medicine have fiercely objected to Baron-Cohen's retrospective diagnosis and have generally leveled criticism at clinicians, psychologists, and non-experts for retrospectively diagnosing disorders and diseases in deceased scientists. Below I provide the major objections made by historians: 1. Historians argue that people sometimes draw on dubious sources to make retrospective diagnoses [26]. For instance, Daniela Caruso, professor of law from Boston University, diagnosed Einstein with Asperger's syndrome based on a children's book. She writes [8]: Post-mortem diagnoses are doubtful, but Albert Einstein's life story, which begins with tales of delayed speech and abysmal performance at school, suggests that the most accomplished scientist of all time might have suffered from Asperger-like symptoms. Caruso refers in a footnote to: "Don Brown, _Odd Boy Out: Young Albert Einstein_ (2004)" [8]. _Odd Boy Out: Young Albert Einstein_ is a children's book by Don Brown. I am disappointed with the cheapening of history. 2. The retrospective diagnosis is a hobby of clinicians interested in history who enjoy testing their diagnostic acumen on famous historical figures. The physicians base their diagnosis on scant evidence, usually described in a non-medical context. They uncover intriguing medical secrets in reports that belong to an accessory section for incidental topics or letters to the editor in medical journals. This practice raises ethical issues; in particular, the dead scientists cannot say no. 
Retro-diagnosis can expose personal details that no one in Einstein's time could have known or understood. The retrospective diagnoses could pose ethical problems or questions that, if revealed, may cause harm to the scientist's living descendants and relatives. The diagnosing physicians strongly violate the principles of the medical profession because they give their retrospective opinion on a patient they have never seen nor examined [31], [26], [25]. Baron-Cohen replied to the above criticism in his lecture "Scientific talent and autism: is there a connection?" [4]: This biographical approach is very interesting, but [...] I've suggested that this might be a very unreliable way to approach the question because biographies are always fragmented. We only have partial information about what the person was like, information that has survived historically. The person isn't here to speak up for themselves, so we don't really know whether, if we saw them today alive, they would meet the criteria of autism or Asperger's syndrome. But certainly, it's pointing at the connection between great scientific talents and autism or Asperger's". There are two problems here: the first problem is the impossibility of verifying or falsifying the retrospective diagnosis for obvious reasons: we cannot examine and test the historical subject. And the second problem is ethical: publishing a diagnosis of a patient with whom the medical expert never had a physician-patient relationship and without consent. One could argue that historical celebrities are immune from privacy protection because their lives are open to the public. And because they are long dead, the subject of any potential harm no longer exists. But certain defenders of the celebrity's image and descendants of the historical figure might object to the publication of a diagnosis that potentially stains the reputation of the historical figure [31]. 
John Stachel gave a name to people who care about Einstein's legacy (what he considered as the staunchest defenders of his reputation); he called them "keepers of the flame".7 Footnote 7: Stachel writes in his book, _Einstein From ’B’ to ’Z’_: ”I soon became aware of another peril involving loss of boundaries: the danger of becoming a ‘keeper of the flame’ rather than a seeker of the truth.” [37] As a result of criticism, Baron-Cohen admitted there was a problem diagnosing a person after his death because one needs the person's consent. In his 2020 book, _The Pattern Seekers: A New Theory of Human Invention_, Baron-Cohen attenuates his argument [5]: "Some hyper-systemizers,8 in a range of fields, have been described [by the mathematician Ioan Mackenzie James from Oxford University] as autistic. For example, [...] Albert Einstein and Henry Cavendish in the field of physics have all been described [by James] as autistic. In my view, it is unhelpful to speculate if someone - living or not - might be autistic, since a diagnosis is only useful if the person is seeking help and is struggling to function. Diagnosing someone - living or not - on the basis of fragmentary biographical information is unreliable and arguably unethical since diagnosis should always include the consent of the person and be initiated by them. And from a scientific perspective, hyper-systemizing does not automatically mean you're autistic". Footnote 8: Hyper-systemizing is part of the cognitive style of autistic people. [6] Indeed, on January 17, 1927, Hugo Freund wrote to Einstein to ask if he would allow himself to be psychoanalyzed ([12], Doc. 457). It is not known if an answer was sent, but on the letter, in German in Einstein's handwriting, there is the following draft of a reply [12], Doc. 458; [14]: "I am sorry not to be able to comply with your request, as I would very much like to remain in the darkness of not having been analyzed." 
In other words, Einstein refused to be diagnosed and psychologically analyzed. Notwithstanding that, Baron-Cohen and others have retrospectively diagnosed Einstein. But in the above text, Baron-Cohen tries to shrug off the diagnosis of Einstein with Asperger's syndrome, saying that James described Einstein and others as autistic. In 2012, Baron-Cohen went so far as to publish a paper with several Cambridge University students having Asperger's syndrome and write: "Einstein, Mozart, Newton, Wittgenstein, and others (predominately male, white, and deceased) are identified posthumously with AS and brilliance" [19]. This sentence is disturbing because it insinuates that the brilliant people are predominantly white males. Baron-Cohen and his students intertwine two diagnoses - Asperger's syndrome and brilliance - into white males. 3. Let us move on to the third argument. Historical diseases and conditions must be interpreted in their historical context. Retrospective diagnosis is anachronistic because people try to diagnose a disease or disorder of the past in contemporary terms. This is called "anachronistic diagnosis." Critics of clinicians making retro-diagnosis say this kind of diagnosis can be misleading. First, medical experts arrive at a diagnosis of a historical figure using modern diagnostic criteria; they reason that if historical figures had all these symptoms, then they could have had these specific conditions. But in the past, people had different habits and lifestyles than we do [31]. Hence, we should investigate social diagnosis, i.e., the name that people in the past allocated to a condition [13]. If the social circumstances allowing us to perceive autism did not exist before some point relatively early in the twentieth century, then to what extent can we say the condition existed previously? [28] Second, medical knowledge itself changes over time. 
Writing historical papers on famous patients is problematic because clinicians regard the level of scientific knowledge at the time of writing but assess the course of an illness in the past.9 Footnote 9: For instance, the historical figure, Polish composer Frédéric Chopin, was first (1899) diagnosed as having tuberculosis; then, in 1961, allergic conditions and valvular stenosis were offered as a better explanation for his condition. In 1987 he was diagnosed with cystic fibrosis; in 1994, other genetic defects were suggested, such as alpha-1-antitrypsin-deficiency [25]. For example, Temple Grandin is diagnosed with autism and has a Ph.D. in animal science. She concludes, "Einstein had many traits of an adult with mild autism, or Asperger's syndrome." And [18]: As a child, Einstein had many of these traits. He did not learn to speak until he was three. In a letter to a mother of an autistic child, he admitted to not being able to learn to speak until late and that his parents had been worried about it. This passage is anachronistic. How does Grandin know that both the child and Einstein were autistic? Children were not yet diagnosed as autistic in Einstein's lifetime. 4. Historians object to a retrospective diagnosis that does not provide new medical knowledge. This is as opposed to research based on historical sources that will help modern medical researchers understand the spread of diseases around our planet during human evolution and help us plan for unexpected health events. There are many good reasons to study diseases and disorders in the past. For instance, epidemiological analysis of data from historical smallpox outbreaks has helped appropriate authorities plan for possible outbreaks in the future [30]. To this, clinicians have responded by saying that studying historical figures might help to determine how prevalent autism was in previous generations [1]. But then historians try to rebut this claim - see objection 3. 
Concerning autism in previous generations, let us return to Temple Grandin. Grandin compares herself to Einstein: "Like Einstein, I am motivated by the search for intellectual truth." She then further elucidates on this point [18]: When he developed the theory of relativity, he imagined himself on a beam of light. His visual images were vaguer than mine, and he could decode them into mathematical formulas. My visual images are extremely vivid, but I am unable to make the connection with mathematical symbols. Einstein's calculation abilities were not phenomenal. He often made mistakes and was slow; but his genius lay in being able to connect visual and mathematical thinking. Einstein's dress and hair were typical of an adult with autistic tendencies, most of whom have little regard for social niceties and rank. When he worked at the Swiss patent office, he sometimes wore green slippers with flowers on them". The above passage abounds in errors. But I nevertheless want to raise two comments on this passage: First, the people purporting to compare themselves to Einstein fail to mention that his disheveled exterior reflected his inner humility. As he once said: "I speak to everyone in the same way, whether he is the garbage man or the university president" [24]. This is typical of people with Asperger's syndrome, but by the same token, this behavior is also typical of humble people. Hence, I suggest _not_ comparing yourself to Einstein if only for the many things you don't know about him. You can't know everything about Einstein because it is impossible to read _all_ the primary sources. And even if you manage to read all that, many documents have gone missing, and there is always hope of finding new lost letters, diaries, and manuscripts. I will give you an example. Einstein once told Salaman: "Very few women are creative. I should not have sent a daughter of mine to study physics. I'm glad my wife doesn't know any science, although my first wife did." 
Grandin (a female scholar), do you still want to compare yourself to Einstein? This might be a slip of the tongue because Einstein admired several women and female physicists and mathematicians of his time, e.g., the brilliant genius physicist Marie Curie and genius mathematician Emmy Noether. Over the years, Einstein made different statements that depended on the people he spoke with. He would say one thing in the presence of distinguished scientists, cooperating in tributes without questioning the narratives, and would say quite the opposite on other occasions. Thus, I would be extremely careful when comparing someone to Einstein. Second, after Einstein's rise to fame, Max Fluckiger wrote in his book _Einstein in Bern_ that Einstein often appeared in green slippers trimmed with flowers at work. The patent office officials called him "The man with the green slippers." A colleague also recalled that Einstein showed up at the patent office one day with a saw and proceeded to shorten the legs of his chair because it was not adjustable and was too high for him [17]. Abraham Pais described Fluckiger's book as containing several reproductions of rare documents about Einstein's younger days. But "The text contains numerous inaccuracies" [32]. In other words, Fluckiger concocted the above stories out of things that never happened. 5. Historians argue that it is often unclear how a highly specific diagnosis would make any difference in the scholarship of the historical figure in question [31]. It is asked: what is the objective of retrospectively diagnosing Einstein? What goals does it serve? What justifies the retrospective diagnosis of historical figures? What is the reason modern researchers retrospectively diagnose a historical figure as autistic? Does the diagnosis of Einstein as autistic deliver new medical insight into autism? Does it help us understand how Einstein created the theory of relativity and how he arrived at his significant discoveries? 
Ioan James writes [23]: "In the case of Einstein, we can conclude that he did have Asperger's syndrome." James has further stated: "Although it seems to be widely accepted that Einstein had the syndrome, none of the many detailed biographies mentions this" [22]. First, such a diagnosis does not make any difference in the scholarship of Einstein and the pathway to his theories. Second, not only do none of the "biographies mention this," several biographies even try to denounce it, as I show in the next section, claiming it is an utter myth. ## 4 Historians try to debunk myths about Einstein It seems to me that in trying to debunk myths, some historians hobble the narrative of extraordinary and creative scientists in that they extend the life of other myths. The issue is that it is not so much that historians endeavor to debunk myths that bothers me; it is how they do so. Although historians' critique of Baron-Cohen and others includes debunking myths, it piggybacks on another myth: presenting Einstein as a dull, conscientious, agreeable person and an industrious and socially adept individual better suited to Thomas Kuhn's incremental "normal science." First, when I wrote my first book, I was told that Albert Einstein, his sister Maja Winteler-Einstein, and the family members had exaggerated the story of Albert, who developed slowly, learned to talk late, and whose parents thought something was wrong with him. I was further told that, of course, there is no doubt that these stories have a grain of truth, and Maja and Albert recount their recollections in all sincerity, but these stories sound like family tales and may be exaggerated. Moreover, Einstein's friends and assistants (in interviews and correspondences) contributed to spreading this myth. They thus inspired biographers to create a widespread mythical public image of Albert Einstein that embodies stories about Einstein, "the retarded genius." 
We first conjecture that Einstein and his family members had exaggerated the story of Albert, "the retarded genius," and then show that the "real" Albert was a different matter. For instance, Albert's mother recognized that her son was talented, and it does not seem that she thought he would develop into some eccentric. But seen from a distance of time, almost ten years after writing my book [38], the above argument sounds improbable because Einstein and his family are accused of inventing stories to create a myth that serves a specific purpose. A different way of considering the situation is presenting Einstein as having a special gift of friendship. He may not have been working with a large group of scientists. Still, he was deeply embedded in the academic community and was a modern professional scientist who earned his living through his work as a professor at the university. Historians debunk myths by saying that Einstein was not an isolated genius "working by himself in an attic with pen and paper" [16]. Einstein had close friendships with physicists and a special gift of collegiality and was very well-adjusted. Then the argument goes on like this. Although Einstein was sometimes rude or insensitive, he was not unlike many individuals. Some of his social behaviors and attitudes may seem unconventional today, but they were not unusual in his day and milieu. He was very sociable and was emotionally involved with friends and family, even if he savored solitude when it was within his reach to pursue his science [7].10 Footnote 10: I would like to correct some of the stigmas and prejudices about Asperger’s syndrome found in _An Einstein Encyclopedia_. Asperger’s syndrome (autism) is neither a disease nor a mental or emotional disorder. Asperger’s syndrome (autism) is also not a disorder people ”suffer” from. 
Asperger’s syndrome is _not_ any of the following: a defect, developmental anomaly, pathological conduct, a mental illness, schizophrenia, or bouts of severe depression. In his biography of Einstein, _His Life and Universe_, Walter Isaacson elucidates why he is not convinced of the diagnosis of Einstein with Asperger's syndrome: "Even as a teenager, Einstein made close friends, had passionate relationships, enjoyed collegial discussions, communicated well verbally, and could empathize with friends and humanity in general" [21]. 11 Footnote 11: Isaacson writes: ”A Google search of Einstein + Asperger’s results in \(146,000\) pages. I do not find such a long distance diagnosis to be convincing” [21]. I did the exact search, and I found only \(25\) pages. Still, it does not mean that these \(25\) pages are convincing. In my opinion, the above narrative, very unfortunately, strips Einstein of his great sense of humor, of his being stubborn and unconventional and sticking out his tongue at his pursuers to express his annoyance (the photograph that has been reproduced endlessly). Einstein's former Cantonal School classmate, Hans Byland, one year older than Einstein, painted a verbal portrait of Einstein as a teenager: Einstein as a young man could not be fitted into any pattern, an impudent Swabian, a restless spirit; nothing escaped the sharp gaze of his large bright brown eyes. "A sarcastic curl of his rather full mouth with the protruding lower lip did not encourage Philistines to fraternize with him." His attitude towards the world was that of a laughing philosopher, and his witty mockery pitilessly lashed out at any conceit or pose. Einstein made no bones about voicing his personal opinions, whether offensive or not, and he had a courageous adherence to the truth [36]. Of course, one can call this "communicated well verbally." We have Einstein's rude letter in which he lashed out at the editor of _The Physical Review_: "We (Mr.
Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it was printed. I see no reason to address your anonymous expert's - in any case, erroneous - comments. Based on this incident, I prefer to publish the paper elsewhere" [27]. And Einstein attacked his opponents at the Hebrew University and termed Judah Leon Magnes and his supporters "downright vermin." On another occasion, he called the university a "bug-infested house" [34]. It, therefore, seems that Einstein made no bones about voicing his opinions, whether offensive or not. It seems to me that the supporters of the view of Einstein as a very well-adjusted individual not only fail to demolish myths; their narrative also repudiates Einstein's patent office years. Einstein's friends and colleagues at the patent office, his table and drawer, pen and sheets of paper, and especially the thoughts he hatched there were antipodal to an academic community. Einstein later wrote that, unlike himself, Marcel Grossmann "was not a vagabond and loner" [15]. Einstein wrote to Mileva Maric: "I always find that I am in the best company when I am alone, except when I am with you" [10] (Einstein to Maric, Doc 128, Dec 17, 1901). Indeed, unlike Einstein, Grossmann was quickly given an assistant post under Prof. Wilhelm Fiedler; Grossmann succeeded Fiedler as a professor at the Polytechnic in 1907. On the other hand, three years after the publication of the miraculous year papers - special relativity, the first \(E=mc^{2}\) paper, quanta, and Brownian motion - Jakob Laub wrote to Einstein, "I must confess to you that I was surprised to read that you have to sit in an office for eight hours a day. But, history is full of bad jokes" (Laub to Einstein, [11], Doc. 91, March 1, 1908). If Einstein had this unique quality of being well-adjusted and deeply embedded in the academic community, as purported, why did he sit in the patent office for so long?
None of this makes sense in the picture of the well-adjusted and embedded-in-the-science-community Einstein. A narrative of a socially adept Einstein does not even make sense in the context of the later Einstein, sitting in his office in Princeton, let alone sitting in the patent office. In effect, the rebellious Einstein is being pressed into the mold of a conscientious and agreeable one. But we don't need Karl Popper to tell us that Einstein was one of the most exciting and imaginative scientists ever to live; he was intelligent and creative, a vocational revolutionary scientist. And this is of utmost importance. He was brilliant, inspirational, abrasive, and rebellious, precisely the type of truth-seeker to create a scientific revolution. 12 Footnote 12: See the discussion in [9]. Einstein thought that certain personal matters of his life should not be made public. That is the reason he did not like biographies that dealt too much with his personal life [38]. But private letters are profusely found in the _Collected Papers of Albert Einstein_ (_CPAE_); for instance, Einstein's love letters to his first wife, Mileva Maric; Einstein's later correspondence with Maric and his two sons, Hans Albert and Eduard; Einstein's exchange of letters with his second wife and first cousin, Elsa Lowenthal Einstein; correspondence with Einstein's sister, Maja; letters Einstein wrote to his extra-marital lover Betty Neumann, etc. The reasons behind the decision to publish Einstein's private correspondence are that Einstein is not a private person anymore; his life is open to the public. Einstein has ceased being a private individual and belongs to history. Since Einstein is long dead, publishing his letters cannot cause him any harm. Rather, he is a historical scientist in the same category as Newton and Galileo. We do not think of the privacy of such historical figures.
We want to know as much as we can about them because, without full knowledge, one cannot fully understand the evolution of their ideas [2]. But then, based on Einstein's private correspondence, the biography written by his sister Maja (an abridged version of which was published in the _CPAE_), and other first-hand evidence, people have written biographies and books on Einstein's private life. And based on these books, other people, in turn, are retrospectively diagnosing Einstein with all sorts of things. Historians who care about Einstein's legacy then fiercely object to this abuse of Einstein's privacy and, in turn, engage in (futile) efforts to protect his privacy. Hence, we have a vicious circularity. ## Acknowledgement This work is supported by ERC advanced grant number 834735.
2306.10755
Unsupervised Open-domain Keyphrase Generation
In this work, we study the problem of unsupervised open-domain keyphrase generation, where the objective is a keyphrase generation model that can be built without using human-labeled data and can perform consistently across domains. To solve this problem, we propose a seq2seq model that consists of two modules, namely \textit{phraseness} and \textit{informativeness} module, both of which can be built in an unsupervised and open-domain fashion. The phraseness module generates phrases, while the informativeness module guides the generation towards those that represent the core concepts of the text. We thoroughly evaluate our proposed method using eight benchmark datasets from different domains. Results on in-domain datasets show that our approach achieves state-of-the-art results compared with existing unsupervised models, and overall narrows the gap between supervised and unsupervised methods down to about 16\%. Furthermore, we demonstrate that our model performs consistently across domains, as it overall surpasses the baselines on out-of-domain datasets.
Lam Thanh Do, Pritom Saha Akash, Kevin Chen-Chuan Chang
2023-06-19T07:57:13Z
http://arxiv.org/abs/2306.10755v1
# Unsupervised Open-domain Keyphrase Generation ###### Abstract In this work, we study the problem of unsupervised open-domain keyphrase generation, where the objective is a keyphrase generation model that can be built without using human-labeled data and can perform consistently across domains. To solve this problem, we propose a seq2seq model that consists of two modules, namely _phraseness_ and _informativeness_ module, both of which can be built in an unsupervised and open-domain fashion. The phraseness module generates phrases, while the informativeness module guides the generation towards those that represent the core concepts of the text. We thoroughly evaluate our proposed method using eight benchmark datasets from different domains. Results on in-domain datasets show that our approach achieves state-of-the-art results compared with existing unsupervised models, and overall narrows the gap between supervised and unsupervised methods down to about 16%. Furthermore, we demonstrate that our model performs consistently across domains, as it overall surpasses the baselines on out-of-domain datasets.1. Footnote 1: Code and data will be available at [https://github.com/ForwardDataLab/UKG](https://github.com/ForwardDataLab/UKG). ## 1 Introduction Keyphrases are short word sequences that describe the core concepts of the text. The prediction of keyphrases for a text is a task that has received much attention recently. It is a crucial problem as its outputs can be useful for a variety of downstream tasks such as building digital libraries Gutwin et al. (1999); Witten et al. (2009), document summarization Litvak and Last (2008), document visualization Chuang et al. (2012) and so on. There are mainly two approaches to keyphrase prediction, namely _keyphrase extraction_Mihalcea and Tarau (2004); Florescu and Caragea (2017); Bennani-Smires et al. (2018) and _keyphrase generation_Meng et al. (2017); Chen et al. (2019); Yuan et al. (2020); Shen et al. (2022). 
Keyphrase extraction _highlights_ keyphrases that appear within the text. On the other hand, keyphrase generation _generates_ keyphrases based on the understanding of the given text and therefore allows predicting absent keyphrases alongside present ones Meng et al. (2017). This ability has made keyphrase generation receive more attention than keyphrase extraction in recent years, as humans also tend to use keyphrases that are absent from the text. Most of the existing keyphrase generation models use manually labeled data for training Meng et al. (2017); Chen et al. (2018, 2019); Yuan et al. (2020); Ahmad et al. (2021). However, obtaining labeled data is often the most expensive component of any machine learning model, and this is the same for keyphrase generation. Compared to labeled data, access to unlabeled data is easier and mostly available. For example, the arXiv dataset Clement et al. (2019) containing metadata (e.g., title, abstract) of 1.7 million research articles is readily available on Kaggle. Therefore, it is more desirable to construct a keyphrase generation model in an unsupervised fashion. Furthermore, in practice, the model may have to encounter texts that come from various domains or even unseen ones. Therefore, another attractive property of a keyphrase generation model is the ability to handle open-domain documents. Considering the above scenario, we propose a new problem called **Unsupervised Open-domain Keyphrase Generation**. Like all keyphrase generation methods, our target model is given a text \(\mathbf{x}\) as input, and as output, it generates a set of keyphrases \(\{\mathbf{y}\}\). Both \(\mathbf{x}\) and \(\mathbf{y}\) are word sequences. Furthermore, our target model should satisfy two requirements: 1) it can be built using only an unlabeled corpus, denoted as \(D\); 2) it can effectively handle inputs from across domains. This is a challenging task because, without labeled data, we cannot directly learn what patterns characterize keyphrases.
Additionally, we also need our model to work across domains. This is difficult because there might exist different patterns for keyphrases for different domains. None of the existing work addresses these challenges. For instance, supervised keyphrase generation models Meng et al. (2017); Chen et al. (2018, 2019); Yuan et al. (2020); Ahmad et al. (2021) not only require manually labeled data for training but are also known to perform poorly when being moved out-of-domain. On the other hand, Shen et al. (2022) propose AutoKeyGen, which uses pseudo-labeled data to train a seq2seq model in a weakly-supervised fashion, thereby removing the need for human annotation effort. However, similar to supervised models, the weakly-supervised approach taken by AutoKeyGen does not enable it to maintain performance in unseen domains. Therefore, to solve our problem, we propose an unsupervised keyphrase generation model that can work across domains. The **key idea** is to modularize a seq2seq model into two modules. The motivation for modularizing is to decompose keyphrase generation into two simpler problems where each of which can be addressed in an unsupervised and open-domain setting. The first module, named the _phraseness_ module, is responsible for generating phrases, while the second module, named the _informativeness_ module, guides the generation toward the phrases that represent the most crucial concepts of the text. The phraseness module is a retrieval-augmented seq2seq model, where the retriever assists the seq2seq component in generating absent phrases alongside present ones. This module can be built in an unsupervised fashion because it leverages noun phrases to index the retriever and to train the seq2seq model, which can easily be obtained using open-sourced software libraries such as NLTK Bird et al. (2009), and therefore does not require human annotation effort. 
Furthermore, the phraseness module can also be built in an open-domain fashion, thanks to 1) the part-of-speech information incorporated into the seq2seq model, which allows copying words to form grammatically correct noun phrases regardless of domain; 2) the fact that the retriever can be further indexed with domain-relevant information, to provide reliable references. The informativeness module is another seq2seq model, where a phrase is likely to be generated if it contains words that are informative to the given text. Inspired by embedding-based unsupervised keyphrase extraction (UKE) methods, we quantify the informativeness of a word and a text based on their closeness in meaning, which is measured via the similarity between their embeddings. We choose this method of evaluating informativeness over other UKE methods (e.g. graph-based, statistics-based) since it supports not only present phrases, but also absent ones. Similar to the phraseness module, the informativeness module can also be built in an unsupervised and open-domain fashion. This is obtained by using a domain-general, unsupervised text embedding model (e.g. Sent2Vec [14]).

Figure 1: Overview of our proposed model

We summarize the contributions of our paper. **Firstly**, we propose a new problem called _unsupervised open-domain keyphrase generation_. **Secondly**, we design a model for solving the problem. Our proposed model is a seq2seq model that consists of two modules: one is responsible for generating phrases, and the other guides the generation towards the phrases that represent the core concepts of the text. **Finally**, we conduct extensive experiments on multiple datasets across domains to demonstrate the effectiveness of our model as we contrast it against multiple strong baselines. ## 2 Proposed method Figure 1 illustrates our proposed framework. We propose a seq2seq model that consists of two modules, namely the _phraseness_ and _informativeness_ modules.
We adopt the two terms _phraseness_ and _informativeness_ from [10], to describe the desirable criteria a keyphrase should satisfy. _Phraseness_ refers to the degree to which a word sequence is considered a phrase, and _informativeness_ refers to how well the phrase illustrates the core concepts of the text. Each of the two modules guarantees a criterion mentioned above. In particular, the phraseness module generates (present and absent) phrases, while the informativeness module guides the generation toward phrases that describe the core concepts of the text. In the following sections, we will describe in detail the two modules, as well as how they are combined to generate keyphrases. ### Phraseness module In order to generate keyphrases, it is crucial to know how to first generate phrases. We emphasize the difference between a keyphrase and a phrase - the former needs to be informative to the given text, while the latter does not. It has been shown that keyphrases mostly take the form of noun phrases [11]. Also, recent work on keyphrase generation has shown that absent keyphrases can often be retrieved from other texts [23], suggesting that absent phrases can be found similarly. Therefore, a simple solution to obtaining phrases is to extract noun phrases as present phrases and retrieve related noun phrases as absent ones. However, this simple solution may not be optimal. Since the retrieved phrases are originally used in other texts, they may not be suitable to describe the concepts of the given text. We demonstrate this limitation using the example in Figure 2(a). In this example, the absent phrases obtained via retrieval describe concepts related to "topic modeling". However, our desired outputs need to also describe concepts related to "author modeling". The above problem could be mitigated if we also consider the given text alongside the retrieved noun phrases. 
In the example above, relevant phrases such as "author topic distributions" can be generated by combining "author", which is from the given text, and "topic distributions", which is one of the retrieved phrases. With this in mind, we employ a _retrieval-augmented seq2seq model_ as the phraseness module. First, a set of related but absent noun phrases is retrieved, which we will now refer to as _references_. Then, a seq2seq model generates noun phrases based on both the text and the references. #### 2.1.1 Retriever Figure 2 describes the retrieval of references given a text. To obtain references for the input, we leverage existing noun phrases observed in other documents. We assume that a noun phrase is related to a text if it occurs in contexts similar to that text. With this in mind, we collect noun phrases from documents in the unlabeled corpus \(D\) to form a phrase bank \(B\). We index each noun phrase \(\mathbf{z}\in B\) with a _context embedding_, denoted as \(c_{\mathbf{z}}\), which is obtained by averaging the embeddings of the documents in which \(\mathbf{z}\) appears. We obtain the embeddings of texts by using Sent2Vec [15], an unsupervised sentence embedding model. To retrieve references for a text \(\mathbf{x}\), we first use Sent2Vec to compute its embedding, denoted as \(v_{\mathbf{x}}\), and then retrieve the top-\(k\) phrases \(\mathbf{z}\) based on the following retrieval score \(R_{\mathbf{x}}(\mathbf{z})=\cos(c_{\mathbf{z}},v_{\mathbf{x}})\).

Figure 2: Illustration of the retrieval of references.

Furthermore, in order to prevent retrieving unreliable references, we filter those whose retrieval scores are smaller than a threshold \(\tau\). We denote the set of references for \(\mathbf{x}\) as \(Z_{\mathbf{x}}\). As mentioned above, we can further index the retriever with other corpora, denoted as \(D^{\prime}\), from different domains.
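The indexing and retrieval steps just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function is a hypothetical stand-in for the Sent2Vec embedder (here a deterministic pseudo-random unit vector), and all function names are illustrative.

```python
import math
import random

def embed(text, dim=16):
    # Hypothetical stand-in for the Sent2Vec embedder:
    # a deterministic pseudo-random unit vector per text.
    rng = random.Random(text)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def build_phrase_bank(corpus):
    """corpus: list of (document_text, noun_phrases). The context embedding
    c_z of a phrase is the mean embedding of the documents it occurs in."""
    groups = {}
    for doc, phrases in corpus:
        v = embed(doc)
        for p in phrases:
            groups.setdefault(p, []).append(v)
    return {p: [sum(col) / len(vs) for col in zip(*vs)]
            for p, vs in groups.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(text, bank, k=3, tau=0.1):
    """Top-k phrases by R_x(z) = cos(c_z, v_x), keeping only scores >= tau."""
    v = embed(text)
    scored = sorted(((cosine(c, v), p) for p, c in bank.items()), reverse=True)
    return [p for s, p in scored[:k] if s >= tau]
```

Adding a new corpus \(D^{\prime}\) then amounts to calling `build_phrase_bank` over the union of corpora, which updates both the phrase set and the context embeddings.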
To do this, all we need to do is update the phrase bank \(B\) with new phrases from \(D^{\prime}\) and update the context embeddings of every phrase that occurs in both \(D\) and \(D^{\prime}\). #### 2.1.2 Seq2Seq model **Input representation.** The seq2seq model takes as inputs not only the source text \(\mathbf{x}\) but also its references \(Z_{\mathbf{x}}\), to generate phrases. The text and its references are combined into a single input \(\mathbf{\tilde{x}}\), defined as \[\mathbf{\tilde{x}}=\texttt{[BOS]}\ \mathbf{x}\ \texttt{[BOR]}\ Z_{\mathbf{x}}\ \texttt{[EOR]}\ \texttt{[EOS]} \tag{1}\] where [BOS] and [EOS] are respectively the beginning- and end-of-sentence tokens. The two tokens [BOR] and [EOR] signal the start and end of the reference block. In addition, the references are separated by a [SEP] token. **Model architecture.** We employ a Transformer Vaswani et al. (2017) with copy mechanism Gu et al. (2016); See et al. (2017) as the architecture of our seq2seq model. First, the encoder receives the word embeddings of the tokens in \(\mathbf{\tilde{x}}\), producing a sequence of encoder hidden states \(\mathbf{h}=\{h_{i}\}_{i=1}^{|\mathbf{\tilde{x}}|}\). The decoder takes the embeddings of the previously generated words \(\mathbf{y}_{<t}\) and the encoder hidden states, outputting the decoder hidden state \(s_{t}\). For each input, we build an extended vocabulary \(V_{\mathbf{\tilde{x}}}\), which is the union of the decoder's vocabulary \(V\) and the words in the augmented input \(\mathbf{\tilde{x}}\). Finally, we compute the phraseness probability of predicting a word from \(V_{\mathbf{\tilde{x}}}\) as \(P_{\text{pn}}(y_{t}|\mathbf{y}_{<t},\mathbf{\tilde{x}})=p_{\text{gen}}P_{\text{pn}}^{V }(y_{t}|\mathbf{y}_{<t},\mathbf{\tilde{x}})+(1-p_{\text{gen}})P_{\text{pn}}^{C}(y_{t}| \mathbf{y}_{<t},\mathbf{\tilde{x}})\).
Here, \(P_{\text{pn}}^{V}(y_{t}|\mathbf{y}_{<t},\mathbf{\tilde{x}})=\texttt{softmax}(W^{V}s_{t})\) is the distribution over the word vocabulary \(V\), and \(p_{\text{gen}}=\texttt{sigmoid}(W_{s}^{\mathbf{\tilde{x}}}s_{t}+W_{y}^{\mathbf{\tilde{x}}}y_{t-1})\) is the soft switch between generating and copying. All the \(W\) terms are trainable parameters, and we omit the bias terms for less cluttered notation. We incorporate part-of-speech information to copy words from \(\mathbf{\tilde{x}}\). More formally, \(P_{\text{pn}}^{C}(y_{t}=w|\mathbf{y}_{<t},\mathbf{\tilde{x}})=\sum_{\tilde{x_{i}}=w}a_{i}^{t}\), where \(a_{i}^{t}=\texttt{softmax}(e_{i}^{t})\), and \(e_{i}^{t}=\texttt{FF}_{h}(\tilde{h_{i}}^{T})\texttt{FF}_{s}(\tilde{s_{t}})\). Here, \(\tilde{h}_{i}=\texttt{concat}(h_{i},l_{\tilde{x}_{i}})\) is the encoder hidden state of \(\tilde{x}_{i}\) enhanced with its part-of-speech embedding \(l_{\tilde{x}_{i}}\). Similarly, \(\tilde{s_{t}}=\texttt{concat}(s_{t},l_{y_{t-1}})\) is the decoder hidden state enhanced with the part-of-speech embedding of the previously generated word. \(\texttt{FF}_{h}\) and \(\texttt{FF}_{s}\) denote feedforward neural networks, whose purpose is to project \(\tilde{h_{i}}\) and \(\tilde{s_{t}}\) into the same semantic space. **Model training.** For every document \(\mathbf{x_{i}}\in D\), we maximize \(\log P_{\text{pn}}(\mathbf{y}=\mathbf{z}|\mathbf{\tilde{x}})\), where \[P_{\text{pn}}(\mathbf{y}=\mathbf{z}|\mathbf{\tilde{x}})=\prod_{t=1}^{T}P_{\text{pn}}(y_{t }=z_{t}|\mathbf{y}_{<t},\mathbf{\tilde{x}}) \tag{2}\] for the phrases \(\mathbf{z}=\{z_{t}\}\), which include the present noun phrases and the references. To encourage the model to generate absent phrases instead of just copying them from the references, we randomly mask some references and train the model to generate them. ### Informativeness module Knowing how to generate phrases is not sufficient to obtain keyphrases.
It is also important to guide the generation towards the phrases that are informative to the input. Previous work on unsupervised keyphrase extraction offers multiple classes of methods, namely graph-based, statistics-based and embedding-based, for evaluating informativeness.

Figure 3: Examples of phrases (for more details, see Section 5).

Graph-based and statistics-based methods are not suitable in our setting. These methods utilize only in-text information and therefore cannot determine the informativeness of absent phrases. On the other hand, embedding-based methods evaluate the informativeness of a phrase based on its closeness in meaning with the input text. As a result, these methods can support both present and absent phrases. We therefore adopt the idea of embedding-based methods in building our informativeness module. Let us define \(S(\mathbf{a},\mathbf{b})=\max(0,v_{\mathbf{a}}^{T}v_{\mathbf{b}})\) as the similarity score between two pieces of text, where \(v_{\mathbf{a}}\), \(v_{\mathbf{b}}\) are embeddings obtained using Sent2Vec. Using this score, we define the informativeness distribution \(P_{\text{in}}(\mathbf{y}|\mathbf{x})\) by decomposing it into conditional distributions of each word given the previous context. More formally, \(P_{\text{in}}(\mathbf{y}|\mathbf{x})=\prod_{t=1}^{T}P_{\text{in}}(y_{t}|\mathbf{y}_{<t}, \mathbf{x})\), where \[P_{\text{in}}(y_{t}=w|\mathbf{y}_{<t},\mathbf{x})\propto\begin{cases}S(w,\mathbf{x}),& \text{if }w\neq\texttt{[EOS]}\\ S(\mathbf{y}_{<t},\mathbf{x}),&\text{otherwise}\end{cases} \tag{3}\] The probability \(P_{\text{in}}(y_{t}=w|\mathbf{y}_{<t},\mathbf{x})\) is normalized over the extended word vocabulary \(V_{\mathbf{\tilde{x}}}\), which is the same one used by the phraseness module. Intuitively, a word has a high probability of being generated if that word has close meaning to the text. The [EOS] token is likely to be generated if the currently generated phrase \(\mathbf{y}_{<t}\) already forms an informative phrase.
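One step of the informativeness distribution in Eq. (3) can be rendered as a toy sketch. The `embed` function is again a hypothetical stand-in for Sent2Vec, and the vocabulary is a small illustrative list rather than the real extended vocabulary:

```python
import math
import random

def embed(text, dim=16):
    # Hypothetical stand-in for Sent2Vec (deterministic pseudo-random unit vector).
    rng = random.Random(text)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def S(a, b):
    # S(a, b) = max(0, v_a . v_b); inputs are raw strings.
    return max(0.0, sum(x * y for x, y in zip(embed(a), embed(b))))

def informativeness_step(vocab, prefix, text):
    """P_in(y_t = w | y_<t, x): proportional to S(w, x) for ordinary words,
    and to S(y_<t, x) for [EOS]; normalized over the vocabulary."""
    scores = {}
    for w in vocab:
        if w == "[EOS]":
            scores[w] = S(" ".join(prefix), text) if prefix else 0.0
        else:
            scores[w] = S(w, text)
    z = sum(scores.values())
    if z > 0:
        return {w: s / z for w, s in scores.items()}
    return {w: 1.0 / len(vocab) for w in vocab}  # degenerate fallback
```

Note how the score of [EOS] depends on the phrase generated so far, so the module can decide that a partially built phrase is already informative enough to stop.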
### Combining phraseness and informativeness Generating keyphrases requires us to enforce both phraseness and informativeness on the output sequence. A simple solution is to adopt the approaches taken by existing unsupervised keyphrase extraction methods, which enforce the two criteria sequentially. In particular, they either 1) form phrases first, then choose those that are most informative as keyphrases; or 2) choose informative words first, then form keyphrases using these words. However, both approaches may not be optimal. The first approach may include uninformative words in the prediction, while the second rigidly assumes that a keyphrase should only contain keywords. We illustrate the limitation of these approaches using an example, shown in Figure 2(b). Here, we show the predictions of EmbedRank (Bennani-Smires et al., 2018), which takes approach 1), and TextRank (Mihalcea and Tarau, 2004), which takes approach 2). Both of them fail to predict the golden keyphrase "global illumination". EmbedRank redundantly includes the word "algorithms", while TextRank only outputs "illumination", as "global" is not predicted as a keyword. This problem could be alleviated if both phraseness and informativeness are considered when forming the keyphrase. In the example above, the word "algorithms" should be excluded, since it neither contributes to the informativeness of the phrase, nor is it required to make the phrase understandable. On the other hand, the word "global" may not be among the most informative words to the text; however, this word is essential, as excluding it results in a phrase with a different concept. In light of this, we propose to generate keyphrases, one word at a time, where each word is generated if it is predicted by both the phraseness and informativeness modules. To this end, we propose to combine the two modules in a product-of-experts fashion (Hinton, 2002).
In particular, the conditional distribution of a keyphrase given a text is defined as follows \[\begin{split} P_{\text{kp}}(\mathbf{y}|\mathbf{x})&\propto P_{ \text{pn}}(\mathbf{y}|\mathbf{\tilde{x}})^{\lambda}\cdot P_{\text{in}}(\mathbf{y}|\mathbf{x}) \\ &\propto\prod_{t=1}^{T}P_{\text{pn}}(y_{t}|\mathbf{y}_{<t},\mathbf{\tilde{x} })^{\lambda}\cdot P_{\text{in}}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\end{split} \tag{4}\] where \(\lambda\) is a hyperparameter for balancing the two modules. The idea of combining two language models using a product-of-experts has previously been studied for the task of unsupervised abstractive summarization (Zhou and Rush, 2019). To the best of our knowledge, we are the first to use this idea in unsupervised keyphrase generation. In the above paragraphs, we also discussed why it is a suitable choice. ### Keyphrase decoding To decode keyphrases, we employ beam search based on \(s(\mathbf{y})=-\log P_{\text{kp}}(\mathbf{y}|\mathbf{x})\). As beam search tends to favor shorter keyphrases, we employ a length normalization strategy similar to that described in (Sun et al., 2019), which is to divide \(s(\mathbf{y})\) by \(|\mathbf{y}|+\alpha\), where \(\alpha\) is a length penalty factor. It has been shown in previous work that positional information is useful for the prediction of present keyphrases (Florescu and Caragea, 2017; Gallina et al., 2020). Therefore, it is desirable to incorporate this feature into our model. Furthermore, we found that the model tends to generate absent keyphrases that are entirely new. This behavior may not be desirable for downstream tasks such as document retrieval, where we need to associate documents with common keyphrases.
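In log space, the product-of-experts combination of Eq. (4) together with the length normalization just described reduces to a simple per-candidate score. The sketch below assumes the per-step log-probabilities from the two modules are already available; it is an illustration of the scoring rule, not the authors' beam-search code:

```python
import math

def keyphrase_score(logp_pn, logp_in, lam=1.0, alpha=0.0):
    """Length-normalized s(y) = -log P_kp(y|x), up to a normalizing constant.

    logp_pn, logp_in: per-step log-probabilities of one candidate phrase
    under the phraseness and informativeness modules (same length).
    lam is the balancing hyperparameter lambda; alpha is the length penalty.
    Lower scores are better.
    """
    assert len(logp_pn) == len(logp_in) and logp_pn
    # log P_kp = sum_t [ lam * log P_pn + log P_in ]  (Eq. 4, in log space)
    s = -sum(lam * p + q for p, q in zip(logp_pn, logp_in))
    return s / (len(logp_pn) + alpha)
```

A beam search would rank partial hypotheses by this quantity; setting `lam=0` ignores phraseness entirely, while large `lam` makes grammaticality dominate.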
Based on the above discussion, we propose to rerank the beam search results using the following score \[\hat{s}(\mathbf{y})=\frac{s(\mathbf{y})}{|\mathbf{y}|+\alpha}\times b(\mathbf{y}) \tag{5}\] \[b(\mathbf{y})=\begin{cases}\beta,&\text{if $\mathbf{y}$ is absent and $\mathbf{y}\in B$}\\ 1,&\text{if $\mathbf{y}$ is absent and $\mathbf{y}\not\in B$}\\ \frac{\log_{2}(1+\mathcal{P}_{\mathbf{x}}(\mathbf{y}))}{\log_{2}(1+\mathcal{P}_{\mathbf{ x}}(\mathbf{y}))+1},&\text{if $\mathbf{y}$ is present}\end{cases} \tag{6}\] where \(b(\mathbf{y})\) is an adjustment weight, \(\beta\) is a hyper-parameter for adjusting the scores of absent phrases that exist in the phrase bank \(B\) (\(\beta<1\) indicates that we favor \(\mathbf{y}\in B\)), and \(\mathcal{P}_{\mathbf{x}}(\mathbf{y})\) is the word offset position of the phrase \(\mathbf{y}\) in the text \(\mathbf{x}\). Intuitively, \(b(\mathbf{y})\) favors present keyphrases that appear earlier in the text, and absent keyphrases that exist in the phrase bank \(B\). ## 3 Experiments ### Datasets We use the documents from the training set of KP20K Meng et al. (2017) to train our model and to index the retriever in the training phase. It contains the abstracts and titles of 514k scientific articles. In the testing phase, we utilize 8 datasets, namely SemEval Kim et al. (2013), Inspec Hulth (2003), NUS Nguyen and Kan (2007), Krapivin Krapivin et al. (2009), DUC-2001 Wan and Xiao (2008), OpenKP Xiong et al. (2019), StackExchange Yuan et al. (2020) and KPTimes Gallina et al. (2019). The title and abstract of an article are concatenated to form a testing document. The testing datasets are categorized into _in-domain_ and _out-of-domain_, by measuring the percentage of keyphrase overlap with the training corpus, i.e. the percentage of golden keyphrases in the testing dataset that also appear in some documents in KP20K. We choose the mean value of \(\sim 33\) as a threshold to classify the testing datasets. 
As a result, the in-domain datasets include SemEval, Inspec, NUS, Krapivin and StackExchange, while the other three are out-of-domain. In the testing phase, besides using KP20K, we also use the training set of StackExchange (300k documents) and KPTimes (260k documents) to further index the phrase bank and the retriever. The purpose of adding these additional sources in the testing phase is to test whether or not our model can easily integrate additional information to work in domains unseen during training, without having to be re-trained.

### Baselines & evaluation metrics

**Baselines.** We adopt five unsupervised keyphrase extraction (UKE) algorithms, namely TF-IDF, TextRank2 Mihalcea and Tarau (2004), MultipartiteRank3 Boudin (2018), EmbedRank Bennani-Smires et al. (2018) and Global-Local Rank4 Liang et al. (2021) as baselines.

Footnote 2: [https://github.com/boudinfl/pke](https://github.com/boudinfl/pke)

Footnote 3: See footnote 2.

Footnote 4: [https://github.com/xmliang98/uke_ccrank](https://github.com/xmliang98/uke_ccrank)

Footnote 5: [https://github.com/nltk/nltk/blob/develop/nltk/stem/porter.py](https://github.com/nltk/nltk/blob/develop/nltk/stem/porter.py)

We also compare our model with AutoKeyGen Shen et al. (2022), which is the only previous work on unsupervised keyphrase generation. With permission from the authors, we implemented and report the AutoKeyGen-Copy version. Furthermore, we present CopyRNN Meng et al. (2017) as a supervised baseline. We employ the Transformer-based pointer-generator network for both AutoKeyGen and CopyRNN, with the same settings as described in Appendix A.1. Both AutoKeyGen and CopyRNN are trained on KP20K.

**Evaluation metrics.** We follow the widely-used strategy and separate the evaluation of present and absent keyphrase generation. We employ macro-average F1 and macro-average Recall for evaluating present and absent keyphrase generation, respectively.
We evaluate present keyphrases at the top 3 and 5 predictions, and absent keyphrases at the top 5 and 10. The predictions as well as the ground truths are stemmed using the Porter Stemmer5 Porter (1980) and duplicates are removed before evaluation.

### Keyphrase generation for in-domain cases

Table 2 illustrates the performance of our proposed model and the baselines for the five in-domain datasets. We also display the average performance across datasets.

Table 1: Statistics of testing datasets (name, language, domain type, split sizes, and keyphrase statistics for SemEval, Inspec, NUS, Krapivin, StackExchange, DUC-2001, KPTimes and OpenKP).

**Present keyphrase generation.** For predicting present keyphrases, our model is best or second-best on most datasets. On SemEval, our model is slightly inferior to TF-IDF and Global-Local Rank. The results on Inspec are worth noting, as our proposed model is significantly outperformed by UKE methods. This inferior performance may be due to this dataset not favoring generative methods, as even CopyRNN, the supervised baseline, failed to compete with UKE methods on Inspec. This behavior has also been observed in a recent work [1].
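The evaluation protocol described above (deduplication, truncation to the top-\(k\) predictions, and comparison against the gold set; the paper additionally stems all phrases with the Porter stemmer first) can be sketched per document as:

```python
def f1_at_k(predicted, gold, k):
    """F1@k for one document: deduplicate predictions (order-preserving),
    keep the top k, and compare with the set of gold keyphrases."""
    seen, preds = set(), []
    for p in predicted:
        if p not in seen:
            seen.add(p)
            preds.append(p)
    preds = preds[:k]
    gold = set(gold)
    tp = sum(1 for p in preds if p in gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(preds), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

score = f1_at_k(["neural networks", "neural networks", "deep learning", "svm"],
                ["deep learning", "neural networks"], k=3)  # precision 2/3, recall 1
```

Macro-averaging then simply averages these per-document scores over the test set.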
Although not able to outperform existing methods on all datasets, our proposed model still achieves the best weighted-average results, outperforming the second-best by about 14% for top 3 predictions and 10% for top 5.

**Absent keyphrase generation.** For predicting absent keyphrases, our proposed model outperforms existing work on all datasets. UKE methods cannot be compared with our model, as they only extract present keyphrases. When comparing with AutoKeyGen, we observe that our proposed model has significantly better performance, except for the Inspec dataset where the results are on par. On average, we outperform AutoKeyGen by nearly a factor of two for both top 5 and top 10 predictions.

### Keyphrase generation for out-of-domain cases

One important objective of this work is the proposed model's capability to perform in out-of-domain settings. We show present and absent keyphrase generation performance for the out-of-domain datasets in Table 3. For absent keyphrase generation, we only report results on KPTimes, as DUC-2001 and OpenKP mainly contain present keyphrases.

**Present keyphrase generation.** Our model achieves the best or second-best results on all out-of-domain datasets. Similar to the in-domain cases, our model achieves the best weighted-average results despite not being able to outperform all baselines on all datasets. Of the two unsupervised keyphrase generation methods, our proposed model achieves significantly better results than AutoKeyGen in the out-of-domain setting.

**Absent keyphrase generation.**
It can be seen that AutoKeyGen fails to generate absent keyphrases, with a recall of only \(0.3\%\) for top 10 predictions. On the other hand, our model can recall 3.6% of absent keyphrases. This improvement is significant considering that absent keyphrase generation has been pointed out to be a "very challenging task" [14].

Table 2: Keyphrase generation performance for in-domain datasets. The best and second-best results for each category are bold and highlighted, respectively. For AutoKeyGen, CopyRNN and our model, we run experiments five times with different random seeds and report the average. The subscript denotes the corresponding standard deviation (e.g. \(26.4_{7}\) indicates \(26.4\pm 0.7\)). We report both F1 and Recall in percentage points.

Table 3: Keyphrase generation performance for out-of-domain datasets.

### Comparison to supervised baseline

Although not able to compete with the supervised baseline on the in-domain datasets, our model has narrowed the gap between supervised and unsupervised keyphrase generation methods. In addition, our model shows remarkable performance on out-of-domain datasets, while the supervised baseline shows poor generalization. It can be seen from Tables 2 and 3 that the performance of the supervised baseline plummets on out-of-domain datasets. On the other hand, our model is able to retain performance across domains.

### Ablation study

We perform an ablation study to further understand the role of the components of our proposed model. In particular, we test our model with individual components removed, namely the adjustment weight defined in Equation 6, the references, and the part-of-speech information.
We report the results in Table 4. For KPTimes and OpenKP, we sample 200 and 500 documents from their original validation and test sets to perform the ablation study. We observe that no model in the ablation study achieves the best performance in all cases. However, the full version of our model shows itself to be more well-rounded compared to its ablations. **Firstly**, the adjustment weight \(b(\mathbf{y})\) proves to be crucial, as removing it causes our model's performance to drop in most cases. This confirms that positional information is useful in predicting present keyphrases, as has been pointed out by previous work [13, 12]. Moreover, prioritizing phrases that exist in the phrase bank also proves to be effective for predicting absent keyphrases. **Next**, removing the references heavily affects absent keyphrase generation, especially on the out-of-domain dataset KPTimes. On the other hand, present keyphrase generation seems not to be affected when references are removed. **Finally**, the version of our model without part-of-speech information is able to maintain present keyphrase generation performance on the in-domain dataset (Krapivin), but worsens slightly when moved out-of-domain. For absent keyphrase generation, it seems that part-of-speech information does not help on KPTimes. A possible explanation is that KPTimes mostly contains single-word keyphrases and therefore grammatical information can offer little help in this case.

## 4 Case study

We display two examples of generated keyphrases from AutoKeyGen and our proposed model in Figure 4. The first example is from Krapivin, an in-domain dataset, while the second one is from KPTimes, an out-of-domain dataset.

Figure 4: Two examples of the generated keyphrases from AutoKeyGen and our proposed model. We illustrate the top 5 predictions. Correctly predicted keyphrases are underlined.
For the first example, we observe that both the proposed model and AutoKeyGen correctly predict the ground-truth (present and absent) keyphrases. However, it can be seen that, for generating absent keyphrases, AutoKeyGen only reorders words that are present in the given text. On the other hand, our model can generate keyphrases whose component words are absent, such as "relational models", "categorical models" and "kripke models". In the second example, it is clear that our model predicts more correct keyphrases. We observe that the absent keyphrases generated by AutoKeyGen are highly irrelevant. On the other hand, our model successfully predicts "storms" and also outputs other absent keyphrases that are relevant, although these are not among the ground-truth keyphrases. This example helps show that our model is better at handling documents from different domains.

## 5 Related work

### Unsupervised keyphrase extraction

Unsupervised keyphrase extraction (UKE) aims at identifying keyphrases within the text. Currently, there are three main classes of UKE methods, namely statistics-based, graph-based and embedding-based. Statistics-based methods Campos et al. (2018) employ features such as TF-IDF, word position and casing aspect to determine the relevance of a candidate phrase. Graph-based methods typically build a graph from the source text, where a node can be a word or a phrase. Then, different graph-theoretic measures are used to estimate the importance of nodes, and finally phrases are formed from the top-ranked nodes. TextRank Mihalcea and Tarau (2004) builds a word graph where a link between two words exists if they co-occur within a window. SingleRank Wan and Xiao (2008) and CiteTextRank Gollapalli and Caragea (2014) employ related documents to better measure similarity between word nodes. TopicRank Bougouin et al. (2013) and Topical PageRank Liu et al. (2010) incorporate topical information in the graph ranking algorithm.
Positional information is used in PositionRank Florescu and Caragea (2017) to favor keyphrases that appear earlier in the text. Boudin (2018) utilizes the structure of multipartite graphs to extract diverse keyphrases. Embedding-based methods utilize embedding spaces to measure the informativeness of candidates. EmbedRank Bennani-Smires et al. (2018) ranks candidates by measuring their distance to the source text in a pretrained sentence embedding space; an optional diversification step is then performed using maximal marginal relevance to ensure diversity of the extracted keyphrases. Liang et al. (2021) jointly model the local and global context of the document when ranking candidates.

### Unsupervised keyphrase generation

Keyphrase generation aims at predicting both present and absent keyphrases for the source text. To the best of our knowledge, AutoKeyGen Shen et al. (2022) is currently the only unsupervised keyphrase generation method. AutoKeyGen trains a seq2seq model on automatically generated silver-labeled document-keyphrase pairs. The silver keyphrases are both present and absent, where the present ones are extracted and the absent ones are constructed from words present in the text.

## 6 Conclusions

In this paper, we propose a new problem called unsupervised open-domain keyphrase generation. We propose a seq2seq model that consists of two modules: one is responsible for generating phrases, while the other guides the generation towards phrases that reflect the core concepts of the given text. Our experiments on eight benchmark datasets from multiple domains demonstrate that our model outperforms existing unsupervised methods and narrows the gap between unsupervised and supervised keyphrase generation models. Furthermore, we demonstrate that the proposed model can perform consistently across domains.
\begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline \multicolumn{5}{c}{**Present keyphrase generation**} \\ \hline & Krapivin & DUC-2001 & KPTimes & OpenKP \\ \cline{2-5} & F1@5 & F1@5 & F1@5 & F1@5 \\ \hline No adjustment weight & 17.4 & **19.3** & 21.3 & 10 \\ No references & 20.7 & 18.8 & 21.4 & **14.6** \\ No POS & **21.1** & 17.6 & 21.5 & 13.1 \\ Full & 20.5 & 18.2 & **21.8** & 14 \\ \hline \multicolumn{5}{c}{**Absent keyphrase generation**} \\ \hline & Krapivin & DUC-2001 & KPTimes & OpenKP \\ \cline{2-5} & R@10 & R@10 & R@10 & R@10 \\ \hline No adjustment weight & 5.8 & - & 2.5 & - \\ No references & 5.5 & - & 1 & - \\ No POS & 7.1 & - & **3.5** & - \\ Full & **7.2** & - & 3.2 & - \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study.

### Limitations

One limitation of the proposed method is that it does not consider domain-specific information to evaluate informativeness. The phraseness module has access to domain-specific knowledge, namely the phrases that occur in similar contexts, i.e. the references. On the other hand, the informativeness module only employs a domain-general sentence embedding model to measure the informativeness of phrases. Therefore, the integration of both domain-specific and domain-general information for the evaluation of informativeness may be worth further investigation. Another limitation of this work is that we only tested the proposed method on short texts, so the proposed framework's performance on long documents remains uncertain. Handling long texts could be significantly more difficult, as long texts contain much more information and can discuss a variety of topics. The final limitation of this work is the absence of experiments using different sentence embedding models to construct the informativeness module. It might therefore be useful to explore the impact of different sentence embedding models on keyphrase generation performance. We leave this for future work.
2305.09754
Controlling entanglement in a triple-well system of dipolar atoms
We study the dynamics of entanglement and atomic populations of ultracold dipolar bosons in an aligned three-well potential described by an extended Bose-Hubbard model. We focus on a sufficiently strong interacting regime where the couplings are tuned to obtain an integrable system, in which the time evolution exhibits a resonant behavior that can be exactly predicted. Within this framework, we propose a protocol that includes an integrability breaking step by tilting the edge wells for a short time through an external field, allowing the production of quantum states with a controllable degree of entanglement. We analyze this protocol for different initial states and show the formation of highly entangled states as well as NOON-like states. These results offer valuable insights into how entanglement can be controlled in ultracold atom systems that may be useful for the proposals of new quantum devices.
Karin Wittmann W, Leandro H. Ymai, Bruno H. C. Barros, Jon Links, Angela Foerster
2023-05-16T19:01:38Z
http://arxiv.org/abs/2305.09754v1
# Controlling entanglement in a triple-well system of dipolar atoms

###### Abstract

We study the dynamics of entanglement and atomic populations of ultracold dipolar bosons in an aligned three-well potential described by an extended Bose-Hubbard model. We focus on a sufficiently strong interacting regime where the couplings are tuned to obtain an integrable system, in which the time evolution exhibits a resonant behavior that can be exactly predicted. Within this framework, we propose a protocol that includes an integrability breaking step by tilting the edge wells for a short time through an external field, allowing the production of quantum states with a controllable degree of entanglement. We analyze this protocol for different initial states and show the formation of highly entangled states as well as NOON-like states. These results offer valuable insights into how entanglement can be controlled in ultracold atom systems that may be useful for the proposals of new quantum devices.

## I Introduction

Quantum entanglement is a phenomenon discovered in the foundations of quantum physics that paved the way for a new era of technological advances. It represents non-local correlations between separate parts of a quantum system. As a resource, entanglement has been proven to be very useful for performing numerous tasks that face barriers in a classical setting, finding broad applications in quantum information processing [1; 2; 3; 4; 5; 6], quantum teleportation [7; 8; 9; 10; 11], and quantum metrology and sensing [12; 13; 14]. Entangled states are key ingredients in the proposals of protocols for the development of new quantum devices [15; 16; 17; 18; 19; 20; 21], and hence understanding the mechanisms for producing and controlling entangled states with a high degree of precision is of fundamental importance.
In this context, the search for highly entangled states is the aim of many technological quantum applications [22], which can be pursued on different platforms. Among these, ultracold atoms are especially interesting because they enable the manipulation of atoms arranged in optical potentials with astonishing precision and versatile operational control [23; 24; 25]. In recent experiments on ultracold quantum gases, dipolar bosons are loaded into optical lattices to generate long-range dipole-dipole interaction (DDI), allowing access to fascinating novel quantum properties and phases [26]. The dynamics of such dipolar boson systems have been intensively studied and described, with good results, by an extended Bose-Hubbard model (EBHM) [27; 28]. One interesting feature of the EBHM with a few bosonic modes is that the interaction couplings can be tuned to achieve an integrable regime, which is particularly suited to the design of quantum devices. For instance, in [29], the conserved charge provided by integrability plays a crucial role when examining the quantum dynamics of a dipolar Bose-Einstein condensate (BEC) in a three-well aligned system, making it a potential candidate for constructing an atomic transistor [30]. Other integrable quantum systems have recently been utilized to support the development of quantum technologies. These include quantum circuits created from transfer matrices [31] and those created through the star-triangle relation [32], central spin models for quantum sensors [33], and the preparation of Bethe states on a quantum computer [34; 35; 36; 37]. Here we consider an integrable triple-well model of dipolar bosons and propose a protocol to create quantum states with a controllable entanglement level. The control is realized by breaking the integrability for a short time, and the resulting entanglement is characterized by the von Neumann entropy and correlation functions.
We test the protocol for a range of different initial states, demonstrating how to produce highly entangled states as well as other important quantum states such as NOON-like states [38]. The paper is organized as follows. In section II, we describe the system and discuss the conditions for obtaining an effective description of the integrable system in the resonant regime. In section III, we analyze the dynamics of the system and the entanglement behavior in the resonant regime. In section IV, we propose a protocol for controlling entanglement by briefly tilting the edge sites of the system. In sections V-VII, we analyze the action of the protocol on different initial states. A discussion of interferometric applications of the protocol and details of the ground state structure are given in the appendices. The conclusions are given in section VIII.

## II System description

We consider a system of dipolar atoms in an aligned triple-well potential described by the following extended Bose-Hubbard model: \[H = \frac{U_{0}}{2}\sum_{i=1}^{3}N_{i}(N_{i}-1)+\sum_{i=1}^{3}\sum_{j=1;j\neq i}^{3}\frac{U_{ij}}{2}N_{i}N_{j} \tag{1}\] \[-\frac{J}{\sqrt{2}}(a_{1}^{\dagger}a_{2}+a_{2}^{\dagger}a_{1}+a_{2}^{\dagger}a_{3}+a_{3}^{\dagger}a_{2}),\] where \(a_{i}\), \(a_{i}^{\dagger}\) and \(N_{i}=a_{i}^{\dagger}a_{i}\) are the bosonic annihilation, creation, and number operators of well (or site) \(i=1,2,3\), respectively. The coupling \(J\) denotes the hopping rate of atoms between neighboring wells, and \(U_{0}\) and \(U_{ij}=U_{ji}\) set the on-site and long-range interactions, respectively. The on-site interaction \(U_{0}=U_{sr}+U_{dd}\) results from the short-range interaction \(U_{sr}\) and the on-site dipole-dipole interaction (DDI) \(U_{dd}\). The short-range interaction \(U_{sr}\propto 4\pi\hbar^{2}a/m\) is determined by the \(s\)-wave scattering length \(a\), which is controlled through a magnetic field via a Feshbach resonance, and \(m\) is the mass of the atom.
The on-site DDI \(U_{dd}\propto\mu^{2}\) and the long-range interactions \(U_{ij}\propto\mu^{2}\) obey an inverse cubic law whose strength is determined by the permanent magnetic dipole moment \(\mu\) of the dipolar atoms considered, and depend strongly on the geometry of the trapping potential and the polarization direction of the dipoles [27; 39]. A schematic representation of this system is presented in Figure 1. For the particular case when \(U_{12}=U_{23}\) and \(U_{13}=U_{0}\), the Hamiltonian given by Eq. (1) is integrable [40] and can be reduced, up to a global constant, to \[H = U(N_{1}-N_{2}+N_{3})^{2} \tag{2}\] \[-\frac{J}{\sqrt{2}}(a_{1}^{\dagger}a_{2}+a_{2}^{\dagger}a_{1}+a_{2}^{\dagger}a_{3}+a_{3}^{\dagger}a_{2}),\] where \(N=N_{1}+N_{2}+N_{3}\) is the total number of particles and \(U=(U_{0}-U_{12})/4\) represents the effective interaction energy, which can be experimentally tuned by controlling the polarization direction and the depth of the potential trap. A discussion of the feasibility of a physical realization of this system by means of Bose-Einstein condensates of dipolar atoms can be found in [29]. In this integrable case, the model can be formulated and solved using the quantum inverse scattering method and Bethe ansatz methods [40]. It acquires the additional conserved operator \[Q=\frac{1}{2}(N_{1}+N_{3}-a_{1}^{\dagger}a_{3}-a_{3}^{\dagger}a_{1}), \tag{3}\] besides the Hamiltonian \(H\) and the total number of particles \(N\), resulting in three independent conserved operators, equal in number to the system modes. The conserved charge \(Q\) plays an important role in the _resonant regime_, characterized by the tunneling of atoms between the wells at the edges (labeled by \(i=1\) and \(3\)), while the number of particles in the middle well (\(i=2\)) remains approximately constant. This resonant behavior is a consequence of a second-order process that occurs in a relatively strong interaction regime [41; 27].
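As an illustration (not part of the paper), the Hamiltonian (2) and the charge (3) can be represented as matrices in the three-mode Fock basis and the conservation law \([H,Q]=0\) verified numerically; a small numpy sketch for modest particle numbers:

```python
import numpy as np

def fock_basis(N):
    """All occupations (n1, n2, n3) with n1 + n2 + n3 = N."""
    return [(n1, n2, N - n1 - n2)
            for n1 in range(N + 1) for n2 in range(N + 1 - n1)]

def add_hop(M, basis, index, i, j, coeff):
    """Add coeff * a_i^dagger a_j to the matrix M (0-based mode labels)."""
    for col, s in enumerate(basis):
        if s[j] == 0:
            continue
        new = list(s)
        amp = np.sqrt(new[j]); new[j] -= 1
        amp *= np.sqrt(new[i] + 1); new[i] += 1
        M[index[tuple(new)], col] += coeff * amp

def build_H_Q(N, U, J):
    basis = fock_basis(N)
    index = {s: k for k, s in enumerate(basis)}
    dim = len(basis)
    H = np.zeros((dim, dim)); Q = np.zeros((dim, dim))
    for k, (n1, n2, n3) in enumerate(basis):
        H[k, k] = U * (n1 - n2 + n3) ** 2   # interaction term of Eq. (2)
        Q[k, k] = 0.5 * (n1 + n3)           # diagonal part of Eq. (3)
    for i, j in [(0, 1), (1, 0), (1, 2), (2, 1)]:
        add_hop(H, basis, index, i, j, -J / np.sqrt(2))
    for i, j in [(0, 2), (2, 0)]:
        add_hop(Q, basis, index, i, j, -0.5)
    return basis, H, Q

basis, H, Q = build_H_Q(N=4, U=-2.0, J=1.0)
comm = H @ Q - Q @ H  # vanishes, since Q is conserved
```

In terms of the symmetric and antisymmetric edge modes \(b_{\pm}=(a_{1}\pm a_{3})/\sqrt{2}\), one has \(Q=b_{-}^{\dagger}b_{-}\) while \(H\) involves only \(b_{+}\) and \(a_{2}\), which makes the commutation manifest.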
More specifically, when \(|U(N-2l)/J|\gg 1\) and for the initial state \[|\Psi_{0}\rangle=|N-k-l,l,k\rangle, \tag{4}\] where \(l\) (\(l=0,...,N\)) and \(k\) (\(k=0,...,N-l\)) represent the number of atoms initially at wells \(2\) and \(3\), respectively, the quantum dynamics of the Hamiltonian (2) can be well described by the effective Hamiltonian [29], \[H_{\text{eff}}=\omega_{l}Q, \tag{5}\] where the constant \(\omega_{l}\) is given by \[\omega_{l}=\lambda_{l}J^{2} \tag{6}\] with \(\lambda_{l}=\frac{1}{4|U|}\left(\frac{l+1}{N-2l-1}-\frac{l}{N-2l+1}\right)\) depending on the initial number \(l\) of bosons in the middle well. The constant \(\omega_{l}\) plays the role of the resonant tunneling frequency, with period \(T_{l}=2\pi/\omega_{l}\). For the case where \(l=0\), we simply denote it by \(T\equiv T_{0}\). In the following sections, we first discuss the dynamical quantities that characterize the behavior of the system and provide information about its quantum entanglement. After that, we provide a protocol that briefly tilts wells 1 and 3 to control the entanglement of the quantum state. Then, we analyze the effects of the protocol on different initial states.

Figure 1: Schematic representation of the triple well system. The arrows represent the dipoles of atoms oriented along the direction of polarization. The coupling \(J\) represents the hopping rate, \(U_{0}\) characterizes on-site interactions, while \(U_{ij}\) characterizes the DDI between particles on different sites.

## III Dynamics of populations and entanglement

We start by considering the dynamics of the system described above in the integrable and resonant regime, and for convenience we set \(\hbar=1\).
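For reference, the resonant frequency of Eq. (6) is straightforward to evaluate; a short sketch using the parameter values that appear later in Figure 2 (\(N=20\), \(U=-2\), \(J=1\)):

```python
import math

def omega_l(N, l, U, J):
    """Resonant tunneling frequency of Eq. (6), valid when |U(N - 2l)/J| >> 1."""
    lam = ((l + 1) / (N - 2 * l - 1) - l / (N - 2 * l + 1)) / (4 * abs(U))
    return lam * J ** 2

N, U, J = 20, -2.0, 1.0
T = 2 * math.pi / omega_l(N, 0, U, J)  # resonant period for l = 0
```

For \(l=0\) the formula reduces to \(\omega_{0}=J^{2}/(4|U|(N-1))\), i.e. \(\omega_{0}=1/152\) here, so the period \(T\) is large compared to the bare hopping time scale, as expected for a second-order process.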
We focus on the time evolution of the average number of particles per well \[\langle N_{i}\rangle=\langle\Psi(t)|N_{i}|\Psi(t)\rangle \tag{7}\] and the von Neumann entanglement entropy \[S_{i}(t)=-\operatorname{Tr}\left[\rho_{i}(t)\log\rho_{i}(t)\right] \tag{8}\] where the density matrix is defined as \(\rho(t)=|\Psi(t)\rangle\langle\Psi(t)|\) and \(\rho_{i}(t)\) is the reduced density matrix of site \(i\), with the remaining subsystem traced out. The von Neumann entropy quantifies the bipartite entanglement between site \(i\) and the subsystem of the other two sites. In the integrable regime, an initial state described by \(|\Psi_{0}\rangle\) will evolve in time according to \[|\Psi(t)\rangle=\mathcal{U}(t)|\Psi_{0}\rangle \tag{9}\] where \(\mathcal{U}(t)\equiv e^{-iHt}\) is the time-evolution operator. In what follows, we will use \(|\Psi\rangle\) to refer to states obtained using Hamiltonian (2), and we will use the notation \(|\widetilde{\Psi}\rangle\) with a tilde to denote analytic states obtained using the effective Hamiltonian (5), from which analytic results can be derived. A comparison between the quantum states \(|\Psi\rangle\) and \(|\widetilde{\Psi}\rangle\) obtained for the same set of parameters will be quantified through the fidelity, defined as \(F=|\langle\Psi|\widetilde{\Psi}\rangle|^{2}\). We will consider that the state \(|\Psi\rangle\) approaches the analytic state \(|\widetilde{\Psi}\rangle\) when \(F>0.95\). For the case of the initial state (4), the state \(|\widetilde{\Psi}(t)\rangle\) predicts that \(\langle N_{2}\rangle=l\) remains constant, while the atoms oscillate harmonically between sites \(i=1\) and \(i=3\), according to the expectation values given by [29] \[\langle N_{i}\rangle=\frac{1}{2}\left[N-l+(2-i)(N-l-2k)\cos(\omega_{l}t)\right].
\tag{10}\] The expression above shows oscillations of maximum amplitude, with period \(T=2\pi/\omega_{0}\), when all atoms are initially located in one of the edge wells (i.e., when \(|\Psi_{0}\rangle=|N,0,0\rangle\) or \(|\Psi_{0}\rangle=|0,0,N\rangle\)), and a constant equilibrium with \(\langle N_{1}\rangle=\langle N_{3}\rangle=N/2\) when the edge wells initially have the same number of atoms (i.e., when \(|\Psi_{0}\rangle=|l,0,l\rangle\), with \(N=2l\)). These two extreme cases will be the subject of our study later on. Figure 2 shows some numerical results for the case \(|\Psi_{0}\rangle=|N,0,0\rangle\) using the Hamiltonian (2). Figure 2-a shows the perfect agreement between the results of the numerical simulation and the expectation values given in (10). In Fig. 2-b, the time evolution of the entanglement entropy \(S_{1}\) has period \(T/2\), with its first maximum occurring at \(t=T/4\), exactly when the populations reach an equilibrium with \(\langle N_{1}\rangle=\langle N_{3}\rangle\). Nevertheless, although the von Neumann entanglement entropy is the most frequently used measure to quantify entanglement, it does not depend on any particular observable, making it difficult to perform a direct experimental measurement of its magnitude. In order to generate signatures that indicate the formation of highly entangled states, in addition to enabling experimental measurements, we also evaluate the two-site correlation function defined as \[C_{ij}\equiv|\langle N_{i}\rangle\langle N_{j}\rangle-\langle N_{i}N_{j}\rangle|. \tag{11}\] Using the state \(|\widetilde{\Psi}(t)\rangle\), we can derive the correlation function in the closed form \(C_{13}=(N/4)\sin^{2}(t\,\omega_{0})\), from which a maximum value at \(t=T/4\) is directly obtained. The agreement between the numerical simulation of \(C_{13}\) and its analytic formula can be seen in Fig. 2-c.
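The analytic predictions (10) and the closed form for \(C_{13}\) are easy to evaluate directly; a minimal sketch (function names are ours):

```python
import math

def populations(t, N, l, k, omega):
    """Analytic expectation values of Eq. (10); well 2 stays at l."""
    osc = (N - l - 2 * k) * math.cos(omega * t)
    n1 = 0.5 * (N - l + osc)  # i = 1, factor (2 - i) = +1
    n3 = 0.5 * (N - l - osc)  # i = 3, factor (2 - i) = -1
    return n1, l, n3

def c13(t, N, omega0):
    """Closed-form two-site correlation C13 = (N/4) sin^2(omega0 * t)."""
    return N / 4 * math.sin(omega0 * t) ** 2
```

At \(t=T/4\) (i.e. \(\omega_{0}t=\pi/2\)) the edge populations equalize at \(N/2\) and \(C_{13}\) reaches its maximum \(N/4\), matching the behavior shown in Figure 2.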
From Figures 2-b and c it is clear that the maximum values of the two-site correlation functions and the entanglement entropy occur simultaneously. This result shows that the two-site correlation function is also able to reveal information about the quantum entanglement of the subsystem of wells. It is worth noting that the entanglement entropy \(S_{2}\) vanishes, since the state of site 2 remains constant in the resonant regime, while \(S_{1}=S_{3}\neq 0\), showing that bipartite entanglement is present only in the subsystem of sites 1 and 3.

Figure 2: (**a**) Expectation value of \(N_{1}/N\) (green line) and \(N_{3}/N\) (yellow line) for the initial state \(|\Psi_{0}\rangle=|20,0,0\rangle\), \(U=-2\) and \(J=1\). The marks represent the analytic expression (10). (**b**) Time evolution of the entanglement entropy \(S_{1}\) in units of \(S_{1}^{max}=\ln(N+1)\). (**c**) Time evolution of the correlation function \(C_{13}\) in units of \(C_{13}^{max}=N/4\). The numerical simulation (11) is represented by the solid line, while the marks result from the closed-form expression. The vertical dotted lines mark the instant \(t=T/4\), where \(\langle N_{1}\rangle=\langle N_{3}\rangle\) and the values of \(S_{1}\) and \(C_{13}\) are maximum.

## IV Protocol for quantum entanglement control We now focus on establishing a protocol for generating and controlling maximally entangled states. The control of quantum entanglement can be achieved by tilting wells 1 and 3 through the action of an additional coherent light beam superimposed on the triple well system designed on an optical trap. During the presence of the tilt on wells 1 and 3, the dynamics is governed by the Hamiltonian [30] \[\mathcal{H}(\epsilon)=H+\epsilon(N_{3}-N_{1}), \tag{12}\] where \(H\) is the integrable Hamiltonian (2) and the parameter \(\epsilon\) characterizes the energy offset between the edge potential wells (see Figure 3). Other properties of this Hamiltonian (12) can be found in [29, 42].
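The von Neumann entropy of Eq. (8) can be computed for any bipartite pure state from the Schmidt (singular) values of the reshaped state vector. A minimal NumPy sketch follows; the two-mode example state and all names are our illustrative choices, not the paper's numerics. It reproduces the hallmark value \(S=\ln 2\) for an equal superposition of two Fock states:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy S = -Tr[rho_A ln rho_A] of a bipartite pure state.

    For a pure state, the eigenvalues of the reduced density matrix rho_A are
    the squared singular values of psi reshaped to a dim_a x dim_b matrix.
    """
    m = psi.reshape(dim_a, dim_b)
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-12]                      # drop numerically-zero Schmidt weights
    return float(-np.sum(p * np.log(p)))

# Two modes with occupations 0..2: the state (|2,0> + |0,2>)/sqrt(2)
psi = np.zeros(9)
psi[2 * 3 + 0] = 1 / np.sqrt(2)           # amplitude of |2,0>
psi[0 * 3 + 2] = 1 / np.sqrt(2)           # amplitude of |0,2>
S = entanglement_entropy(psi, 3, 3)
```

A product state (e.g. \(|0,0\rangle\)) gives \(S=0\) with the same routine, matching the vanishing \(S_{2}\) noted above for a site that stays disentangled.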
Here we will examine the case where the tilt is introduced into the protocol as a short-duration square pulse just after the initial state evolves to the state with maximum correlation at \(t=T/4\), as identified in the previous section. It will be seen that the amount of quantum entanglement is completely determined by the duration of the square pulse. The full description of the protocol can be represented as follows \[|\Psi(t)\rangle=|\Psi_{k}(t)\rangle,\quad t_{k-1}\leq t\leq t_{k},\] where the states for steps \(k=1,2\), and \(3\) of the protocol are given sequentially by \[|\Psi_{1}(t)\rangle=\mathcal{U}(t-t_{0},0)|\Psi_{0}\rangle,\] \[|\Psi_{2}(t)\rangle=\mathcal{U}(t-t_{1},\epsilon)|\Psi_{1}(t_{1} )\rangle,\] \[|\Psi_{3}(t)\rangle=\mathcal{U}(t-t_{2},0)|\Psi_{2}(t_{2})\rangle.\] Here, \(\mathcal{U}(t,\epsilon)\equiv e^{-i\mathcal{H}(\epsilon)t}\) is the time evolution operator. This sequence is depicted in Figure 3 below, illustrating the dependence on the parameter \(\epsilon\). In what follows, we continue to adopt the notation \(|\Psi\rangle\) (without tilde) for states obtained using the Hamiltonian (12), and \(|\widetilde{\Psi}\rangle\) for analytic states obtained using the effective Hamiltonian \(\mathcal{H}_{\text{eff}}(\epsilon)=H_{\text{eff}}+\epsilon(N_{3}-N_{1})\), where \(H_{\text{eff}}\) is given by (5). At the end of the whole process, the protocol generates the state \[|\Psi_{\text{out}}\rangle \equiv |\Psi_{3}(t_{3})\rangle \tag{13}\] \[= \mathcal{U}(\Delta t_{3},0)\mathcal{U}(\Delta t_{2},\epsilon) \mathcal{U}(\Delta t_{1},0)|\Psi_{0}\rangle,\] where \(\Delta t_{k}=t_{k}-t_{k-1}\) is the duration of the \(k\)-th (\(k=1,2,3\)) step of the protocol. As mentioned earlier, we are assuming that \(t_{0}=0\), \(\Delta t_{1}=T/4\) and \(\Delta t_{2}\ll\Delta t_{1,3}\), such that the breaking of integrability is the dominant effect in the second step of the protocol.
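The three-step composition of Eq. (13) can be sketched generically with matrix propagators. In the toy code below, a random Hermitian matrix stands in for the integrable Hamiltonian (2) and diagonal matrices stand in for \(N_{1}\) and \(N_{3}\) — these stand-ins, the dimension, and all parameter values are our assumptions for illustration only. The propagator is built by exact diagonalization of the Hermitian generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's operators: a random Hermitian H for the
# integrable Hamiltonian (2), diagonal matrices for N_1 and N_3.
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
N1 = np.diag(np.arange(dim, dtype=float))
N3 = np.diag(np.arange(dim, dtype=float)[::-1])

def U(dt, eps):
    """Propagator exp(-i H(eps) dt) with H(eps) = H + eps (N3 - N1), cf. Eq. (12),
    via eigendecomposition of the Hermitian generator."""
    w, v = np.linalg.eigh(H + eps * (N3 - N1))
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

def protocol(psi0, dt1, dt2, dt3, eps):
    """Three-step sequence of Eq. (13): free evolution, tilted pulse, free evolution."""
    return U(dt3, 0.0) @ U(dt2, eps) @ U(dt1, 0.0) @ psi0

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0
psi_out = protocol(psi0, dt1=0.7, dt2=0.05, dt3=0.7, eps=1.0)
```

Two properties follow immediately: the composed map preserves the norm (each factor is unitary), and with \(\epsilon=0\) the three steps collapse into a single free evolution of total duration \(\Delta t_{1}+\Delta t_{2}+\Delta t_{3}\).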
In the following sections, the action of the protocol on different initial input states will be investigated in detail. ## V Input Fock State We start by first considering the case of a completely localized initial state given by \(|\Psi_{0}\rangle=|N,0,0\rangle\). Figure 4 shows the effect of the protocol on the dynamics for different values of the duration of the square pulse, \(\Delta t_{2}\), counted in units of the period \(T_{\epsilon}=2\pi/\Omega_{\epsilon}\), where \(\Omega_{\epsilon}=2\epsilon\). The first row of Fig. 4 shows, after the action of the square pulse (\(t\geq t_{2}\)), the expectation value of the fractional population of sites \(i=1,3\), which is given by \[\langle N_{i}\rangle/N=1/2-(1-i/2)\sin[\omega_{0}(t-t_{2})]\cos\phi, \tag{14}\] where \(\phi\) is a dimensionless parameter defined as \[\phi=2\epsilon\Delta t_{2}.\] We observe that the amplitude of the expectation values of \(N_{i}/N\) decreases gradually with increasing pulse duration \(\Delta t_{2}\), until the dynamics becomes stationary and balanced for a long time at \(\Delta t_{2}=T_{\epsilon}/4\) (or \(\phi=\pi/2\)) and completely reversed at \(\Delta t_{2}=T_{\epsilon}/2\) (or \(\phi=\pi\)). In the second row of Fig. 4, the range of values of the entanglement entropy gradually decreases with increasing duration of the pulse, becoming stationary at its maximum value at \(\Delta t_{2}=T_{\epsilon}/4\). The dynamics of entanglement of the state \(|\Psi(t)\rangle\) along the control process is also signaled in the third row of Fig. 4 through the correlation function of sites 1 and 3.

Figure 3: Schematic representation of the control protocol sequence. The tilt \(\Delta V\) between the potentials of sites 1 and 3 is induced through an external field. The duration of the field allows the entanglement in the subsystem consisting of sites 1 and 3 to be controlled.
In Figure 5 we present the entanglement entropy of state \(\left|\Psi_{\text{out}}\right\rangle\) as a function of \(\phi\) for three time intervals \(\Delta t_{3}=T/16\), \(T/8\), \(T/4\). We observe that entanglement entropy can be controlled over a larger range of values at \(\Delta t_{3}=T/4\). Therefore, for fixed duration \(\Delta t_{3}=T/4\), the protocol predicts the following quantum state \[\left|\widetilde{\Psi}_{\text{out}}(\phi)\right\rangle=\frac{\left[\sin(\phi/ 2)\,a_{1}^{\dagger}+\cos(\phi/2)\,a_{3}^{\dagger}\right]^{N}}{\sqrt{N!}}|0,0,0\rangle, \tag{15}\] where \(|0,0,0\rangle\) is the vacuum state. From the above expression, the correlation function of sites 1 and 3 can be determined analytically as a function of parameter \(\phi\) and it is given by \(C_{13}=(N/4)\sin^{2}\phi\). When performing the control within the interval \(\Delta t_{2}\in[0,T_{\epsilon}]\), the expression above shows that the maximized correlation \(C_{13}^{max}=N/4\) occurs at \(\phi=\pi/2\,(3\pi/2)\), when the state \(\left|\Psi_{\text{out}}\right\rangle\) has the maximum entanglement entropy with all atoms into the (anti)symmetric coherent state with fidelity \(F=0.99827\,(0.999509)\): \[\left|\widetilde{\Psi}_{\text{out}}\right\rangle=\frac{\left(a_{1}^{\dagger} \pm a_{3}^{\dagger}\right)^{N}}{\sqrt{2^{N}N!}}|0,0,0\rangle.\] If all atoms are initially loaded into site 3 (i.e., \(\left|\Psi_{0}\right\rangle=\left|0,0,N\right\rangle\)), the states with maximum correlation are generated with symmetry reversed compared to the case where \(\left|\Psi_{0}\right\rangle=\left|N,0,0\right\rangle\). In the next section, we consider the case where initially both sites 1 and 3 have the same number of atoms. 
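The closed form \(C_{13}=(N/4)\sin^{2}\phi\) quoted above can be verified directly from state (15): expanding the binomial \([\sin(\phi/2)\,a_{1}^{\dagger}+\cos(\phi/2)\,a_{3}^{\dagger}]^{N}/\sqrt{N!}\) on the vacuum shows that \(N_{1}\) is binomially distributed with success probability \(p=\sin^{2}(\phi/2)\) and \(N_{3}=N-N_{1}\). The sketch below (names are ours) computes \(C_{13}\) numerically from those Fock-basis probabilities and compares it with the analytic curve:

```python
import numpy as np
from math import comb

def c13_from_state(N, phi):
    """C_13 for |Psi_out(phi)> of Eq. (15), built from its Fock amplitudes.

    P(N_1 = n) = C(N, n) p^n (1-p)^(N-n) with p = sin^2(phi/2), N_3 = N - n.
    """
    p = np.sin(phi / 2) ** 2
    probs = np.array([comb(N, n) * p**n * (1 - p) ** (N - n) for n in range(N + 1)])
    n = np.arange(N + 1)
    e_n1 = np.sum(probs * n)
    e_n3 = np.sum(probs * (N - n))
    e_n1n3 = np.sum(probs * n * (N - n))
    return abs(e_n1 * e_n3 - e_n1n3)      # Eq. (11)

N = 20
phis = np.linspace(0.0, 2 * np.pi, 101)
vals = np.array([c13_from_state(N, p) for p in phis])
```

The numerically-built correlation matches \((N/4)\sin^{2}\phi\) at every grid point, with the maximum \(N/4\) at \(\phi=\pi/2\) as stated in the text.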
Figure 4: Time evolution of the expectation values (row **a**) \(N_{1}/N\) (green line), \(N_{2}/N\) (dotted line), \(N_{3}/N\) (yellow line); entanglement entropy in units of \(S^{max}=\log(N+1)\) (row **b**); and two-site correlation function in units of \(C_{13}^{max}=N/4\) (row **c**). Each column represents a different value of \(\Delta t_{2}\). The first column represents the integrable case, where \(\Delta t_{2}=0\). The other columns show the cases where \(\Delta t_{2}=T_{\epsilon}/8\), \(\Delta t_{2}=T_{\epsilon}/4\) and \(\Delta t_{2}=T_{\epsilon}/2\), in sequence. In all cases, initial state \(|\Psi_{0}\rangle=|20,0,0\rangle\), \(U=2\), \(J=1\) and \(\epsilon=1\). The vertical dashed lines represent the instant \(t=t_{2}\).

## VI Twin-Fock input state In this section we investigate quantum entanglement control for the case of an initial twin-Fock state in sites 1 and 3, given by \(|\Psi_{0}\rangle=|l,0,l\rangle\), for which \(N=2l\) and the expectation values \(\langle N_{1}\rangle=\langle N_{3}\rangle=l\) remain constant under integrable time evolution in the resonant regime. Figure 6 presents the dynamics of the entanglement entropy of the state \(|\Psi(t)\rangle\) for three different durations of the square pulse. Again, for \(\Delta t_{2}=T_{\epsilon}/4\), the entanglement entropy is stationary. Figure 7 shows the entanglement entropy \(S_{1}\) as a function of the dimensionless parameter \(\phi\) for three time intervals \(\Delta t_{3}=T/16\), \(T/8\), and \(T/4\).
In this case, the output state \(|\Psi_{\text{out}}\rangle\) presents high entanglement entropy, with a small dip at \(\Delta t_{2}=T_{\epsilon}/4\) in its signature. For \(\Delta t_{3}=T/4\), the state predicted by our protocol is given by \[|\widetilde{\Psi}_{\text{out}}(\phi)\rangle=\frac{\left(2\cos \phi\,a_{1}^{\dagger}a_{3}^{\dagger}+\sin\phi[(a_{1}^{\dagger})^{2}-(a_{3}^{ \dagger})^{2}]\right)^{l}}{2^{l}l!}|0,0,0\rangle. \tag{16}\] The above state allows the correlation function of sites 1 and 3 to be determined analytically as a function of the parameter \(\phi\): \[C_{13}=\frac{l(l+1)}{2}\sin^{2}\phi.\] In particular, for \(\phi=\pi/2\) and \(\phi=3\pi/2\), the correlation achieves its maximum value \(C_{13}=l(l+1)/2\) and the state \(|\widetilde{\Psi}_{\text{out}}\rangle\) is highly entangled, with respective fidelities \(F=0.998315\) and \(F=0.998771\), given by (up to a global phase) \[|\widetilde{\Psi}_{\text{out}}\rangle=\frac{\left[(a_{1}^{\dagger})^{2}-(a_{3} ^{\dagger})^{2}\right]^{l}}{2^{l}l!}|0,0,0\rangle.\] The above state shows that the protocol acts on the initial twin-Fock state by performing a discrete Fourier transform on modes 1 and 3, defined as \(a_{1(3)}^{\dagger}\rightarrow(a_{1}^{\dagger}\pm a_{3}^{\dagger})/\sqrt{2}\) [43], which leads to a quantum state with only an even number of particles at sites 1 and 3. This result can be interpreted as a destructive interference process on the odd numbers of particles, similar to the well-known Hong-Ou-Mandel (HOM) effect [44; 45].
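The HOM-type interference invoked here can be illustrated for the smallest twin-Fock case, \(l=1\): a 50:50 mode-mixing (the discrete Fourier transform above) applied to \(|1,1\rangle\) suppresses the odd-occupation component entirely. The sketch below is a toy two-mode model on a truncated Fock space, not the paper's full triple-well simulation; the truncation is exact here because particle number is conserved:

```python
import numpy as np

cutoff = 3                      # occupations 0..2 per mode: exact for 2 particles
a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)   # truncated annihilation operator
I = np.eye(cutoff)
a1, a3 = np.kron(a, I), np.kron(I, a)

def beam_splitter(theta):
    """exp(-i theta (a1^+ a3 + a3^+ a1)): two-mode mixing; theta = pi/4 is 50:50."""
    G = a1.conj().T @ a3 + a3.conj().T @ a1
    w, v = np.linalg.eigh(G)
    return (v * np.exp(-1j * theta * w)) @ v.conj().T

def fock(n1, n3):
    psi = np.zeros(cutoff * cutoff, dtype=complex)
    psi[n1 * cutoff + n3] = 1.0
    return psi

psi_out = beam_splitter(np.pi / 4) @ fock(1, 1)   # HOM input |1,1>
p11 = abs(psi_out[1 * cutoff + 1]) ** 2
p20 = abs(psi_out[2 * cutoff + 0]) ** 2
p02 = abs(psi_out[0 * cutoff + 2]) ** 2
```

The output has zero weight on \(|1,1\rangle\) and equal weight 1/2 on \(|2,0\rangle\) and \(|0,2\rangle\) — the even-occupation structure claimed for state (16) at \(\phi=\pi/2\).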
It is worth noting that, in the resonant regime, the time evolution operators \(\mathcal{U}(T/4,0)\) and \(\mathcal{U}(\Delta t_{2},\epsilon)\) used to generate the output state \(|\Psi_{\text{out}}\rangle\) play roles analogous to the \(50:50\) beam-splitter and phase-shifter operations in a Mach-Zehnder (MZ) interferometer [46; 47]. This shows that the protocol is capable of performing interferometric operations in which the phase estimation sensitivity depends on the choice of the initial state and the observable to be detected (see Appendix A). ## VII Entangled input state In the previous sections, we considered a class of non-entangled initial states in which the state of well 2 remains constant over time, and therefore remains disentangled from the rest of the system. Now we will consider an entangled initial state in which quantum entanglement between well 2 and the subsystem composed of the other two wells is also manifest. To this end, let us analyze the effect of the protocol on the initial state defined as \[|\Psi_{0}\rangle=\frac{1}{\sqrt{2}}|0,N,0\rangle+\frac{1}{\sqrt{2N!}}\left( \frac{a_{1}^{\dagger}+a_{3}^{\dagger}}{\sqrt{2}}\right)^{N}|0,0,0\rangle. \tag{17}\] The above state has a NOON-like state (NLS) structure, in the sense that it is a superposition between the state with all particles in well 2 and the state with all atoms in the subsystem of wells 1 and 3. However, in the latter component all particles of the subsystem of wells 1 and 3 are in the coherent state \(|\text{CS}\rangle=\frac{1}{\sqrt{N!}}\left(\frac{a_{1}^{\dagger}+a_{3}^{ \dagger}}{\sqrt{2}}\right)^{N}|0,0,0\rangle\). The motivation for its study is directly related to the ground state of the integrable Hamiltonian (see Appendix B). Now, considering the case of initial state (17), the effective Hamiltonian is still given by (5) with \(l=0\), since \(\omega_{N}=\omega_{0}\).
Then, the protocol predicts the following quantum state \[|\widetilde{\Psi}_{\text{out}}(\phi)\rangle = \frac{1}{\sqrt{2}}|0,N,0\rangle \tag{18}\] \[+\frac{1}{\sqrt{2N!}}\left(c_{\phi}a_{1}^{\dagger}-s_{\phi}a_{3}^ {\dagger}\right)^{N}|0,0,0\rangle,\] where we define \[c_{\phi}=\cos\left(\frac{\phi}{2}-\frac{\pi}{4}\right),\quad s_{\phi}=\sin \left(\frac{\phi}{2}-\frac{\pi}{4}\right).\] Figure 8 presents the entanglement entropies \(S_{1}\), \(S_{2}\), and \(S_{3}\) as functions of the parameter \(\phi\). The figure clearly shows that the entropy \(S_{2}\) remains constant at \(S_{2}=\log 2\), while the other entropies exhibit a dip at \(\phi=\pi/2\) \((3\pi/2)\) with the typical value \(S_{1(3)}=\log 2\) of a NOON state. In addition, the two-site correlation functions obtained from the quantum state (18) are given by (see Figure 9) \[C_{13} = \frac{N(N-2)}{16}\cos^{2}\phi,\] \[C_{12} = \frac{N^{2}}{4}\cos^{2}\left(\frac{\phi}{2}-\frac{\pi}{4}\right),\] \[C_{23} = \frac{N^{2}}{4}\sin^{2}\left(\frac{\phi}{2}-\frac{\pi}{4}\right).\] From Figure 9, it is clear that the maximum of \(C_{12}\) coincides with the vanishing of \(C_{23}\), and vice versa, with \(C_{13}=0\) at \(\phi=\pi/2\) and \(\phi=3\pi/2\), producing the corresponding states \[|\widetilde{\Psi}_{\text{out}}(\pi/2)\rangle = \frac{1}{\sqrt{2}}|0,N,0\rangle+\frac{1}{\sqrt{2}}|N,0,0\rangle,\] \[|\widetilde{\Psi}_{\text{out}}(3\pi/2)\rangle = \frac{1}{\sqrt{2}}|0,N,0\rangle+\frac{(-1)^{N}}{\sqrt{2}}|0,0,N\rangle,\] with the fidelities \(F=0.996123\) and \(F=0.973085\), respectively. The above NOON states can be seen as the result of an entanglement deconcentration process [48; 49; 50] through a unitary transformation on the NLS state, since they are produced in the subsystems 12 and 23 with less entanglement entropy \(S_{1,3}\) than the initial state. This result shows that the protocol controls the transition between the bipartite and tripartite entanglement of the quantum state.
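The complementarity just described follows directly from the three closed-form correlations, which can be evaluated on a grid of \(\phi\) (the sketch uses the same \(N=10\) as Figure 9; variable names are ours):

```python
import numpy as np

N = 10
phi = np.linspace(0.0, 2 * np.pi, 1001)

# Closed-form two-site correlations of state (18)
c13 = (N * (N - 2) / 16) * np.cos(phi) ** 2
c12 = (N**2 / 4) * np.cos(phi / 2 - np.pi / 4) ** 2
c23 = (N**2 / 4) * np.sin(phi / 2 - np.pi / 4) ** 2
```

Since \(\cos^{2}x+\sin^{2}x=1\), the sum \(C_{12}+C_{23}=N^{2}/4\) is constant in \(\phi\), and at \(\phi=\pi/2\) (grid index 250) \(C_{12}\) is maximal while \(C_{23}\) and \(C_{13}\) vanish — exactly the pattern that singles out the two NOON states above.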
It also suggests that the triple-well system can be thought of as a potential shared router operating at the interface between two individual quantum devices to perform the transfer of a NOON state. ## VIII Conclusion We have proposed a protocol to generate states with controlled levels of entanglement, where the control is realized by breaking the integrability for a short period of time. Our study provides closed formulas for correlation functions to characterize the entanglement in terms of the integrability-breaking time, which allowed us to predict the time required to generate highly entangled states. In the action of the protocol on one of the initial states, the maximum correlation predicts the formation of NOON states, whereas, for the other unentangled initial states, the maximum correlations are closely related to interference processes. Our results have the potential to open new avenues for the manipulation and short-range transfer of entangled states within multimode systems. These may find applications in quantum routing processes of new devices based on ultracold quantum technology. ## IX Acknowledgments The authors acknowledge support from CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico) - Edital Universal 406563/2021-7. AF and JL are supported by the Australian Research Council through Discovery Project DP200101339. We thank Rafael Barkknecht for helpful discussions. ## Appendix A Interferometry In this section, we discuss some interferometric aspects of the protocol proposed in section IV. First, we consider the state produced in equation (15) to calculate the population imbalance between sites 1 and 3. This provides an interference pattern as a function of the parameter \(\phi\) according to the equation \[\langle N_{1}-N_{3}\rangle\equiv\langle\widetilde{\Psi}_{\text{out}}(\phi)|N_{1}-N _{3}|\widetilde{\Psi}_{\text{out}}(\phi)\rangle=-N\cos\phi.\]

Figure 9: Correlation functions (in units of \(C^{max}=N^{2}/4\)) \(C_{12}\) (solid line), \(C_{23}\) (dot-dashed line) and \(C_{13}\) (dotted line) for \(N=10\) and \(U=-1.3\).

Note that the unconventional negative sign can be changed by extending the duration of the last operation to \(\Delta t_{3}=3T/4\). The phase uncertainty can be obtained using error propagation theory [51] and is given by \[\Delta\phi=\frac{\Delta(N_{1}-N_{3})}{|\partial_{\phi}(N_{1}-N_{3})|}=\frac{1 }{\sqrt{N}},\] where the notation \(\Delta X=\sqrt{\langle X^{2}\rangle-\langle X\rangle^{2}}\) is the standard deviation of the operator \(X\). The above result shows that the uncertainty of the parameter \(\phi\) is shot-noise limited. The sensitivity to the parameter \(\phi\) can be improved for the case of the initial twin-Fock state \(|l,0,l\rangle\) at sites 1 and 3. This can be achieved by detecting the parity operator \(\Pi_{1}=e^{-i\pi N_{1}}\) [52], whose expectation value for the output state generated in equation (16) is given by \[\langle\Pi_{1}\rangle\equiv\langle\widetilde{\Psi}_{\text{out}}(\phi)|\Pi_{1} |\widetilde{\Psi}_{\text{out}}(\phi)\rangle=P_{l}(\cos(2\phi-\pi)),\] where \[P_{l}(x)=\sum_{k=0}^{\lfloor l/2\rfloor}\frac{(-1)^{k}}{2^{l}}\binom{l}{k} \binom{2l-2k}{l}x^{l-2k}\] are the Legendre polynomials. The sensitivity to the parameter \(\phi\) can be estimated by \[\Delta\phi=\frac{\Delta\Pi_{1}}{|\partial_{\phi}\langle\Pi_{1}\rangle|}, \tag{10}\] which shows that the uncertainty of the parameter \(\phi\) approaches the Heisenberg limit \(\Delta\phi\approx 1/(2l)\) when \(\phi\approx\pi/2\) (see [52] for details). ## Appendix B Ground state In this section, we discuss the structure of the ground state of the integrable Hamiltonian (2) in the resonant regime with \(U<0\).
To this end, we first note that the Hamiltonian (2) can be reduced to a Bose-Hubbard Hamiltonian with a two-site structure (see, for instance, [53]) \[H=U(N_{13}-N_{2})^{2}-J(a_{2}^{\dagger}a_{13}+a_{13}^{\dagger}a_{2}),\] by identifying the single-mode operator \(a_{13}=\frac{a_{1}+a_{3}}{\sqrt{2}}\) and the total number of particles \(N_{13}=N_{1}+N_{3}\) in the subsystem of sites 1 and 3. On the other hand, for a small number of atoms, it is known that the ground state of the two-site Bose-Hubbard Hamiltonian admits the generation of the NOON state \(|\text{NOON}\rangle=\frac{1}{\sqrt{2}}(|N,0\rangle+|0,N\rangle)\) in the strongly repulsive interaction regime [54], which has entanglement entropy \(S_{\text{NOON}}=-Tr(\rho_{1}\log\rho_{1})=\log 2\), since it has only one pair of equally likely basis Fock states. Likewise, in the resonant regime (with \(U<0\)) for small \(N\sim 10\), the ground state \(|\text{GS}\rangle\) of the three-mode integrable Hamiltonian (2) presents high fidelity (above 0.99) to the NOON-like state (NLS) \[|\text{NLS}\rangle=\frac{1}{\sqrt{2}}|0,N,0\rangle+\frac{1}{\sqrt{2}}\frac{(a_ {13}^{\dagger})^{N}}{\sqrt{N!}}|0,0,0\rangle.\] Figure 10 presents the fidelity \(F=|\langle\text{NLS}|\text{GS}\rangle|^{2}\) as a function of \(|U/J|\) and \(N\) for \(U<0\).

Figure 10: (**a**) Fidelity vs \(|U/J|\) for \(U<0\) and \(N=5\) (turquoise), \(N=10\) (magenta) and \(N=12\) (pink). (**b**) Fidelity vs \(N\) for \(U=-1.0\) (dashed line), \(U=-1.3\) (solid line) and \(U=-2.0\) (dot dashed line). The vertical lines mark \(|U/J|=1.3\) and \(N=10\), values used in section VII.
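The NOON structure of a two-site Bose-Hubbard ground state can be checked by exact diagonalization of a standalone two-site model. The sketch below uses an attractive interaction in the same \(U(n_{1}-n_{2})^{2}\) sign convention as the reduced Hamiltonian above; the particle number and couplings (\(N=4\), \(U=-2\), \(J=1\)) are our illustrative choices, not values from the paper:

```python
import numpy as np

def two_site_ground_state(N, U, J):
    """Ground state of H = U (n1 - n2)^2 - J (a1^+ a2 + a2^+ a1) in the
    N-particle sector, in the basis |n, N-n>, n = 0..N."""
    n = np.arange(N + 1)
    H = np.diag(U * (2.0 * n - N) ** 2)                     # interaction term
    hop = -J * np.sqrt((n[:-1] + 1.0) * (N - n[:-1]))       # <n+1|a1^+ a2|n>
    H += np.diag(hop, k=1) + np.diag(hop, k=-1)
    w, v = np.linalg.eigh(H)
    return v[:, 0]

N, U, J = 4, -2.0, 1.0
gs = two_site_ground_state(N, U, J)
noon = np.zeros(N + 1)
noon[0] = noon[N] = 1 / np.sqrt(2)
fidelity = abs(noon @ gs) ** 2
```

With the interaction dominating the hopping, the two maximally imbalanced Fock states are nearly degenerate and tunnel-coupled, so the ground state is close to their symmetric (NOON) superposition.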
arXiv: 2304.08763
Title: A Survey for Biomedical Text Summarization: From Pre-trained to Large Language Models
Authors: Qianqian Xie, Zheheng Luo, Benyou Wang, Sophia Ananiadou
Published: 2023-04-18T06:38:40Z
Link: http://arxiv.org/abs/2304.08763v2

Abstract: The exponential growth of biomedical texts such as biomedical literature and electronic health records (EHRs), poses a significant challenge for clinicians and researchers to access clinical information efficiently. To tackle this challenge, biomedical text summarization (BTS) has been proposed as a solution to support clinical information retrieval and management. BTS aims at generating concise summaries that distill key information from single or multiple biomedical documents. In recent years, the rapid advancement of fundamental natural language processing (NLP) techniques, from pre-trained language models (PLMs) to large language models (LLMs), has greatly facilitated the progress of BTS. This growth has led to numerous proposed summarization methods, datasets, and evaluation metrics, raising the need for a comprehensive and up-to-date survey for BTS. In this paper, we present a systematic review of recent advancements in BTS, leveraging cutting-edge NLP techniques from PLMs to LLMs, to help understand the latest progress, challenges, and future directions. We begin by introducing the foundational concepts of BTS, PLMs and LLMs, followed by an in-depth review of available datasets, recent approaches, and evaluation metrics in BTS. We finally discuss existing challenges and promising future directions in the era of LLMs. To facilitate the research community, we line up open resources including available datasets, recent approaches, codes, evaluation metrics, and the leaderboard in a public project: https://github.com/KenZLuo/Biomedical-Text-Summarization-Survey/tree/master. We believe that this survey will be a useful resource to researchers, allowing them to quickly track recent advancements and provide guidelines for future BTS research within the research community.
# A Survey on Biomedical Text Summarization with Pre-trained Language Models ###### Abstract The exponential growth of biomedical texts such as biomedical literature and electronic health records (EHRs) poses a significant challenge for clinicians and researchers to access clinical information efficiently. To address the problem, biomedical text summarization has been proposed to support clinical information retrieval and management, aiming at generating concise summaries that distill key information from single or multiple biomedical documents. In recent years, pre-trained language models (PLMs) have been the de facto standard of various natural language processing tasks in the general domain. Most recently, PLMs have been further investigated in the biomedical field and brought new insights into the biomedical text summarization task. In this paper, we systematically summarize recent advances that explore PLMs for biomedical text summarization, to help understand recent progress, challenges, and future directions. We categorize PLMs-based approaches according to how they utilize PLMs and what PLMs they use. We then review available datasets, recent approaches and evaluation metrics of the task. We finally discuss existing challenges and promising future directions. To facilitate the research community, we line up open resources including available datasets, recent approaches, codes, evaluation metrics, and the leaderboard in a public project: [https://github.com/KenZLuo/Biomedical-Text-Summarization-Survey/tree/master](https://github.com/KenZLuo/Biomedical-Text-Summarization-Survey/tree/master). Biomedical texts, text summarization, pre-trained language models. ## 1 Introduction With the rapid growth of unstructured clinical information, such as biomedical literature [1] and clinical notes [2], it is a major challenge for researchers and clinicians to access the required information effectively.
To address this, text summarization techniques [3] have been explored in the biomedical domain to help users seek information more efficiently. Biomedical text summarization [4] aims to shorten single or multiple long biomedical documents into a condensed summary that keeps the most important semantic information. It saves much time and human effort for users, since they can grasp the main idea of long biomedical documents by quickly reading only the summary. It can be applied in various real applications, including but not limited to aiding evidence-based medicine [5], clinical information management [6], and clinical decision support [7]. In recent years, pre-trained language models (PLMs) [8], which serve as the paradigm for various natural language processing tasks, have been introduced into biomedical text summarization [9, 10]. Compared with existing methods, such as graph-based ranking methods [11], traditional machine learning methods [12] and deep learning methods [13], self-supervised pre-training makes PLMs memorize common sense and lexical knowledge inherent in the training texts [14], which can be transferred to improve NLP tasks via fine-tuning. Without manually annotated data, PLMs can greatly boost the performance of various NLP tasks via knowledge transfer where large-scale unlabeled data is available, such as text summarization in the biomedical domain. In these methods, general-domain PLMs such as BERT [8] or domain-specific PLMs such as BioBERT [15] are employed as the backbone model for encoding input texts. They are further fine-tuned with the specific loss of the biomedical text summarization task on unstructured biomedical datasets. This allows the semantic knowledge captured in PLMs to be transferred to the biomedical text summarization task, resulting in more conclusive and informative summaries.
Although there have been previous surveys of traditional machine learning and deep learning techniques for biomedical text summarization [4, 16, 17, 18], there has been no effort to summarize and track the recent development of PLMs on the biomedical text summarization task. To fill the gap, this paper surveys recent work that utilizes PLMs for the biomedical text summarization task. We systematically review benchmark datasets, recent approaches, and evaluation methods of the task. We categorize and discuss existing methods according to how they use PLMs (feature-based, fine-tuning based, and adaptation+fine-tuning based), and what PLMs they use (encoder-based PLMs, decoder-based PLMs, and encoder-decoder based PLMs). We hope this paper can be a timely survey for researchers in the research community to quickly track recent progress, challenges, and promising future directions. The main contributions of this survey are:

* We propose a comprehensive review of biomedical text summarization with pre-trained language models. To the best of our knowledge, this is the first review that surveys recent PLMs-based methods.
* We categorize and discuss recent approaches, benchmark datasets, and evaluation methods thoroughly.
* We discuss the challenges of existing approaches and outline promising future directions.

**Compared with existing surveys** Afantenos et al. [16] was the earliest survey that summarized traditional natural language processing and machine learning methods for medical document summarization. Mishra et al. [4] reviewed text summarization methods for biomedical literature and electronic health records (EHRs) between January 2000 and October 2013. Pivovarov et al. [17] examined automated summarization methods for electronic health records. Most methods summarized in these surveys are traditional machine learning methods based on feature engineering.
With the prosperity of deep learning since 2014, deep neural networks became the mainstream method for biomedical text summarization. Recently, Wang et al. [18] investigated deep learning-based text summarization approaches for both biomedical literature and EHRs between January 2013 and April 2021. Nevertheless, research using PLMs for biomedical text summarization was not included in that survey. Although there have been previous efforts summarizing PLMs-based methods for biomedical natural language processing [19], they only briefly introduced PLMs for the biomedical text summarization task as one of various tasks. Compared with them, we provide a more comprehensive and focused overview of PLMs for biomedical text summarization, including benchmark datasets, evaluation metrics, and limitations. **Paper collection** We collect representative works since 2018 that were published in conferences and journals of computer science and biomedical science, such as ACL, EMNLP, COLING, NAACL, AAAI, Bioinformatics, BioNLP, JAMIA, AMIA, and NPJ Digital Medicine. We use PubMed and Google Scholar as the search engines and databases. We search with keywords including "biomedical summarization", "medical summarization", "clinical summarization", and "medical dialogue summarization". **Organization of the paper** We first introduce the background of biomedical text summarization and pre-trained language models in Section 2. Section 3 describes benchmark datasets. Representative PLMs-based methods are categorized and discussed in Section 4. We introduce evaluation methods in Section 5. We then discuss limitations and future directions in Section 6. Finally, we draw conclusions in Section 7. Figure 1 shows the proposed overview of biomedical text summarization with pre-trained language models.
## 2 Background In this section, we first review biomedical text summarization and pre-trained language models, which are two essential concepts used in this survey. The overview of the background section is shown in Figure 2. ### _Biomedical Text Summarization_ Biomedical text summarization aims to shorten single or multiple biomedical documents into a condensed summary that preserves the most important information from the original text. In general, automated summarization approaches are divided into extractive summarization methods [20] and abstractive summarization methods [21] according to the form of the output summaries, as shown in Figure 3. Extractive methods select key sentences from original documents and concatenate them into a summary, while abstractive methods generate new sentences as the summary based on the original documents. Compared with extractive summarization, abstractive summarization is more challenging. It is difficult for automated abstractive methods to generate factually consistent summaries, since the task involves generating informative sentences from a large vocabulary, lexical and syntactic adjustment, and paraphrasing. Formally, let \(C\) be a biomedical corpus with \(D\) documents, where \(d\in C\) is a document consisting of \(m\) sentences: \(d=\{s_{1},\cdots,s_{m}\}\). We denote the gold summary of the document \(d\) as \(t_{d}\). For biomedical scientific papers, abstracts of papers are generally deemed their gold summaries. Automatic biomedical summarization methods are largely facilitated and inspired by automatic methods in the general domain. The earliest methods are traditional machine learning methods such as the Naive Bayes classifier [22], and graph-based ranking methods such as TextRank [11]. With the prosperity of deep learning since 2014, neural network methods have been the mainstream method for both extractive and abstractive summarization of biomedical texts.
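To make the extractive setting concrete, a toy sentence-scoring summarizer can be written in a few lines. Word-frequency scoring is a crude stand-in for the graph-based and learned scorers cited above; the function name and the example document are ours:

```python
import re
from collections import Counter

def extract_summary(sentences, k=2):
    """Toy extractive summarizer: score each sentence by the average document
    frequency of its words, then keep the top-k sentences in original order."""
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    freq = Counter(w for toks in tokenized for w in toks)
    scores = [sum(freq[w] for w in toks) / max(len(toks), 1) for toks in tokenized]
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return [sentences[i] for i in top]

doc = [
    "Biomedical text summarization shortens long biomedical documents.",
    "The weather was pleasant yesterday.",
    "Summarization methods select key biomedical sentences from documents.",
]
summary = extract_summary(doc, k=2)
```

The off-topic sentence scores lowest because its words occur nowhere else in the document, so the two topical sentences form the summary.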
Neural extractive methods [20] formulate the extractive task as a binary classification problem that predicts labels (1 or 0) for sentences in the original documents in order to select sentences, while neural abstractive methods [23] model the abstractive task as a text generation problem that generates new sentences as the summary based on the sequence-to-sequence [24] framework. **Extractive summarization** For document \(d\), extractive summarization methods aim to select a subset of \(o\) sentences from \(d\), \(o\ll m\). Neural extractive methods can be classified into unsupervised methods and supervised methods. The unsupervised methods model the extractive task as a sentence ranking problem. They generate sentence representations based on word embeddings, and use unsupervised ranking methods to select important sentences based on their representations. For supervised methods, since most corpora adopt human-written abstractive summaries as the gold summaries, whose sentences are not in the original documents, these methods must first generate binary labels for sentences according to the gold summaries in order to train the extractive models. To this end, they generally adopt unsupervised sentence selection methods such as the greedy search algorithm [20] to generate the oracle summary for each document, with sentences that are most semantically similar to the gold summary. Therefore, sentences that are included in the oracle summary are labeled with \(1\), while the remaining sentences are labeled with \(0\). Most supervised neural extractive methods consist of a neural network-based encoder and a classifier. The encoder is used to capture the contextual information of input documents and generate vector representations of sentences. The classifier predicts labels of sentences according to their vector representations.
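The greedy oracle-label construction can be sketched as follows. We use unigram F1 as a cheap stand-in for the ROUGE-style similarity typically used in practice; the function names and the toy example are ours, not from any cited system:

```python
import re
from collections import Counter

def toks(text):
    return re.findall(r"[a-z]+", text.lower())

def unigram_f1(candidate, gold):
    """Unigram-overlap F1, a cheap stand-in for ROUGE when scoring candidates."""
    c, g = Counter(candidate), Counter(gold)
    overlap = sum((c & g).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(g.values())
    return 2 * prec * rec / (prec + rec)

def greedy_oracle_labels(sentences, gold_summary, max_sents=3):
    """Greedily add the sentence that most improves overlap with the gold
    summary; stop when no sentence improves the score. Selected sentences
    get label 1 (the oracle summary), the rest get label 0."""
    gold = toks(gold_summary)
    selected, labels, best = [], [0] * len(sentences), 0.0
    for _ in range(max_sents):
        scored = [(unigram_f1(toks(" ".join(selected + [s])), gold), i)
                  for i, s in enumerate(sentences) if not labels[i]]
        if not scored:
            break
        score, i = max(scored)
        if score <= best:
            break
        best, labels[i] = score, 1
        selected.append(sentences[i])
    return labels

sentences = [
    "PLMs improve summarization quality.",
    "I like tea.",
    "Summarization of biomedical text benefits from PLMs.",
]
labels = greedy_oracle_labels(sentences, "PLMs improve biomedical summarization")
```

The resulting 0/1 labels are exactly the supervision signal used to train the sentence classifier described above.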
The objective is to maximize the log-likelihood of the observed sentence labels: \[\log p(y|C;\theta)=\sum_{d\in C}\sum_{i=1}^{m}\log p(y_{i}^{d}|d;\theta) \tag{1}\] where \(y_{i}^{d}\) is the ground-truth label of sentence \(s_{i}\) in document \(d\) and \(\theta\) is the parameter set of the model.

**Abstractive summarization** Neural abstractive methods formulate the abstractive task as a sequence-to-sequence learning problem. Most of them utilize the encoder-decoder framework [24], which consists of a neural network-based encoder and decoder. As in extractive methods, the encoder yields vector representations of the input documents; the decoder then generates the target summary sequentially from the encoder's representations. The model is optimized to maximize the log-likelihood of the target words in the gold summary: \[\log p(t|C;\theta)=\sum_{d\in C}\sum_{i=1}^{n}\log p(t_{i}^{d}|d;\theta) \tag{2}\] where \(t_{i}^{d}\) is the \(i\)-th of the \(n\) words in the gold summary \(t^{d}\) of document \(d\), with \(n\ll m\).

Most recently, PLMs have become the new paradigm of biomedical summarization. PLM-based methods use a framework similar to that of neural methods for the extractive and abstractive tasks, but PLMs are more powerful than plain neural networks at encoding biomedical texts. We next introduce pre-trained language models.

### _Pre-trained Language Models_

Language model pre-training [25, 26] has long been an active research area, with the aim of learning low-dimensional vector representations of natural language that are applicable and generalizable to downstream tasks. The earliest unidirectional neural language models, such as word2vec [26] and GloVe [27], learn meaningful word embeddings by estimating the probability of the next word given the sequence of history words. Bidirectional language models such as ELMo [28] were then proposed to further consider the bidirectional context of words.
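The next-word-probability view mentioned above can be made concrete with a count-based bigram model, the simplest maximum-likelihood language model; the tiny corpus is illustrative only:

```python
import math
from collections import Counter

corpus = "the patient has fever . the patient takes aspirin .".split()

# Maximum-likelihood bigram estimates: p(next | previous) from counts.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def p_next(prev, word):
    return bigrams[(prev, word)] / unigrams[prev]

# Log-likelihood of a held-out phrase under the chain rule,
# i.e., a sum of log p(x_t | history) terms.
seq = "the patient has fever".split()
ll = sum(math.log(p_next(a, b)) for a, b in zip(seq, seq[1:]))
print(round(ll, 3))  # log 1 + log 0.5 + log 1
```

Neural language models replace the count table with a parameterized network, but the training objective is this same sum of conditional log-probabilities.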
Bidirectional Encoder Representations from Transformers (BERT) [8] is the breakthrough work that advanced the state of the art on various NLP tasks. BERT and its variants generally involve two steps: pre-training and fine-tuning. A deep model built on a basic neural network structure such as the Transformer [29] is first pre-trained on a large scale of unlabeled data with a self-supervised learning task. The pre-trained parameters and task-specific parameters are then fine-tuned on labeled data for downstream tasks. We next illustrate the core components of PLMs along with pre-training and fine-tuning; for more details on PLMs, one can check the review [30].

**Model architecture** Early language models such as ELMo and its predecessors [28, 31] generally utilize the Bi-LSTM [32] as the backbone network structure to capture bidirectional contextual information of texts. However, the Bi-LSTM is hard to parallelize, since its computation is sequential in the length of the input. One breakthrough is the Transformer [29], a self-attention-based neural network architecture that parallelizes computation and models long-range dependencies of sequences efficiently. The Transformer follows the encoder-decoder architecture with stacked multi-head self-attention and a point-wise fully connected feed-forward network. Since then, nearly all pre-trained language models have used the Transformer architecture. To learn better representations, they usually have deep network architectures; for example, the base model of BERT has 12 Transformer layers with hidden size 768 and 12 self-attention heads. According to their model architectures, existing PLMs can be categorized into three types: bi-directional encoder language models, auto-regressive decoder language models, and encoder-decoder language models, as shown in Figure 4.

Fig. 1: Overview of biomedical text summarization with pre-trained language models.
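A back-of-the-envelope check on the BERT-base configuration above: the values below are the published BERT-base hyperparameters, and the count ignores the pooler and a few minor terms, so it lands slightly under the usually quoted ~110M parameters:

```python
# BERT-base configuration: 12 layers, hidden 768, 12 heads, FFN 3072,
# WordPiece vocabulary of 30,522 tokens, 512 positions.
L, h, heads, ffn = 12, 768, 12, 3072
vocab, max_pos = 30522, 512

head_dim = h // heads                    # per-head dimensionality
attn = 4 * h * h + 4 * h                 # Q, K, V, output projections + biases
feedfwd = 2 * h * ffn + h + ffn          # two FFN projections + biases
norms = 2 * 2 * h                        # two LayerNorms (scale + shift)
per_layer = attn + feedfwd + norms

embed = vocab * h + max_pos * h + 2 * h  # token + position + segment tables

total = L * per_layer + embed
print(head_dim, total)                   # 64 heads-dim, roughly 1.09e8 params
```

Most of the parameters sit in the 12 stacked layers (~85M) and the token embedding table (~23M), which is why "deep architecture" and "large vocabulary" dominate the model size.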
The bi-directional encoder language models, i.e., BERT and its variants such as RoBERTa [33], use the Transformer encoder without the decoder structure. The auto-regressive decoder language models, such as the GPT series [34, 35, 36], pre-train only auto-regressive decoders based on the Transformer architecture. Different from these two types, encoder-decoder language models such as BART [37] pre-train the full encoder-decoder Transformer architecture.

**Training corpora** Most PLMs use corpora from the general domain, such as BooksCorpus [38] and Wikipedia. To handle out-of-vocabulary words, they split words into sub-words and build the vocabulary via the Byte-Pair Encoding (BPE) [39] or WordPiece [40] methods.

**Pre-training** As the first step of building PLMs, the pre-training task on a large scale of unlabeled data is key for language models to learn useful representations and parameters, which can then be fine-tuned for downstream tasks. The pre-training task of most earlier language models follows the unidirectional language model [25], which aims to maximize the log-likelihood of each word conditional on its history words: \[\mathcal{L}_{lm}=-\sum_{t=1}^{T}\log p(x_{t}|x_{1},x_{2},\cdots,x_{t-1}) \tag{3}\] where \(X=\{x_{1},\cdots,x_{T}\}\) is a given text sequence with \(T\) words. The bidirectional language model was further proposed to capture contextual information of text from both directions, combining a left-to-right and a right-to-left language model: \[\begin{split}\mathcal{L}_{blm}=&-\sum_{t=1}^{T}(\log p(x_{t}|x_{1},x_{2},\cdots,x_{t-1})\\ &+\log p(x_{t}|x_{t+1},x_{t+2},\cdots,x_{T}))\end{split} \tag{4}\] Different from the bidirectional language model, PLMs such as BERT utilize the masked language model (MLM), which enables bi-directional self-supervised pre-training more efficiently.

Fig. 2: Overview of background.

Fig. 3: An example of extractive and abstractive biomedical text summarization.
It randomly selects 15% of the tokens of the input text to predict; of these, 80% are replaced with the special token "[MASK]", 10% are replaced with random words from the vocabulary, and the remaining 10% are left unchanged. The objective is to maximize the log-likelihood of the ground-truth words at the selected positions given the masked text sequence: \[\mathcal{L}_{mlm}=-\sum_{\hat{x}\in m(x)}\log p(\hat{x}|X_{m}) \tag{5}\] where \(X_{m}\) is the masked text sequence and \(m(x)\) is the set of masked words. Pre-training with MLM guides the language model to fully capture the contextual information embedded in the token sequence and to generate more expressive representations at different levels, such as tokens and sentences.

**Fine-tuning** Although self-supervised pre-training on a large-scale corpus allows language models to store commonsense and linguistic knowledge in their pre-trained parameters and contextual representations, it is still essential to adapt the model and its representations to downstream tasks via fine-tuning with task-specific objectives and datasets. Task-specific inputs are first fed into the pre-trained language model to yield contextual representations. Different tasks are usually formulated as different problems, such as classification, regression, or generation, and therefore require choosing contextual representations at the appropriate level and stacking different task-specific layers on top of the language model. For example, for the extractive summarization task, previous research generally appends an extra classification layer that predicts sentence labels based on the sentence representations from the PLM. By optimizing the model with the classification loss, the parameters of the PLM and the task-specific parameters are refined.
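The MLM corruption scheme above can be sketched in a few lines; the sentence and the tiny replacement vocabulary are toy data, while the 15%/80%/10%/10% split follows BERT's published recipe:

```python
import random

def mask_for_mlm(tokens, vocab, rng):
    """BERT-style MLM corruption: pick ~15% of positions to predict;
    of those, 80% -> [MASK], 10% -> a random word, 10% -> unchanged.
    Returns the corrupted tokens and {position: original token}."""
    corrupted, targets = list(tokens), {}
    n_select = max(1, round(0.15 * len(tokens)))
    for i in rng.sample(range(len(tokens)), n_select):
        targets[i] = tokens[i]
        r = rng.random()
        if r < 0.8:
            corrupted[i] = "[MASK]"
        elif r < 0.9:
            corrupted[i] = rng.choice(vocab)
        # else: keep the original token (it is still predicted)
    return corrupted, targets

rng = random.Random(0)
tokens = "the patient was given aspirin to reduce the fever".split()
corrupted, targets = mask_for_mlm(tokens, ["cough", "dose", "trial"], rng)
print(corrupted, targets)
```

The model is trained to recover `targets` from `corrupted`; keeping 10% of the selected tokens unchanged discourages the model from assuming that every non-"[MASK]" token is correct.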
The general linguistic and semantic knowledge in pre-trained language models is thus transferred into task-specific representations via fine-tuning, which has shown great performance on various tasks and has become the standard paradigm for them.

### _Biomedical Language Models_

Inspired by the great success of PLMs on NLP tasks, much attention has been devoted to applying PLMs to tasks in the biomedical domain, including biomedical text summarization. However, most advanced pre-trained language models, including BERT and its variants, GPT-3 [36], T5 [41], etc., are pre-trained on general-domain texts such as BooksCorpus [38] and Wikipedia. It is challenging to apply these models directly to biomedical texts. The greatest issue is the terminology and compound words of biomedical texts, most of which are never mentioned in general-domain texts. These domain-specific tokens, not covered by the vocabulary of general-domain PLMs, nevertheless carry the salient information that is fundamental for understanding biomedical texts. To fill the gap, many pre-trained language models for the biomedical domain, such as BioBERT [15], BlueBERT [42], and ClinicalBERT [43], have been proposed that further pre-train general-domain PLMs on biomedical texts. BioBERT [15] is the first biomedical language model; it further pre-trains BERT on biomedical scientific texts, including PubMed abstracts (PubMed) and PubMed Central full-text articles (PMC). Following it, BlueBERT [42] conducts continual pre-training on the clinical text corpus MIMIC-III [44] along with biomedical scientific texts. Yet both still adopt the same vocabulary as BERT, which limits their ability to model the semantic information of biomedical texts. Different from them, SciBERT [45] builds a domain-specific vocabulary from scratch and pre-trains on scientific literature, of which 18% of the articles are from the computer science domain and 82% from the biomedical domain.
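The vocabulary issue above is easy to see with a toy greedy longest-match subword tokenizer. The algorithm is WordPiece-style (continuation pieces prefixed with `##`), but both vocabularies below are made up for illustration: a general-domain vocabulary shatters a clinical term into many pieces, while a domain vocabulary that contains the whole word keeps it intact:

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first subword tokenization (WordPiece-style)."""
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub          # mark continuation pieces
            if sub in vocab:
                cur = sub
                break
            end -= 1                      # shrink until a piece matches
        if cur is None:
            return ["[UNK]"]
        pieces.append(cur)
        start = end
    return pieces

general = {"thr", "##omb", "##ocy", "##top", "##enia"}
biomedical = general | {"thrombocytopenia"}
print(wordpiece("thrombocytopenia", general))     # fragmented
print(wordpiece("thrombocytopenia", biomedical))  # kept whole
```

A term split into five generic fragments forces the model to reassemble its meaning from pieces that also occur in unrelated words, which is one motivation for building a domain-specific vocabulary as SciBERT and PubMedBERT do.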
PubMedBERT [46] pre-trains its model solely on scientific papers from the biomedical domain. Compared with PLMs pre-trained on the general domain, biomedical PLMs pre-trained on biomedical texts can provide better representations that capture the contextual information of both common tokens and domain-specific terms. One can check the survey paper [19] for more details on pre-trained language models for the biomedical domain.

Fig. 4: Comparison of different language models. Bi-directional encoder language models use only the encoder portion of the Transformer and predict masked tokens independently given the masked input sequence; they are hard to use for generation tasks. Auto-regressive decoder language models are uni-directional and use only the decoder portion of the Transformer; they predict tokens auto-regressively and can therefore be used for generation tasks. Encoder-decoder language models use the full encoder-decoder Transformer architecture; they are more flexible, since there is no need to align the input tokens of the encoder with the output tokens of the decoder.

## 3 Datasets

Unstructured biomedical texts used in summarization methods come in various types, including biomedical literature, electronic health records (EHRs), medical conversations, and medical questions, as shown in Figure 5. Details of these datasets are summarized in Table I.

**Biomedical Literature** With the exponential growth of scientific papers, developing automated summarization tools for biomedical articles has long attracted much attention. These texts are usually written by domain experts such as researchers and physicians. Compared with general-domain texts such as social media or news texts, they are less noisy and generally organized into standard sections such as "Introduction", "Methods", and "Results".
For single-document summarization, PubMed [47] is one of the most commonly used datasets for summarization of long biomedical texts. It consists of 133K scientific papers collected from the PubMed open access repositories1. It has been used as a benchmark dataset and is widely evaluated by both general-domain and biomedical text summarization methods. Note that Zhong et al [57] further adapt the dataset to use only the introduction of each paper as input. To distinguish these two settings, we refer to the original PubMed dataset, which uses the full content of documents, as PubMed-Long2, and to the adaptation of Zhong et al [57] as PubMed-Short. Following it, the recently proposed SumPubMed [48] includes 33,772 documents from BioMed Central (BMC) of the PubMed archive3.

Fig. 5: Overview of datasets.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline **Dataset** & **Category** & **Size** & **Content** & **Summarization Task** \\ \hline PubMed [47] & Biomedical literature & 133,215 & Full contents of articles & Single \\ SumPubMed [48] & Biomedical literature & 33,772 & Full contents of articles & Single \\ S2ORC [49] & Biomedical literature & 63,709 & Full contents of articles & Single \\ CORD-19 [50] & Biomedical literature & - & Full contents of articles & Single \\ CDSR [51] & Biomedical literature & 7,805 & Abstracts of articles & Single \\ RCT [10] & Biomedical literature & 4,528 & Titles and abstracts of articles & Multiple \\ MS2 [9] & Biomedical literature & 470,402 & Abstracts of articles & Multiple \\ MIMIC-CXR [52] & EHRs & 124,577 & Full contents of reports & Single \\ OpenI [53] & EHRs & 3,599 & Full contents of reports & Single \\ HET-MC [54] & Medical conversation & 109,850 & Multi-turn conversations & Single \\ MeQSum [55] & Medical question & 1,000 & Consumer health question & Single \\ CHQ-Summ [56] & Medical question & 1,507 & Full contents of question & Single \\ \hline \hline \end{tabular} \end{table} TABLE I: Biomedical text summarization datasets.

Bishop et al [49], [58] extract a subset of the large scientific corpus S2ORC [59] to build the S2ORC4 dataset, which includes 63,709 articles from the biological and biomedical domains. Most recently, the COVID-19 Open Research Dataset (CORD-195) [50] has attracted much attention; developing summarization systems for it would facilitate relevant research and help against the COVID-19 pandemic. CORD-19 contains millions of papers related to COVID-19, SARS-CoV-2, and other coronaviruses. Moreover, Guo et al [51] collected the CDSR6 dataset to support lay language summarization of biomedical scientific reviews, a special kind of single-document summarization that aims to generate a plain-language abstract for lay people from the professional abstract written by experts. It contains 7,805 abstract pairs of biomedical scientific reviews, in which the professional abstracts of systematic reviews are the inputs and their corresponding plain-language abstracts are the target summaries.
Footnote 3: [https://github.com/vgupta123/sumpubmed](https://github.com/vgupta123/sumpubmed) Footnote 4: [https://github.com/lbshp/GenCompareSum](https://github.com/lbshp/GenCompareSum) Footnote 5: [https://github.com/allenai/cord19](https://github.com/allenai/cord19) Footnote 6: [https://github.com/qiweiwle/Plain_language_summarization](https://github.com/qiweiwle/Plain_language_summarization) Footnote 7: [https://github.com/bwallace/RCT-summarization-data](https://github.com/bwallace/RCT-summarization-data) Footnote 8: [https://pubmed.ncbi.nlm.nih.gov](https://pubmed.ncbi.nlm.nih.gov) Footnote 9: [https://www.cochranelibrary.com/](https://www.cochranelibrary.com/) Footnote 10: [https://github.com/allenai/ms2/](https://github.com/allenai/ms2/) Footnote 11: [https://openi.nlm.nih.gov/faq/collection](https://openi.nlm.nih.gov/faq/collection) Footnote 12: [https://physionet.org/content/mimic-cxr/2.0.0/](https://physionet.org/content/mimic-cxr/2.0.0/)

As for multi-document summarization in the biomedical domain, Wallace et al [10] build the RCT7 summarization dataset with 4,528 data samples retrieved from PubMed8. The input of each data sample includes the titles and abstracts of related papers describing randomized controlled trials (RCTs), while the conclusion section of the corresponding systematic review from Cochrane9 is treated as the target summary. Similarly, DeYoung et al [9] developed MS210 for multi-document summarization of medical studies; it collects 470K papers from Semantic Scholar and 20K reviews that summarize these papers.

**Electronic Health Records** Electronic health records have been widely adopted by hospitals to store and manage patients' medical information, such as diagnostic codes, medications, laboratory results, and clinical notes. They are likewise written by professionals, in professional language and with a specific structure.
Different from scientific papers, which are generally freely accessible, EHRs may have restrictions on public access due to privacy issues. Several publicly available datasets have been released to support automated summarization of radiology reports, where the goal is to automatically generate the impression section, which should highlight the key observations from the findings and background of the report. Demner et al [53] collected the OpenI11 dataset, containing 3,996 chest x-ray reports from hospitals within the Indiana Network. Compared with OpenI, MIMIC-CXR [52] is a larger publicly available dataset, including 107,372 radiology reports from the Beth Israel Deaconess Medical Center Emergency Department between 2011 and 201612.

**Medical Conversations** Medical conversations between patients and doctors on online healthcare systems have become an important source of medical information with the increasing use of telemedicine. Automatically summarizing the key medical information in long medical conversations can save doctors much time and improve healthcare efficiency. Medical conversations usually involve multi-turn interactions between two parties: patients ask questions about their health problems and describe their symptoms, while doctors ask for detailed symptoms and provide diagnostic suggestions. Similar to EHRs, access to medical conversations on telemedicine platforms may be restricted due to privacy concerns. Moreover, building supervised data is time-consuming and expensive, since it requires professionals to write target summaries manually. Up to now, although several advanced methods with PLMs for medical conversation summarization have been proposed [54], [60], [61], [62], [63], publicly available datasets remain limited.
Song et al [54] proposed a Chinese medical conversation summarization dataset13 with 109,850 conversations from an online health platform14.

Footnote 13: [https://github.com/cuhksz-nlp/HET-MC](https://github.com/cuhksz-nlp/HET-MC) Footnote 14: [https://www.chunyuyisheng.com/](https://www.chunyuyisheng.com/) Footnote 16: [https://github.com/shwetanlp/Yahoo-CHQ-Summ](https://github.com/shwetanlp/Yahoo-CHQ-Summ)

**Medical Questions** Consumer health questions produced by healthcare consumers on the web, e.g., in health forums, are another important source of clinical information. To find trustworthy answers to their health questions, consumers may query the web with long natural-language questions full of peripheral details. This peripheral information is useless for finding high-quality answers, so summarizing consumer health questions into concise texts that retain the salient information is quite useful for improving the efficiency of medical question answering. Abacha et al [55] build the MeQSum corpus with 1,000 consumer health questions as inputs and their manual summaries written by three medical experts. Most recently, Yadav et al [56] introduced another dataset, CHQ-Summ16, which includes 1,507 consumer health questions and their expert-annotated summaries. Different from other texts, such as biomedical papers with thousands of words, consumer-health question-summary pairs are short: in CHQ-Summ, the average lengths of questions and summaries are 200 words and 15 words, respectively.

## 4 PLMs for Biomedical Text Summarization

Given these biomedical text datasets, many methods have been proposed to explore how to make better use of PLMs for the biomedical summarization task.
Different from previous reviews [16], [17], [18] of traditional and deep learning methods, which classify methods according to their inputs (multiple document/single document) and outputs (extractive/abstractive), we focus on how PLMs are leveraged in recent research to improve biomedical text summarization. We therefore first divide methods into three major categories according to the way they introduce PLMs into biomedical summarization: feature-based, fine-tuning-based, and domain-adaptation-with-fine-tuning-based. We then further categorize methods according to the structure of the PLMs and the datasets they adopt. As shown in Figure 6, feature-based methods use the contextual representations from PLMs without refining the PLMs' pre-trained parameters; the representations from PLMs are generally concatenated with representations from a trainable encoder to produce the output. Although freezing the PLM parameters saves time, performance is limited because the PLM receives no task-specific supervision. Fine-tuning-based methods generally take the PLM as the text encoder, whose representations are fed directly into the output layer; the parameters of the PLM and the task-specific parameters are fine-tuned according to the task loss. Compared with feature-based methods, they usually require much more computation and time but yield improved, promising performance. Domain-adaptation-with-fine-tuning-based methods first adapt the PLM to the domain by continually pre-training it with designed tasks on the target data, and then fine-tune the adapted PLM along with task-specific layers. Domain adaptation allows the PLM to capture domain-specific knowledge better than a PLM pre-trained only on general-domain texts, resulting in better summarization performance. Next, we review and discuss these methods in more detail, as shown in Table II.
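The three strategies above differ mainly in which parameter groups are updated and in how many training phases are run. A schematic (the group names `plm` and `task_head` are illustrative labels, not from any particular framework's API):

```python
def trainable_groups(strategy):
    """Which parameter groups are updated, per training phase, under
    the three strategies surveyed. Illustrative sketch only."""
    if strategy == "feature-based":
        return [{"task_head"}]                  # PLM frozen, one phase
    if strategy == "fine-tuning":
        return [{"plm", "task_head"}]           # everything, one phase
    if strategy == "domain-adaptation + fine-tuning":
        return [{"plm"},                        # phase 1: continued pre-training
                {"plm", "task_head"}]           # phase 2: supervised fine-tuning
    raise ValueError(strategy)

for s in ("feature-based", "fine-tuning", "domain-adaptation + fine-tuning"):
    print(s, "->", trainable_groups(s))
```

Reading Table II against this sketch: the "Strategy" column says how many phases a method runs and whether the PLM's own parameters ever receive task gradients.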
### _Feature-based Methods_

Feature-based methods take the contextualized representations of PLMs as extra features, which can provide contextual semantic information ignored by traditional text encoders.

**Bi-directional encoder language models** For biomedical literature, Moradi et al [64] propose an unsupervised extractive summarizer based on hierarchical clustering and PLMs. They cluster sentences based on sentence representations from PLMs and then select top sentences from the clusters with a ranking method. They investigate different versions of BERT and BioBERT for producing sentence representations. The proposed methods outperform traditional unsupervised methods such as TextLexAn17. They find that all versions of BioBERT (BioBERT-pmc, BioBERT-pubmed, BioBERT-pubmed+pmc) outperform BERT-base but underperform BERT-large. Moradi et al [65] propose a graph-ranking-based method for biomedical text summarization. They use the contextualized embeddings of BioBERT to represent sentences and build a graph for each text; important sentences are identified from the text graph with a graph ranking algorithm. The model based on BioBERT-pubmed+pmc achieves better performance than models based on the other two versions of BioBERT, BioBERT-pmc and BioBERT-pubmed.

Footnote 17: [http://textexan.sourceforge.net](http://textexan.sourceforge.net)

As for clinical notes, Gharebagh et al [66] develop a supervised method for abstractive summarization of clinical notes. They incorporate the contextual embeddings from BERT as the input embeddings of a Bi-LSTM-based encoder, which performs better than classical sequence-to-sequence methods based on recurrent neural networks. Yan et al [67] propose RadBERT, a radiology-specialized language model pre-trained on millions of radiology reports.
On unsupervised extractive summarization of radiology reports, RadBERT achieves better performance than other language models, including BERT, BioBERT, ClinicalBERT, BlueBERT, and BioMed-RoBERTa [88]. Among all variants (RadBERT-BERT-base, RadBERT-RoBERTa, RadBERT-ClinicalBERT, etc.), RadBERT-BioMed-RoBERTa performs best.

**Auto-regressive decoder language models** Bishop et al [49] present GenCompareSum, an unsupervised extractive summarization method for biomedical literature. GenCompareSum uses the T5 generative model to generate key snippets for text sections and selects important sentences according to the BERTScore [89] between the key snippets and the sentences. It outperforms traditional unsupervised methods such as LexRank [90] and TextRank [11], and also the SOTA supervised method BERTSum.

**Encoder-decoder language models** Su et al [68] propose a query-focused multi-document summarizer for COVID-19 articles that can generate both abstractive and extractive summaries based on user queries. For extractive summarization, they propose a feature-based method with ALBERT [91] that generates sentence representations and computes the cosine similarity between sentences and queries to select important sentences.

Fig. 6: Comparison of different strategies for using pre-trained language models.

### _Fine-tuning based Methods_

Fine-tuning is the most common way to use PLMs for downstream tasks. Different from feature-based methods, which freeze the parameters of PLMs, fine-tuning-based methods refine all parameters of the PLM along with task-specific parameters.

**Bi-directional encoder language models** For biomedical literature, Park et al [69] present the ContinualBERT model for adaptive extractive summarization of COVID-19-related literature. ContinualBERT trains two BERT models with continual learning in order to process texts online. It shows better performance than the SOTA extractive method BERTSum [92].
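The cluster-then-rank recipe behind the unsupervised feature-based summarizers discussed earlier can be sketched in a few lines. The toy 2-D vectors stand in for PLM sentence embeddings, and the ranking step here simply takes the sentence nearest each cluster centroid; real systems use richer similarity and ranking functions:

```python
import math

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def cluster_and_pick(sent_vecs, k=2, iters=10):
    """K-means over sentence embeddings, then pick the sentence nearest
    each centroid as the summary (indices in document order, k >= 2)."""
    n = len(sent_vecs)
    # Deterministic init: evenly spaced sentences as seed centroids.
    cents = [sent_vecs[round(i * (n - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in sent_vecs:
            j = min(range(k), key=lambda j: math.dist(v, cents[j]))
            groups[j].append(v)
        cents = [centroid(g) if g else cents[j] for j, g in enumerate(groups)]
    picks = {min(range(n), key=lambda i: math.dist(sent_vecs[i], c)) for c in cents}
    return sorted(picks)

# Pretend PLM embeddings: two topical clusters in 2-D.
vecs = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
print(cluster_and_pick(vecs, k=2))  # -> [0, 2]
```

Choosing one representative per cluster is what gives these methods topical coverage: redundant sentences fall into the same cluster and only one of them reaches the summary.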
Du et al [70] propose the BioBERTSum model for extractive summarization of biomedical literature. It uses token, sentence, and position embeddings to embed input texts and then yields contextual representations of sentences with BioBERT. BioBERT and an extra classifier layer are fine-tuned with the cross-entropy loss. It demonstrates the advantage of using a domain-specific language model for biomedical texts and outperforms the SOTA method BERTSum, which uses general-domain BERT as the encoder. Cai et al [71] propose a SciBERT-based abstractive summarization model for COVID-19 scientific papers that uses the linguistic information of word co-occurrence, encoded by a graph attention network, to enrich the SciBERT encoder. They find that their method with the SciBERT encoder outperforms versions based on BERT and BioBERT encoders. As for clinical notes, Kanwal et al [72] propose a multi-head-attention-based method for extractive summarization. It fine-tunes BERT on the task of predicting and identifying ICD-9 labels on ICD-9-labeled MIMIC-III discharge notes; the attention scores of sentences from the last layer of the BERT model are then used to select sentences. Hu et al [73] propose a radiology report summarizer that uses BioBERT as the text encoder and randomly initialized Transformer layers as the decoder. They use a graph encoder and contrastive learning to incorporate extra knowledge to improve the BioBERT encoder, achieving the SOTA in radiology report summarization. Moreover, Song et al [54] propose the hierarchical encoder-tagger (HET) model for extractive summarization of medical conversations, which includes token-level and utterance-level encoders to encode long input transcripts.
They use the Chinese version of BERT as the token-level encoder.

\begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline **Paper** & **Strategy** & **Model** & **Category** & **Input** & **Output** & **Training** & **Data** \\ \hline Moradi et al [64] & feature-based & encoder & literature & single & extractive & unsupervised & - \\ Moradi et al [65] & feature-based & encoder & literature & single & extractive & unsupervised & - \\ Gharebagh et al [66] & feature-based & encoder & EHRs & single & abstractive & supervised & MIMIC-CXR, OpenI \\ RadBERT [67] & feature-based & encoder & EHRs & single & extractive & unsupervised & - \\ GenCompareSum [49] & feature-based & decoder & literature & single & extractive & unsupervised & PubMed, CORD-19, S2ORC \\ Su et al [68] & feature-based, fine-tuning & encoder, encoder-decoder & literature & multiple & hybrid & un+supervised & CORD-19 \\ ContinualBERT [69] & fine-tuning & encoder & literature & single & extractive & supervised & CORD-19 \\ BioBERTSum [70] & fine-tuning & encoder & literature & single & extractive & supervised & - \\ Cai et al [71] & fine-tuning & encoder & literature & single & abstractive & supervised & CORD-19, PubMed \\ Kanwal et al [72] & fine-tuning & encoder & EHRs & single & extractive & supervised & MIMIC-III \\ Hu et al [73] & fine-tuning & encoder & EHRs & single & abstractive & supervised & MIMIC-CXR, OpenI \\ HET [54] & fine-tuning & encoder & conversation & single & extractive & supervised & HET-MC \\ Esteva et al [6] & fine-tuning & decoder & literature & multiple & abstractive & supervised & CORD-19 \\ Chintagunta et al [61] & fine-tuning & decoder & conversation & single & abstractive & supervised & - \\ DeYoung et al [9] & fine-tuning & encoder-decoder & literature & multiple & abstractive & supervised & MS2 \\ Zhu et al [74] & fine-tuning & encoder-decoder & EHRs & single & abstractive & supervised & MIMIC-CXR, OpenI \\ Kondadadi et al [75] & fine-tuning & encoder-decoder & EHRs & single & abstractive & supervised & MIMIC-CXR, OpenI \\ Xu et al [76] & fine-tuning & encoder-decoder & EHRs, question & single & abstractive & supervised & MIMIC-CXR, OpenI, MeQSum \\ He et al [77] & fine-tuning & encoder-decoder & EHRs, question & single & abstractive & supervised & MIMIC-CXR, OpenI, MeQSum \\ Yadav et al [78] & fine-tuning & encoder-decoder & question & single & abstractive & supervised & MeQSum \\ CLUSTER2SENT [79] & fine-tuning & encoder-decoder & conversation & single & abstractive & supervised & - \\ Zhang et al [62] & fine-tuning & encoder-decoder & conversation & single & abstractive & supervised & - \\ Navarro et al [60] & fine-tuning & encoder-decoder & conversation & single & abstractive & supervised & - \\ BioBART [81] & fine-tuning & encoder-decoder & conversation & single & abstractive & supervised & - \\ KeBioSum [58] & adaptation + fine-tuning & encoder & literature & single & extractive & supervised & PubMed, CORD-19, S2ORC \\ Yalunin et al [62] & adaptation + fine-tuning & encoder & EHRs & single & abstractive & supervised & - \\ Mahai et al [63] & adaptation + fine-tuning & encoder & EHRs & single & abstractive & supervised & MIMIC-CXR, OpenI \\ Yadav et al [64] & adaptation + fine-tuning & encoder & question & single & abstractive & supervised & MeQSum \\ Kieuvongngam et al [65] & adaptation + fine-tuning & decoder & literature & single & hybrid & supervised & CORD-19 \\ Guo et al [51] & adaptation + fine-tuning & encoder-decoder & literature & single & hybrid & supervised & CDSR \\ Wallace et al [10] & adaptation + fine-tuning & encoder-decoder & literature & multiple & abstractive & supervised & RCT \\ BDKc [58] & adaptation + fine-tuning & encoder-decoder & EHRs & single & abstractive & supervised & MIMIC-CXR, OpenI \\ Mrini et al [67] & adaptation + fine-tuning & encoder-decoder & question & single & abstractive & supervised & MeQSum \\ \hline \hline \end{tabular} \end{table} TABLE II: Overview of Methods. Besides how PLMs are used and which PLMs are adopted, we also list other features of these methods: the type of output summary (extractive, abstractive, hybrid), the number of input documents (single, multiple), and the type of input documents (biomedical literature, radiology reports, medical dialogues, and medical questions). "-" denotes datasets that are not released.

**Auto-regressive decoder language models** Several abstractive methods focus on fine-tuning language models such as GPT-2 and GPT-3. They are generally based on the classical encoder-decoder framework and take auto-regressive decoder language models as the text decoder. For COVID-19-related biomedical articles, Esteva et al [6] design a parallel encoder-decoder framework for abstractive summarization of multiple COVID-19 articles that uses BERT as the encoder and GPT-2 as the decoder. To overcome the problem of limited training data, methods have been proposed that use few-shot learners such as GPT-3 or few-shot fine-tuning strategies. Auto-regressive decoder language models have also been introduced for medical dialogue summarization. Chintagunta et al [61] integrate medical knowledge with GPT-3: they use GPT-3 as the summary generator and choose the best summary as the one that captures the most medical concepts. They show that GPT-3 can be a promising backbone for generating high-quality training data that can be combined with human-annotated training data.

**Encoder-decoder language models** Encoder-decoder language models such as BART, T5, and PEGASUS [93], which are pre-trained with objective functions designed for abstractive text summarization, have been widely used for biomedical abstractive summarization. For biomedical literature, DeYoung et al [9] develop a BART-based method for multi-document summarization of medical studies. To encode multiple documents, they investigate two encoders.
One uses multiple BART encoders to encode the documents separately. The other uses LongformerEncoderDecoder (LED) [94], which can encode long inputs of up to 16K tokens. Su et al [68] also propose a query-focused multi-document summarizer for COVID-19 articles, which is able to generate abstractive and extractive summaries based on user queries; they fine-tune BART for multi-document abstractive summarization. As for EHRs, the MEDIQA 2021 Shared Task [95] at the BioNLP 2021 workshop introduces abstractive summarization tasks for radiology reports and medical questions, in which most participating teams propose methods based on encoder-decoder language models. Among the 14 teams that participate in the radiology report summarization task, 6 of them [74, 75, 76, 83] use pre-trained language models such as BERT, BART, and PEGASUS [93]. Most of them [74, 76] find that fine-tuning PEGASUS achieves the best performance, while Kondadadi et al [75] report that the best performance is achieved by BART. Moreover, they report that adapting PEGASUS on the PubMed corpus can lead to worse performance, possibly due to the gap between biomedical literature and medical reports. For medical conversation summarization, efforts have explored encoder-decoder language models to address challenges such as limited labeled data and long transcripts. Krishna et al [79] develop CLUSTER2SENT, an extractive-abstractive hybrid method that generates SOAP notes (long semi-structured clinical summaries) from doctor-patient conversations; the T5 model is used in its abstractive module. Zhang et al [62] leverage the BART model for automatic summarization of doctor-patient conversations with limited labeled training data. They propose a multistage fine-tuning strategy to address the input length limitation of BART, and find that fine-tuning BART can generate summaries of good quality even with limited training data. 
Moreover, they also find that the BART-based model significantly outperforms BigBird [96] based models initialized from RoBERTa-base and PEGASUS-Large. Navarro et al [80] explore fine-tuning BART, T5, and PEGASUS with zero-shot and few-shot learning strategies for medical dialogue summarization with small training data, and find that BART achieves the best performance among these PLMs. Yuan et al [81] develop BioBART, the first encoder-decoder pre-trained language model in the biomedical domain, which shows better performance on medical dialogue summarization than BART. Moreover, for medical questions, 22 teams participate in the medical question summarization task of the MEDIQA 2021 Shared Task [95] at the BioNLP 2021 workshop, and all methods are based on fine-tuning encoder-decoder language models. The best performance is achieved by an ensemble model [77] that re-ranks the summary outputs of multiple advanced encoder-decoder language models, including BART, T5, and PEGASUS. Yadav et al [78] present a reinforcement learning based framework for abstractive summarization of medical questions. They propose two reward functions, the Question-type Identification Reward (QTR) and the Question-focus Recognition Reward (QFR), which are optimized by learning an optimal policy defined by BERT. They show that the encoder-decoder language model ProphetNet [97] with the proposed reward functions performs better than other PLMs including T5, BART, and PEGASUS. ### _Domain Adaption with Fine-tuning based Methods_ There are summarization methods that conduct domain adaption before fine-tuning PLMs, to capture domain and task-specific information. **Bi-directional encoder language models** For biomedical literature, Xie et al [58] propose KeBioSum for extractive summarization. 
KeBioSum refines PLMs with domain adaption tasks that predict key entities and their types, based on a lightweight fine-tuning framework, aiming to incorporate fine-grained medical knowledge into PLMs. It shows that although biomedical language models such as BioBERT and PubMedBERT capture domain knowledge to some extent, fine-grained medical knowledge is still beneficial for improving language models. They find that PubMedBERT-based methods outperform methods based on BERT, RoBERTa, and BioBERT. As for EHRs, Yalunin et al [82] present an abstractive summarizer for patient hospitalization histories that uses Longformer [94] as the encoder and BERT as the decoder. They pre-train BERT and Longformer with the masked language modeling task on the hospitalization history dataset before task-specific fine-tuning. Dai et al [86] propose the BDKG method, which achieves the best performance on the MEDIQA 2021 Shared Task for radiology report summarization. It ensembles results from multiple language models (BART, DistilBERT [98], PEGASUS) and uses further strategies including domain adaption and text normalization. Moreover, for medical questions, Yadav et al [84] investigate incorporating the knowledge of "question-focus" and "question-type" into PLMs for abstractive summarization of consumer health questions. To induce PLMs to capture this knowledge, they adapt PLMs with designed Cloze tasks. **Auto-regressive decoder language models** Kieuvongngan et al [85] use GPT-2 [35] for abstractive summarization of COVID-19 medical research articles. They take keywords of articles as inputs and fine-tune GPT-2 on multiple tasks, including the language modeling task and a multiple choice prediction task. **Encoder-decoder language models** For biomedical articles, Guo et al [51] use BART for automated lay language summarization of biomedical review articles. 
They conduct domain adaption before fine-tuning, pre-training the BART model to reconstruct original PubMed abstracts from disrupted abstracts. Wallace et al [10] propose multi-document abstractive summarization models based on BART for randomized controlled trials (RCTs). They adapt BART with a domain-specific pre-training strategy of generating summaries from full-text articles before fine-tuning. They also use a "decoration" strategy to explicitly mark key trial attributes (the "PICO" elements) of input articles. As for medical questions, Mrini et al [87] present a multi-task learning and data augmentation method for medical question understanding that combines medical question summarization and recognizing question entailment (RQE). They show that multi-task learning between question summarization and RQE increases the performance of PLMs including BART and BioBERT. ## 5 Evaluations Evaluating summaries of biomedical texts is more challenging than general document summarization, since biomedical texts are more technical and complex than general texts in length and structure. As shown in Table III, we first categorize metrics into automatic metrics and human evaluation, according to whether human effort is involved. Automatic metrics for biomedical text summarization can verify the performance of methods without any human effort; however, they can only leverage the shallow lexical and syntactical information of the generated and gold summaries. In contrast, human evaluation can capture and model semantic information, which is generally difficult to quantify. Nevertheless, human evaluation in the biomedical domain, which requires specific domain knowledge, has even higher time and financial costs than in the general domain. We further divide existing metrics into three classes: similarity, factuality, and others. 
Similarity metrics focus on the relevance of generated summaries to gold summaries, and are generally based on the overlap of tokens, phrases, and sentences. Factuality metrics verify the factual agreement of generated summaries with the original documents, a critical measurement for the real-world application of automatic systems, especially in the biomedical domain. Moreover, there are other metrics such as: 1) interpretability: whether human readers can easily understand generated summaries; 2) fluency: how fluent and coherent generated summaries are; and 3) grammaticality: how grammatically correct generated summaries are. ### _Automatic metrics_ **Similarity** As in the general domain, ROUGE [100] is the most widely used metric for biomedical summarizers, including (1) ROUGE-1: unigram overlap between generated summaries and gold summaries; (2) ROUGE-2: bigram overlap between generated summaries and gold summaries; and (3) ROUGE-L: the longest common subsequence between generated summaries and gold summaries. All biomedical text summarization datasets adopt the ROUGE score to evaluate performance. However, ROUGE metrics rely on shallow lexical overlaps and do not account for paraphrasing and terminology variations when measuring similarity. To address this issue, the recently proposed BERTScore [89] metric has also been introduced on several common datasets. It calculates the similarity between two sentences as the sum of cosine similarities between the contextual embeddings of their tokens from pre-trained language models. **Factuality** Compared with extractive methods, it is reported [101] that abstractive methods struggle to generate factually correct summaries. There is growing awareness that metrics such as ROUGE and BERTScore cannot reflect the factual correctness of generated summaries [102]. 
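In practice ROUGE is computed with packages such as `rouge-score`; the n-gram overlap underlying ROUGE-1 and ROUGE-2 can be sketched in a few lines as clipped n-gram counting (an illustration only, not the official implementation, which adds stemming and the LCS-based ROUGE-L variant):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """Clipped n-gram overlap precision, recall, and F1 for two token lists."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # Counter intersection clips repeats
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge_n("the cat sat on the mat".split(),
                  "the cat is on the mat".split(), n=1)
```

Here five of the six unigrams match (with "the" clipped at two occurrences), so precision, recall, and F1 all equal 5/6.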
TABLE III: Overview of evaluation metrics, grouped into automatic metrics and human evaluation, and further into similarity, factuality, and other metrics.

For automatic evaluation, Deyoung et al [9] propose the \(\Delta\)EI metric to calculate the factual agreement of generated summaries and input medical studies. They propose to calculate the Jensen-Shannon Distance (JSD) between the distributions of generated summaries and input medical studies over three directions (increase, decrease, no change) of reported directionality. However, the metric is only adopted in the MS'2 dataset, and no efforts in other biomedical literature datasets consider factual correctness. Zhang et al [102] propose the factual F1 score to evaluate the factual correctness of generated summaries of radiology reports. They propose to use the CheXbert labeler [103] to yield binary presence values of disease variables for generated summaries and references, and then calculate the overlap of the yielded binary presence values between them. **Others** Guo et al [51] propose to apply a readability evaluation, which verifies whether the generated summaries are understandable for laymen. It utilizes three different standards: the Flesch-Kincaid grade level [104], the Gunning fog index [105], and the Coleman-Liau index [106]. 
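These readability indices are simple functions of surface statistics. As an illustration, the Flesch-Kincaid grade level is 0.39·(words/sentences) + 11.8·(syllables/words) − 15.59; the sketch below uses a crude vowel-group syllable heuristic, not the syllable counter used by [51]:

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level from word, sentence, and syllable counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

grade = flesch_kincaid_grade("The cat sat. The dog ran.")
```

Very simple text can score below zero, meaning it is readable below the first-grade level; production implementations use dictionary-based syllable counters rather than the vowel-group heuristic above.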
The readability metric is specially designed for their summarization task and dataset CDSR. ### _Human evaluation_ Although manually evaluating the performance of summarization methods is time-consuming and expensive, human evaluation with domain experts can capture more aspects than automatic evaluation, such as fluency, coherence, factuality, and grammaticality. Generally, human evaluation recruits evaluators who are able to read and write in English and have received medical training and biology coursework. Evaluators are required to score the generated summaries against designed questions, each of which focuses on one of the aforementioned aspects. Moramarco et al [107] study the correlation between human evaluation and 18 automatic evaluation metrics, including text overlap metrics such as ROUGE and CHRF [108], the embedding metric BERTScore, and the factual F1 score, on generated clinical consultation notes. They find that simple character-based metrics such as the character-based Levenshtein distance can be more effective than more complex metrics such as BERTScore, and that the choice of human references can largely influence the performance of automatic evaluation metrics. **Similarity** For biomedical literature, Guo et al [51] also propose to use human evaluation to further assess the quality of generated summaries on the CDSR dataset. Different from other research, they require human evaluators who have not received medical training or biology coursework, since their method is designed for laymen. They propose a meaning preservation metric, which requires evaluators to answer whether the generated summaries cover the key information of the source documents on a 1-5 Likert scale (1 means very poor, 5 means very good). For the multi-document summarization dataset RCT, Wallace et al [10] ask evaluators who are medical doctors to score the relevance of the generated summaries to the given topic from mostly off-topic to strongly on-topic. 
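The character-based Levenshtein distance that Moramarco et al [107] found surprisingly competitive is the classic edit-distance dynamic program:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

d = levenshtein("kitten", "sitting")  # classic example: distance 3
```

For note evaluation the distance is computed between the full generated and reference texts, often normalized by the reference length.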
As for medical questions, Yadav et al [78] request two experts in medical informatics to measure the semantics preserved in the generated summaries, i.e., whether the question intent is mentioned in the generated summary, on both MeQSum and MATINF. **Factual consistency** For biomedical literature, Guo et al [51] propose a correctness question for key information, which requires evaluators to judge the quality of the generated summaries on the CDSR dataset following the same 1-5 Likert scale. Wallace et al [10] also request evaluators to answer two questions about the factuality of the generated summaries for RCT, which concern the direction and the degree of the generated summaries compared with gold summaries. Otmakhova et al [99] define three different metrics for the factuality of the generated summaries on the MS'2 dataset: (1) PICO correctness: the generated summary should contain the same patient population, intervention, and outcome (the entity types defined by PICO) as the gold summary; (2) direction correctness: the generated summary should indicate the same direction of the intervention's effect on the outcome as the gold summary, classified as positive effect, negative effect, or no effect; (3) modality: the confidence of the generated summary about the claim should match the gold summary, defined as strong claim, moderate claim, weak claim, no evidence, or no claim. As for medical questions, Yadav et al [78] also ask experts to verify whether all key entities appear in the generated summaries as a check of factual consistency. **Other** Several methods also propose new human evaluation metrics to help understand the performance of biomedical summarizers. Moramarco et al [109] propose an objective human evaluation based on counting medical facts in generated summaries of medical reports. 
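A toy version of such fact counting, assuming the reference facts are given as short strings, could be written as follows; the actual protocol in [109] relies on expert annotators, not string matching:

```python
def fact_coverage(summary, facts):
    """Fraction of reference facts mentioned verbatim in the summary.
    Case-insensitive substring match -- a deliberately naive stand-in
    for expert fact checking."""
    text = summary.lower()
    hits = sum(fact.lower() in text for fact in facts)
    return hits / len(facts) if facts else 0.0

score = fact_coverage(
    "Patient reports headache for two days; no fever.",
    ["headache", "fever", "nausea"])
```

Note that the substring match counts the negated fact ("no fever") as a hit, one illustration of why such checks need human judges or negation-aware labelers in practice.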
Guo et al [51] design two questions for grammatical correctness and readability, respectively, for the CDSR dataset. Wallace et al [10] ask evaluators to evaluate the fluency of the generated summaries on the RCT dataset. Otmakhova et al [99] also consider the fluency of the generated summaries and propose three metrics to evaluate it: grammatical correctness, lexical correctness, and absence of repetition. ## 6 Discussion In this section, we further discuss existing methods and their limitations, and then outline promising future directions. ### _Comparison_ We first present the performance of existing SOTA PLMs-based methods on different datasets in Table IV, Table V, Table VI, Table VIII, and Table IX. **How do PLMs in biomedical text summarization work?** For biomedical literature, on the PubMed dataset, as shown in Table IV and Table V, PLMs-based methods such as PubMedBERTSum [92] and KeBioSum [58] outperform TextRank [11], which uses no PLMs, on PubMed-long, and GenCompareSum [49], BERTSum [92], MatchSum [57], and KeBioSum [58] show a significant improvement in all ROUGE metrics compared with TextRank on PubMed-short. Note that the PubMed dataset is also used as a benchmark dataset in the general domain; although other advanced methods such as LongT5 [111] have achieved a new SOTA on the dataset, we mainly focus here on comparing methods that are designed for the biomedical domain. 
As for CORD-19 and S2ORC, Table VI shows that GenCompareSum [49] and BERTSum [92] present great performance compared with existing non-PLMs-based methods such as TextRank and SumBasic [110].

\begin{table} \begin{tabular}{c|c|c|c c c} \hline \hline **Methods** & **Strategy** & **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline TextRank [11] & - & - & 38.15 & 12.99 & 34.77 \\ BERTSum [92] & fine-tuning & BERT & 41.09 & 15.51 & 36.85 \\ MatchSum [57] & fine-tuning & RoBERTa & 41.21 & 14.91 & 36.75 \\ KeBioSum [58] & domain adaption+fine-tuning & PubMedBERT & 43.98 & 18.27 & 39.93 \\ \hline \hline \end{tabular} \end{table} TABLE V: ROUGE F1 score of generated summaries by the SOTA extractive methods on the PubMed-short dataset, which extract 6 sentences to formulate the final summary.

\begin{table} \begin{tabular}{c|c|c|c c c|c c c} \hline \hline & & & \multicolumn{3}{c|}{**CORD-19**} & \multicolumn{3}{c}{**S2ORC**} \\ **Methods** & **Strategy** & **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline TextRank [11] & - & - & 32.99 & 10.39 & 24.471 & 36.58 & 13.23 & 33.10 \\ SumBasic [110] & - & - & 33.88 & 8.24 & 30.86 & 36.63 & 10.43 & 33.68 \\ BERTSum [92] & fine-tuning & BERT & 38.95 & 12.17 & 35.48 & 43.56 & 17.85 & 40.40 \\ GenCompareSum [49] & feature-based & T5 & 41.02 & 13.79 & 37.25 & 43.39 & 16.84 & 39.82 \\ \hline \hline \end{tabular} \end{table} TABLE VI: ROUGE F1 score of generated summaries by the SOTA extractive methods on the CORD-19 and S2ORC datasets, which extract 8 and 9 sentences respectively to formulate the final summary.

\begin{table} \begin{tabular}{c|c|c|c c c|c c c} \hline \hline & & & \multicolumn{3}{c|}{**MIMIC-CXR**} & \multicolumn{3}{c}{**OpenI**} \\ **Methods** & **Strategy** & **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline LexRank [90] & - & - & 18.11 & 7.47 & 16.87 & 14.63 & 4.42 & 14.06 \\ TransformerEXT [92] & - & - & 31.00 & 16.55 & 27.49 & 15.58 & 5.28 & 14.42 \\ OntologyABS [66] & - & - & 53.57 & 40.78 & 51.81 & - & - & - \\ Hu et al [73] & fine-tuning & BioBERT & 57.38 & 45.52 & 56.13 & 54.52 & 64.97 & 55.59 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: ROUGE F1 score of generated summaries by the SOTA abstractive methods for radiology findings summarization on MIMIC-CXR and OpenI.

Moreover, for the multiple document summarization dataset MS'2, as shown in Table VII, PLMs-based methods such as BART [9] and LED [9] have shown great performance in both automatic metrics and human evaluation. We can see that they can generate summaries with low grammar and lexical error rates (less than \(31\%\)) and low redundancy (less than \(15\%\)). However, they have problems generating factually correct summaries, for example with PICO correctness no higher than \(45\%\). For EHRs, Table VIII clearly shows the benefits of PLMs via the comparison of PLMs-based methods such as Hu et al [73] with traditional methods such as OntologyABS [66], LexRank, and TransformerEXT [92]; Hu et al [73] present the best performance on both the MIMIC-CXR and OpenI datasets. Moreover, as shown in Table IX, PLMs-based methods such as BertSum, PEGASUS [93], Yadav et al [84], and Mrini et al [87] all outperform the traditional Seq2Seq [24] method on the medical question dataset MeQSum. Overall, we find that performance on various datasets has been greatly boosted by advanced methods that make use of PLMs, and existing methods are able to generate fluent summaries. 
However, their performance is still far from desirable, especially in terms of factuality. **What is the optimal way to introduce PLMs in biomedical text summarization?** From Table V and Table IX, we can see that domain-adaption with fine-tuning based methods such as KeBioSum [58] and Mrini et al [87] present the best performance among the three different ways of introducing PLMs. We believe the domain-adaption with fine-tuning mechanism, which further captures domain knowledge, should also yield superior performance on the multiple document summarization dataset MS'2 and on EHRs datasets, although no previous efforts have applied it there. Fine-tuning based methods generally yield better performance than feature-based methods, except on the CORD-19 and S2ORC datasets, as shown in Table VI. For multiple biomedical document summarization and EHRs summarization, there are only fine-tuning based methods, which are proved to be effective in introducing PLMs. Moreover, for radiology report and medical question summarization, ensembling multiple language models achieves the best performance [86, 77]. **What is the difference in the choice of PLMs?** The choice of PLMs has a significant influence on the performance of biomedical text summarization. For biomedical literature, it has been shown that the domain-specific language model BioBERT performs better than BERT-base but underperforms BERT-large [64]. PubMedBERT further outperforms BioBERT [58]. As shown in Table V, methods using PubMedBERT, which is continually pre-trained on biomedical texts, show better performance than methods using general PLMs such as BERT and RoBERTa on both the PubMed-long and PubMed-short datasets. On the CORD-19 and S2ORC datasets, GenCompareSum with the SOTA PLM T5 outperforms BERTSum with BERT, even though the T5 applied in their method is not fine-tuned. 
As for multiple document summarization, LED, based on Longformer, which is specially designed for long texts, presents inferior performance compared with BART, indicating that more effort is required to address the input length limitation of PLMs in the biomedical domain than directly applying solutions from the general domain. As for EHRs, medical conversations, and medical questions, most methods use encoder-decoder language models such as BART, T5, and PEGASUS for abstractive summarization on these datasets. For EHRs, it has been found that domain-specific language models such as BioBERT and PubMedBERT are not effective for radiology report summarization, since they are pre-trained on biomedical literature [75]; language models such as RadBERT, which are pre-trained on radiology reports, are better choices [67]. For radiology report summarization, PEGASUS achieves better performance than BART and T5 [74, 76, 86]. For medical conversation summarization, GPT-3 and BART are shown to be promising methods with limited training data [61, 80]. For medical questions, Mrini et al [87] with BART have the best performance compared with other methods based on BERT, PEGASUS, and MiniLM. ### _Limitations_ Although PLMs-based methods have greatly boosted the performance of biomedical summarization, there are still several limitations. **Developing high-quality public datasets** The development of public datasets for biomedical text summarization is imbalanced. On one hand, from the perspective of dataset types, compared with the number of public datasets of biomedical literature, there are few released datasets for electronic health records, medical conversations, and medical questions due to privacy issues, despite the urgent need for developing automated text summarization methods on these texts. 
On the other hand, considering task types, there are only two public datasets for multi-document summarization, while most existing datasets focus on single document summarization. Moreover, the size of datasets in the biomedical domain, such as CHQ-Summ, is generally much smaller than in the general domain, due to the high cost and time consumption of human annotation, which additionally requires domain-specific knowledge. The lack of high-quality, large-scale public datasets hinders the development and deployment of PLMs, whose performance relies on the amount of data. **Encoding long biomedical texts** PLMs have a limitation on the token length of input documents [94] due to the high time and memory consumption of attention computation in the Transformer. Most PLMs-based summarization methods directly truncate input documents and only take their first 512 tokens, following methods in the general domain. However, biomedical texts such as biomedical scientific papers usually have thousands of tokens. The truncating operation loses useful information in the truncated contents of input documents; moreover, it also leads to the loss of long-range dependencies in long biomedical documents. Although there are methods [9] that investigate using PLMs that support encoding long documents, such as Longformer, they show poor performance compared with BART on biomedical texts. Therefore, more effort is still required to efficiently encode long biomedical texts with PLMs. **Incorporating domain-specific knowledge** Domain-specific knowledge is critical for understanding and summarizing biomedical texts. Vocabularies, taxonomies, and ontologies such as UMLS [113] are important sources of biomedical knowledge. While existing methods with PLMs are able to capture lexical knowledge in biomedical texts, they have no knowledge of words or entities that have particular domain-specific importance, nor of their relations. 
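The 512-token truncation discussed above is sometimes replaced by a sliding-window scheme that keeps the full document; a minimal sketch follows, where the window size and overlap are illustrative defaults rather than values taken from any cited method:

```python
def sliding_windows(tokens, size=512, overlap=64):
    """Split a token sequence into overlapping windows so that no
    content is discarded, at the cost of some duplicated context."""
    if len(tokens) <= size:
        return [tokens]
    stride = size - overlap
    return [tokens[i:i + size] for i in range(0, len(tokens) - overlap, stride)]

chunks = sliding_windows(list(range(1200)))  # a 1200-"token" document
```

Each window is then summarized independently and the partial summaries are merged or re-summarized, which preserves coverage but cannot restore cross-window long-range dependencies.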
To date, limited efforts have been made to incorporate external domain-specific knowledge for the summarization of biomedical literature and EHRs [58, 73]. It remains a limitation of existing methods on other biomedical texts to capture knowledge sources such as biomedical concepts, relations between concepts, and lexicographic information. **Controlling factual consistency** The factual correctness of generated summaries is especially important for the real-world application of automatic biomedical summarizers. However, existing abstractive methods are encouraged to reconstruct gold summaries freely, without word-level constraints on text generation. Limited attention has been paid to the factual consistency issue in text summarization for the biomedical domain. Existing methods tend to generate summaries that fabricate facts from the original inputs due to free rephrasing [102, 114], which may cause medical errors. It has been shown that existing methods based on PLMs such as BART and Longformer are not able to guarantee factually correct summaries [9, 102]. It is still a big challenge for existing methods to control factual consistency when fine-tuning PLMs. **Interpretability and transparency** Like other deep learning methods, PLMs-based methods have the well-known interpretability problem due to the black-box nature of PLMs. For users such as clinicians, it is hard to explain how and why models select specific words or sentences to yield the final summaries. If errors are consistently made by the model, it is hard for users to know why things go wrong. The explainability and transparency of models, such as the inner mechanisms of their algorithms, are important for building reliable applications for users [115]. However, no effort has investigated this problem when introducing PLMs. **Evaluations** Objective and comprehensive evaluation metrics are important for evaluating summarization methods efficiently and reliably. 
Most existing methods only use the ROUGE and BERTScore metrics to evaluate their models automatically, following methods in the general domain. However, it has been reported that ROUGE and BERTScore are far from accurately reflecting the quality of generated summaries in aspects such as factual correctness and key findings. Although there are efforts that explore objective human evaluation metrics for medical studies [9, 99], an accurate automatic evaluation metric that agrees with human judgment is still lacking. Overall, the performance of existing methods is still far from desirable. We believe more efforts should be made to address these limitations. ### _Future Directions_ In this section, we further discuss promising future directions, which we hope can provide guidelines for future research. **New large-scale public biomedical datasets** For biomedical text summarization, the annotation of datasets is much more expensive and time-consuming than in the general domain, since the annotators are required to be domain experts. Moreover, for datasets such as medical conversations and questions, the privacy issue is more critical than for biomedical literature datasets. Although there is an urgent need for summarization methods to facilitate information processing, little attention has been paid to developing high-quality, large-scale public datasets for biomedical summarization, especially for medical conversations and questions. We believe more efforts should be devoted to the development of new large-scale public biomedical datasets, e.g., via unsupervised or distantly-supervised automatic annotation, and via federated learning [116] to allow the development of models while keeping training data private. Besides, more efforts should be made in the future to explore unsupervised learning, few-shot learning, and data augmentation techniques for low-resource biomedical summarization.
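The ROUGE metrics whose limitations are noted in the evaluation discussion above reduce to n-gram overlap statistics between a candidate summary and a reference. The minimal stdlib sketch below (our illustration, not the official implementation, which additionally supports stemming and bootstrap confidence intervals) makes this concrete for ROUGE-N:

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Compute ROUGE-N precision, recall, and F1 from whitespace tokens."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0, 0.0, 0.0
    p = overlap / sum(cand.values())      # precision: matches / candidate n-grams
    r = overlap / sum(ref.values())       # recall: matches / reference n-grams
    return p, r, 2 * p * r / (p + r)

p, r, f1 = rouge_n("the patient shows improvement",
                   "the patient shows marked improvement")
# p = 1.0, r = 0.8, f1 = 8/9
```

Because such scores reward lexical overlap only, a summary that negates a key finding while reusing the reference's words can still score highly, which is exactly the factual-correctness blind spot described above.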
**Handling long biomedical documents** PLMs are generally limited to a given input length due to the time complexity of the model. This is an important limitation for text summarization methods, since the remaining content of the text is ignored, especially in the biomedical domain, where texts are relatively long. Although existing research addresses this issue with PLMs for long documents, such as Longformer, directly applying these methods in the biomedical domain is reported to bring limited improvement [9]. It is urgent to investigate effective ways for PLMs in the biomedical domain to encode the full content of long texts. **Incorporating extra knowledge** PLMs for the general domain and the biomedical domain have been shown to capture common-sense knowledge and biomedical knowledge to a certain extent. Although they can generate summaries that are fluent and grammatically correct, it has been shown that many of their generated summaries are illogical or contain factual errors [9, 99]. Therefore, the limited knowledge captured by PLMs is insufficient to support the generation of desirable summaries. It is expected that more knowledge-aware models can be proposed to incorporate extra domain-specific knowledge, such as the UMLS knowledge base, to improve summary generation. **Controllable generation** Existing methods generally yield summaries that ignore users' preferences. We believe more efforts should be devoted to controlled summarization of biomedical texts that meets the expectations and requirements of users. Methods are expected to control several attributes of generated summaries, such as length, readability, and text style. **Benchmarks** To facilitate the development of biomedical NLP, attempts have been made to create NLP benchmarks in the biomedical domain, such as BLUE [42] and GLUE [46], which include tasks such as relation extraction and text classification. However, none of the existing benchmarks includes the biomedical text summarization task.
Considering the variety of types of biomedical texts, including scientific papers, EHRs, conversations, and questions, and the categories of tasks, including extractive, abstractive, and multi-document summarization, we believe it is necessary to build a unified benchmark to support the development and fair evaluation of proposed methods. **Multimodality** In the biomedical domain, there are rich multimodal medical datasets, such as radiology reports and their associated X-rays. However, most existing methods only take biomedical texts themselves as inputs. It has been reported that visual features can improve the performance of text generation [117]. We expect that multimodal summarization methods will draw much attention in the future. ## 7 Conclusion In this survey, we give a comprehensive overview of biomedical text summarization with large-scale pre-trained language models. We systematically review recent PLM-based approaches, benchmark datasets, and evaluations for the task. We categorize and compare recent approaches according to the ways they leverage PLMs and which PLMs they use. Finally, we highlight the limitations of existing methods and suggest potential directions for future research. We hope the paper can serve as a timely survey to help future researchers. ## Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Acknowledgement This research is partially supported by the Alan Turing Institute and the Biotechnology and Biological Sciences Research Council (BBSRC), BB/P025684/1. ## Author Contributions Q Xie conducted the survey, prepared figures and tables, and drafted the manuscript. Zh Luo, B Wang, and S Ananiadou revised the manuscript carefully. S Ananiadou supervised all the processes. All authors provided feedback and approved the final version of the manuscript. ## Competing Interests The authors declare no competing interests.
2305.07779
Achieving Capacity on Non-Binary Channels with Generalized Reed-Muller Codes
Recently, the authors showed that Reed-Muller (RM) codes achieve capacity on binary memoryless symmetric (BMS) channels with respect to bit error rate. This paper extends that work by showing that RM codes defined on non-binary fields, known as generalized RM codes, achieve capacity on sufficiently symmetric non-binary channels with respect to symbol error rate. The new proof also simplifies the previous approach (for BMS channels) in a variety of ways that may be of independent interest.
Galen Reeves, Henry D. Pfister
2023-05-12T21:50:06Z
http://arxiv.org/abs/2305.07779v1
# Achieving Capacity on Non-Binary Channels with Generalized Reed-Muller Codes ###### Abstract Recently, the authors showed that Reed-Muller (RM) codes achieve capacity on binary memoryless symmetric (BMS) channels with respect to bit error rate. This paper extends that work by showing that RM codes defined on non-binary fields, known as generalized RM codes, achieve capacity on sufficiently symmetric non-binary channels with respect to symbol error rate. The new proof also simplifies the previous approach (for BMS channels) in a variety of ways that may be of independent interest. Channel Capacity, Group Codes, EXIT Area Theorem, Reed-Muller Code, Strong Data-Processing Inequality ## I Introduction Generalized Reed-Muller (GRM) codes were introduced by Kasami, Lin, and Peterson in 1968 [1] as the natural generalization of binary Reed-Muller (RM) codes [2, 3] to non-binary alphabets. GRM codes are closely related to other interesting code families including Reed-Solomon codes [4], multiplicity codes [5], and lifted codes [6]. These families remain interesting subjects of research due to their connections with topics such as local decodability and list decoding. In 2016, it was established that sequences of RM codes can achieve capacity on the binary erasure channel (BEC) [7, 8]. This was followed by some extensions and related work [9, 10, 11]. A nice tutorial overview of RM codes and results until 2020 is provided by [12]. Then, in 2021, the authors showed that RM codes achieve capacity on binary memoryless symmetric (BMS) channels with respect to bit error rate [13]. The main result of this paper is the following theorem. We note that all terminology will be defined in later sections. **Theorem 1**.: _Consider a memoryless channel \(W\) with capacity \(C\) whose input alphabet is \(\mathcal{X}=\mathbb{F}_{q}\) and let \(G\) be the symmetry group of the channel. Suppose one of the following holds:_ 1. \(G\) _contains the affine group over_ \(\mathbb{F}_{q}\)_;_ 2. 
\(q\) _is prime,_ \(G\) _contains the additive group of_ \(\mathbb{F}_{q}\)_, and the smallest principal inertia component of_ \(W\) _(for the uniform input distribution) is strictly positive._ _Then, for every sequence of GRM codes over \(\mathbb{F}_{q}\) with strictly increasing blocklength and rate converging to \(R\in[0,C)\), the symbol-error rate (SER) under symbol-MAP decoding converges to zero._ **Corollary 2**.: _If \(q\) is prime and the channel symmetry group is transitive (i.e., the channel is symmetric) but it does not contain the additive group of \(\mathbb{F}_{q}\) then the input alphabet can be relabeled so that the conclusion of Theorem 1 still holds._ **Corollary 3**.: _Consider a memoryless channel \(W\) with input alphabet \(\mathcal{X}=\mathbb{F}_{q}\) and let \(I_{s}\) be the mutual information between its input and output with a uniform input distribution. Consider block-coded transmission using group symmetrization over the affine group of \(\mathbb{F}_{q}\) (i.e., each channel use is modulated by a random affine map known at the receiver). Then, for every sequence of GRM codes over \(\mathbb{F}_{q}\) with strictly increasing blocklength and rate converging to \(R\in[0,I_{s})\), the SER under symbol-MAP decoding converges to zero._ For the purposes of our analysis, there are two finite-input channels of interest. The first is the memoryless channel \(W\) (with capacity \(C\)) over which the codeword \(\mathbf{X}\) is transmitted and the output \(\mathbf{Y}\) is received. Our goal is to show that code rates strictly less than \(C\) can be achieved with vanishing symbol error rate. This involves analyzing a second channel from \(X_{0}\) to \(Y_{\sim 0}\), which we call the _coset_ channel due to the group structure in the code. 
The high-level idea of our proof is to first show that, as the blocklength increases, the sequence of coset channels converges to a _deterministic_ channel (i.e., a channel for which a minimal sufficient statistic is a non-random function of its input). For binary inputs, the only deterministic channels are the perfect channel and uninformative channel but, for non-binary inputs, there are other possibilities. In the remainder of the proof, we use channel symmetry and the area theorem to argue that this limiting channel must be the perfect channel whenever the rate of the code is strictly less than capacity. The two conditions appearing in Theorem 1 have different implications for the symmetry group of the implied coset channel. Case (i) implies doubly transitive symmetry of the coset channel, which simplifies much of the analysis. Case (ii) implies transitive symmetry of the coset channel, and in this case, we need the assumption that \(q\) is prime and some additional arguments to rule out the possibility that the coset channel converges to a deterministic limit that is neither perfect nor uninformative. _Comparison with prior work:_ The proof for binary RM codes on BMS channels [13] is based on the convergence of the power series expansion of the binary entropy function around the uninformative point. This fails for the non-binary case because the analogous power series converges only on a small subset of the domain. Instead, an approach inspired by strong data-processing inequalities (Lemma 6) is used here to bound mean-squared error in terms of mutual information. In [13], the "influences" of two subsets (of channel outputs) on the conditional mean estimator are bounded separately using two different arguments. In this paper, all influences are bounded using a single simpler argument (Lemma 15) that uses extrinsic information transfer (EXIT) functions [14] rather than generalized EXIT functions [15]. 
Our analysis of channels builds on the framework developed by Blackwell [16] and Le Cam [17]. Our introduction and analysis of the overlap matrix are related to principal inertia components (PICs) [18, 19] and strong data-processing inequalities [20, 21, 22]. A key innovation in this work is Lemma 15, which combines these ideas with a differential analysis of channels enabled by the EXIT area theorem. After the conference version of this paper [23] was accepted, a proof that RM codes have vanishing block-error rate on BMS channels, at any rate less than capacity, was posted to arXiv [24]. It would be interesting to see if the methods in [24] can be combined with this work to establish vanishing block error rate in the non-binary setting. _Notation:_ The real numbers are denoted by \(\mathbb{R}\), the natural numbers are denoted by \(\mathbb{N}\coloneqq\{1,2,\ldots\}\), and \(\mathbb{N}_{0}\coloneqq\mathbb{N}\cup\{0\}\). For \(N\in\mathbb{N}_{0}\), a range of natural numbers is denoted by \([N]\coloneqq\{0,1,\ldots,N-1\}\). Let \(\Delta_{q}\) denote the probability simplex on \(q\) elements and \(\mathbf{e}_{i}\in\mathbb{R}^{q}\) be the \(i\)-th standard basis vector. We use \(\mathbb{F}_{q}\) to denote the Galois field with \(q\) elements. For a set \(\mathcal{X}\), the \(N\)-element vector \(\mathbf{x}\in\mathcal{X}^{N}\) is denoted by boldface and is indexed from 0 so that \(\mathbf{x}=(x_{0},x_{1},\ldots,x_{N-1})\). For an \(M\)-element index set \(A=\{a_{0},a_{1},\ldots,a_{M-1}\}\subseteq[N]\) with \(a_{0}<a_{1}<\cdots<a_{M-1}\), we define the subvector \(x_{A}=(x_{a_{0}},x_{a_{1}},\ldots,x_{a_{M-1}})\in\mathcal{X}^{M}\) without using boldface. A single random variable is denoted by a capital letter (e.g., \(X,Y,Z\)). Vectors of random variables are denoted by boldface capital letters (e.g., \(\mathbf{X},\mathbf{Y},\mathbf{Z}\)).
All unspecified logarithms (i.e., \(\log\)'s) are taken base-\(q\) and thus expressions involving entropy and mutual information are reported in _qits_. This is an extended version of the paper [23]. ## II Channels We assume throughout that \(q\in\mathbb{N}\) with \(q\geq 2\) and \(\mathcal{X}=\{0,1,\ldots,q-1\}\). The symmetric group \(\mathbb{S}_{q}\) is the set of bijective functions (i.e., permutations) mapping \(\mathcal{X}\) to \(\mathcal{X}\) with the group operation given by composition. A permutation group \(G\subseteq\mathbb{S}_{q}\) is _transitive_ if, for each \(x,x^{\prime}\in\mathcal{X}\), there exists a permutation in \(G\) that maps \(x\) to \(x^{\prime}\). Likewise, it is _doubly transitive_ if, for any \(x_{1},x_{2},x^{\prime}_{1},x^{\prime}_{2}\in\mathcal{X}\) with \(x_{1}\neq x_{2}\) and \(x^{\prime}_{1}\neq x^{\prime}_{2}\), there exists a permutation in \(G\) that maps \(x_{k}\) to \(x^{\prime}_{k}\) for \(k=1,2\). We define the action of the symmetric group \(\mathbb{S}_{q}\) on \(\mathbb{R}^{q}\) (and the probability simplex \(\Delta_{q}\subseteq\mathbb{R}^{q}\)) according to \[\sigma(v_{0},\ldots,v_{q-1})=(v_{\sigma(0)},\ldots,v_{\sigma(q-1)})\] for every \(\sigma\in\mathbb{S}_{q}\) and \((v_{0},\ldots,v_{q-1})\in\mathbb{R}^{q}\). This operation is extended to a probability measure \(P\) on \(\mathbb{R}^{q}\) via the pushforward measure \(\sigma P\) defined by \[(\sigma P)(B)=P(\sigma^{-1}B),\quad\forall B\in\mathcal{B}\] where \(\sigma^{-1}B=\{\sigma^{-1}p\,|\,p\in B\}\). ### _Finite-Input Channels_ A \(q\)-ary input channel \(W\) is a conditional distribution mapping from an input \(x\in\mathcal{X}\) to a probability measure \(W(\cdot\,|\,x)\) on a measurable space \((\mathcal{Y},\mathcal{A})\). When convenient, we use the compact notation \(W_{x}(\cdot)=W(\cdot\,|\,x)\). Following the approach of Blackwell [16], we introduce a standard version of the channel whose output alphabet is the probability simplex.
For a channel \(W\), the _canonical map_ \(\phi\colon\mathcal{Y}\to\Delta_{q}\) is defined by \[\phi(y)\coloneqq\big{(}\phi_{0}(y),\phi_{1}(y),\ldots,\phi_{q-1}(y)\big{)},\] where \(\phi_{x}(y)\coloneqq(dW_{x}/d\bar{W})(y)\) is the Radon-Nikodym derivative of \(W_{x}\) with respect to the reference measure \[\bar{W}(A)\coloneqq\sum_{x\in\mathcal{X}}W(A\,|\,x),\quad A\in\mathcal{A}.\] The canonical map can also be viewed as the posterior pmf of the input with respect to a uniform prior distribution, i.e., \(\phi_{x}(y)\) is the probability that the input is \(x\) given that the output is \(y\). Composing the channel \(W\) with its canonical map produces a new channel, \(W^{\mathrm{s}}\), on the output space \((\Delta_{q},\mathcal{B})\) satisfying \[W^{\mathrm{s}}(B\,|\,x)=W(\phi^{-1}B\,|\,x),\quad\forall x\in\mathcal{X},\forall B\in\mathcal{B}.\] Because the canonical map is a sufficient statistic for the channel input, the mapping from \(W\) to \(W^{\mathrm{s}}\) preserves the relevant properties of the channel, such as its capacity and minimum error probability. Following Blackwell's definition of a standard experiment [16], we call a channel _standard_ if its canonical map is the identity map, and we refer to \(W^{\mathrm{s}}\) as the standard channel associated with \(W\). Furthermore, we will call two channels _Blackwell equivalent_ if they have the same standard channel. ### _Channel Symmetry_ In communication theory, the term symmetric channel is used to refer to a variety of related (but distinct) symmetry conditions [25, 26]. This paper uses the following definition due to its compatibility with Blackwell equivalence. **Definition 1** (Channel Symmetry Group).: The symmetry group \(G\) of a \(q\)-ary channel \(W\) is the permutation group \[G=\{\sigma\in\mathbb{S}_{q}\,|\,\forall x\in\mathcal{X},\forall B\in\mathcal{B},W^{\mathrm{s}}(\sigma B\,|\,\sigma x)=W^{\mathrm{s}}(B\,|\,x)\},\] where \(W^{\mathrm{s}}\) is the standard channel associated with \(W\).
In other words, \(G\) is the group of all permutations \(\sigma\) such that the distribution of the canonical map \(\phi(Y)\) under input \(\sigma x\) is equal to the distribution of \(\sigma\phi(Y)\) under input \(x\). This can also be written compactly as \(G=\{\sigma\in\mathbb{S}_{q}\,|\,\forall x\in\mathcal{X},\;\sigma W^{\mathrm{s}}_{x}=W^{\mathrm{s}}_{\sigma x}\}\). A channel is called _symmetric_ if its symmetry group is transitive. It is well known that this condition is sufficient to ensure that the capacity of the channel is achieved by the uniform input distribution [25]. More generally, for any decision-theoretic problem whose loss function has the same symmetries as the channel, the uniform input distribution maximizes the expected loss [27, Chapter 6]. A slightly stronger notion of symmetry occurs when the symmetry group of a channel is associated with a group structure on the input alphabet. Let \((\mathcal{X},\boxplus)\) be a group with binary operation denoted by \(\boxplus\), and assume without loss of generality that \(0\) is the identity element. Each \(x\in\mathcal{X}\) defines a permutation \(\sigma_{x}\in\mathbb{S}_{q}\) according to \(\sigma_{x}x^{\prime}=x\boxplus x^{\prime}\) for every \(x^{\prime}\in\mathcal{X}\). By Cayley's theorem, the group \((\mathcal{X},\boxplus)\) is isomorphic to the permutation group \(H\coloneqq\{\sigma_{x}\,|\,x\in\mathcal{X}\}\), which we will refer to as the permutation representation of \((\mathcal{X},\boxplus)\). The channel \(W\) is called _group symmetric_ if its symmetry group contains \(H\) as a subgroup. While there exist channels that are symmetric but not group symmetric, this can occur only if \(q\) is not a prime power. **Lemma 4**.: _If a channel \(W\) is symmetric with \(q\) equal to a prime power, then it is group symmetric._ ### _Group Symmetrization_ There is a simple (and commonly used) process that can equip any channel with any desired symmetry.
Moreover, if the channel capacity is achieved by a uniform input distribution, then this process does not change the capacity. Let \(W\) be a channel with input alphabet \(\mathcal{X}\) and let \(H\) be any subgroup of \(\mathbb{S}_{q}\). Now, define a new channel \(W^{\prime}\) that, for the input \(x\in\mathcal{X}\), chooses a uniform random element \(\sigma\in H\) and transmits \(\sigma x\) through \(W\). Then, the output of \(W^{\prime}\) is defined to be \((Y,\sigma)\) where \(Y\in\mathcal{Y}\) is the output of \(W\). We call this operation group symmetrization and the symmetry group of the resulting channel must contain \(H\) as a subgroup. The standard channel \(W^{\prime\mathrm{s}}\) satisfies \[W^{\prime\mathrm{s}}(B\,|\,x)=\frac{1}{|H|}\sum_{\sigma\in H}W^{\mathrm{s}}(\sigma^{-1}B\,|\,x).\] If \(H\) is transitive, then the group symmetrization operation implies that the effective input distribution seen by the original channel \(W\) is uniform. So, if one is content to include group symmetrization in the system, then any desired channel symmetry can be engineered while still achieving the rate \(I_{\mathrm{s}}\) equal to the mutual information between the channel input and output under a uniform input distribution. ### _Overlap Matrix_ We define the _overlap matrix_ \(Q\in\mathbb{R}^{q\times q}\) associated with a \(q\)-ary channel \(W\) according to \[Q_{x,x^{\prime}}\coloneqq\int\phi_{x}(y)\,\phi_{x^{\prime}}(y)\bar{W}(dy).\] This matrix is symmetric, positive semidefinite, and doubly stochastic. Thus, its eigenvalues are real nonnegative numbers that satisfy \(1=\lambda_{0}\geq\lambda_{1}\geq\cdots\geq\lambda_{q-1}\geq 0\). It can be verified that the eigenvalues of index \(x\geq 1\) correspond to the principal inertia components (PICs) of the channel with respect to the uniform input distribution [18, 19].
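For a channel with finitely many outputs, the overlap matrix can be computed directly from the transition probabilities. The sketch below (our illustration, not code from the paper) does this for the \(q\)-ary symmetric channel with error probability \(\varepsilon\), for which \(\bar{W}(y)=1\) and hence \(\phi_{x}(y)=W(y\,|\,x)\); a short calculation shows its \(q-1\) nontrivial eigenvalues (the PICs) are all equal to \((1-q\varepsilon/(q-1))^{2}\):

```python
def overlap_matrix(W):
    """Q[x][z] = sum_y phi_x(y) phi_z(y) Wbar(y) for a finite channel,
    where W is a list of rows with W[x][y] = P(y | x)."""
    q, ny = len(W), len(W[0])
    wbar = [sum(W[x][y] for x in range(q)) for y in range(ny)]
    phi = [[W[x][y] / wbar[y] for y in range(ny)] for x in range(q)]
    return [[sum(phi[x][y] * phi[z][y] * wbar[y] for y in range(ny))
             for z in range(q)] for x in range(q)]

def qsc(q, eps):
    """q-ary symmetric channel: correct symbol w.p. 1-eps, else uniform error."""
    return [[1 - eps if y == x else eps / (q - 1) for y in range(q)]
            for x in range(q)]

q, eps = 3, 0.1
Q = overlap_matrix(qsc(q, eps))
# Q is symmetric and doubly stochastic
assert all(abs(sum(row) - 1) < 1e-12 for row in Q)
assert all(abs(Q[x][z] - Q[z][x]) < 1e-12 for x in range(q) for z in range(q))
# For the q-SC, Q = a*I + b*(J - I), so the q-1 nontrivial eigenvalues
# all equal a - b = (1 - q*eps/(q-1))**2.
lam = Q[0][0] - Q[0][1]
assert abs(lam - (1 - q * eps / (q - 1)) ** 2) < 1e-12
```

At the extremes, \(\varepsilon=0\) gives all PICs equal to 1 (a perfect channel) and \(\varepsilon=(q-1)/q\) gives all PICs equal to 0 (an uninformative channel), matching the two deterministic limits discussed in the proof outline.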
The smallest PIC \(\lambda_{q-1}\), which plays a prominent role in our analysis, has been considered previously in the context of perfect privacy [19]. For random variables \((X,Y)\), the _symbol error rate_ (SER) is defined by \[\mathrm{SER}(X\,|\,Y)\coloneqq\min_{X-Y-\hat{X}}\mathbb{P}\Big{[}X\neq\hat{X}\Big{]},\] where the minimum is over all Markov chains \(X-Y-\hat{X}\). **Lemma 5**.: _For any input-output pair \((X,Y)\) through a \(q\)-ary channel with overlap matrix \(Q\), we have_ \[\mathrm{SER}(X\,|\,Y)\leq 1-\mathrm{Tr}(\mathrm{diag}(p)Q),\] _where \(p\in\Delta_{q}\) is the prior pmf of \(X\)._ The following lemma will allow us to connect MMSE estimation error with conditional mutual information. **Lemma 6**.: _For any Markov chain \(S-T-X-Y\) where \(Y\) is an observation of \(X\) through a \(q\)-ary channel, we have_ \[I(X;Y\,|\,S)-I(X;Y\,|\,T)\] \[\qquad\geq\frac{\lambda_{q-1}^{2}}{2\ln q}\mathbb{E}\big{[}\|\mathbb{E}[\mathbf{e}_{X}\,|\,T]-\mathbb{E}[\mathbf{e}_{X}\,|\,S]\|^{2}\big{]},\] _where \(\lambda_{q-1}\) is the minimal eigenvalue of the overlap matrix._ For a \(q\)-ary channel \(W\) with canonical map \(\phi\), we define the squared-error discrepancy \[\delta\coloneqq\mathbb{E}\big{[}\|\phi(Y)-\mathbb{E}[\phi(Y)\,|\,X]\|^{2}\big{]}\] where \(Y\) is the output for a uniformly distributed input \(X\). Note that \(\delta\) is zero if and only if the output of the standard channel is determined uniquely by the input (i.e., the channel is deterministic). This discrepancy can also be expressed in terms of the overlap matrix \(Q\) or the PICs: \[\delta=\frac{1}{q}\,\mathrm{Tr}(Q)-\frac{1}{q}\,\mathrm{Tr}(Q^{2})=\frac{1}{q}\sum_{x=1}^{q-1}\lambda_{x}(1-\lambda_{x}). \tag{1}\] If \(\delta\) is close to zero, then the PICs are clustered near the boundaries \(0\) and \(1\) of the unit interval. The next result gives sufficient conditions under which the PICs are all close to the same boundary point.
**Lemma 7**.: _Consider a \(q\)-ary channel with symmetry group \(G\) and squared-error discrepancy \(\delta\). Suppose one of the following holds:_ 1. \(G\) _is doubly transitive;_ 2. \(G\) _is transitive,_ \(q\) _is prime, and_ \(4q^{2}\delta<1\)_._ _Then, the overlap matrix \(Q\) satisfies_ \[\min_{b\in\{1,q\}}|\,\mathrm{Tr}(Q)-b|\leq 2q\delta. \tag{2}\] **Remark 1**.: To see that the conditions in Lemma 7 are not superfluous, consider the channel with input alphabet \(\mathcal{X}=\{0,1,2,3\}\) and output \(Y=X+N\pmod{4}\) where \(N\) is uniform on \(\{0,2\}\). This channel is deterministic in the sense that its canonical map is determined uniquely by the input: \[\phi(Y)=\frac{1}{2}\mathbf{e}_{X}+\frac{1}{2}\mathbf{e}_{X+2}\] Hence, the squared-error discrepancy is \(\delta=0\). The channel is also symmetric because its symmetry group \(G\) contains all cyclic shifts. For example, adding an element \(x^{\prime}\in\mathcal{X}\) to the input before transmitting is equivalent to adding the same element to the output \(Y\). However, the channel does not satisfy the conditions of Lemma 7 because \(G\) is not doubly transitive and \(q\) is not prime. The overlap matrix satisfies \(\mathrm{Tr}(Q)=2\), which violates (2). ## III Codes ### _Group Codes_ In coding theory, the term group code is used to refer to a few related (but distinct) mathematical objects. These include binary codes closed under modulo-2 addition [26], sets of points in real space generated by a group of orthogonal transformations applied to a single point [28], and codes whose codewords are elements of a group ring [29]. The group structure of binary codes was recognized early and exploited in [26, 30]. Later, similar ideas were developed for non-binary codes and channels [28, 31]. In this paper, we use the following definition. **Definition 2** (Group code).: Let \((\mathcal{X},\,\boxplus)\) be a group where \(0\in\mathcal{X}\) is the identity.
A set \(\mathcal{C}\subseteq\mathcal{X}^{N}\) is called a _group code over \((\mathcal{X},\boxplus)\)_ if the set \(\mathcal{C}\) forms a group with respect to the binary operation \(\boldsymbol{x}\boxplus\boldsymbol{x}^{\prime}=(x_{0}\boxplus x_{0}^{\prime},x_{1}\boxplus x_{1}^{\prime},\ldots,x_{N-1}\boxplus x_{N-1}^{\prime})\). **Definition 3** (Matched to channel).: A group code over \((\mathcal{X},\boxplus)\) is matched to channel \(W\) if the symmetry group of \(W\) contains the permutation representation of \((\mathcal{X},\boxplus)\) as a subgroup. For a group code matched to a channel, the code and channel together have a property which is akin to the geometric uniformity defined by Forney [31]. Similar ideas were explored more recently for group codes over integer rings [32]. In particular, one gets the uniform error property where the error rate of the optimal decoder is independent of the transmitted codeword. ### _The Coset Channel_ For this section, assume that \(\mathcal{C}\subseteq\mathcal{X}^{N}\) is a group code over \((\mathcal{X},\boxplus)\) and that, for all \(x\in\mathcal{X}\), there exists \(\boldsymbol{c}\in\mathcal{C}\) such that \(c_{0}=x\). Under these assumptions, the cosets of the subgroup \(\{\boldsymbol{c}\in\mathcal{C}\,|\,\,c_{0}=0\}\) partition the code \(\mathcal{C}\) into \(q\) sets of equal size. We define the _coset channel_ to be the channel from \(X_{0}\) to \(Y_{\sim 0}\) defined as follows: 1. Given the input \(x_{0}\in\mathcal{X}\), a codeword \(\boldsymbol{X}\) is drawn uniformly from the coset \(\{\boldsymbol{c}\in\mathcal{C}\,|\,\,c_{0}=x_{0}\}\). 2. The output \(Y_{\sim 0}\) is a memoryless observation of \(X_{\sim 0}\) through the channel \(W\). From now on, we will use \(V\) to denote the coset channel and \(\psi\) to denote its canonical map. Composing \(V\) with \(\psi\) gives the standard coset channel \(V^{\mathrm{s}}\).
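As a concrete instance of this two-step construction, the sketch below (our illustration; the code and parameters are not from the paper) builds the coset channel for the single-parity-check code \(\mathcal{C}=\{\boldsymbol{c}\in\mathbb{F}_{3}^{3}\,|\,c_{0}+c_{1}+c_{2}=0\}\), with positions 1 and 2 each observed through a 3-ary symmetric channel, and evaluates the canonical map \(\psi\) as the posterior of \(X_{0}\) under a uniform prior:

```python
from itertools import product

def qsc(q, eps):
    """q-ary symmetric channel transition matrix: W[x][y] = P(y | x)."""
    return [[1 - eps if y == x else eps / (q - 1) for y in range(q)]
            for x in range(q)]

def coset_channel(q, W):
    """V[(y1, y2)][x0] = P(y1, y2 | X0 = x0) for the length-3
    single-parity-check code over Z_q: codewords (x0, c1, -x0-c1 mod q)."""
    V = {}
    for y in product(range(q), repeat=2):
        # average over the q codewords in the coset {c in C : c_0 = x0}
        V[y] = [sum(W[c1][y[0]] * W[(-x0 - c1) % q][y[1]]
                    for c1 in range(q)) / q
                for x0 in range(q)]
    return V

def posterior(row):
    """Canonical map psi(y): posterior pmf of X0 under a uniform prior."""
    s = sum(row)
    return [p / s for p in row]

q = 3
# Noiseless case: y1 + y2 determines x0 = -(y1 + y2) mod q exactly.
V0 = coset_channel(q, qsc(q, 0.0))
psi = posterior(V0[(1, 2)])
assert psi[(-(1 + 2)) % q] == 1.0     # all posterior mass on x0 = 0
# Noisy case: each conditional V(. | x0) sums to 1 over outputs.
V = coset_channel(q, qsc(q, 0.1))
for x0 in range(q):
    assert abs(sum(V[y][x0] for y in V) - 1) < 1e-12
```

Here \(\psi\) coincides with the posterior pmf precisely because \(X_{0}\) is uniform, mirroring the role the canonical map plays throughout the analysis.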
**Lemma 8**.: _If the code is matched to the channel \(W\) and, for all \(u\in\mathcal{X}\), there exists \(\boldsymbol{c}\in\mathcal{C}\) with \(c_{0}=u\), then the coset channel \(V\) is group symmetric. Also, the output of the standard coset channel does not depend on which coset element is chosen, i.e., for an arbitrarily distributed input \(X_{0}\), the output \(\psi(Y_{\sim 0})\) is conditionally independent of \(X_{\sim 0}\) given \(X_{0}\)._ If the code has additional symmetries that are matched to the channel, then the symmetry group of the coset channel may be larger. For the following result, let \(G\) be the permutation group of the channel \(W\), let \(H\) be the permutation representation of the group \((\mathcal{X},\boxplus)\), and let the group of homogeneous alphabet relabelings that preserve the code be given by \[F\coloneqq\big{\{}\sigma\in\mathbb{S}_{q}\,|\,\forall\boldsymbol{c}\in \mathcal{C},\big{(}\sigma(c_{0}),\sigma(c_{1}),\ldots,\sigma(c_{N-1})\big{)} \in\mathcal{C}\big{\}}.\] Since both \(F\) and \(G\) are subgroups of \(\mathbb{S}_{q}\), their intersection \(F^{\prime}\coloneqq F\cap G\) is also a subgroup of \(\mathbb{S}_{q}\). **Lemma 9**.: _If the code is matched to the channel \(W\), then the symmetry group of the coset channel \(V\) contains the group \(\langle F^{\prime},H\rangle\) generated by \(F^{\prime}\) and \(H\)._ For the case where \(q\) is not prime, our proof technique requires that the coset channel has doubly transitive symmetry. In view of Lemma 9, a sufficient condition for this can be readily verified when \(q\) is a prime power and \(\mathcal{C}\) is a linear code over \(\mathcal{X}=\mathbb{F}_{q}\) (and hence a group code with respect to the additive group of \(\mathbb{F}_{q}\)). 
Furthermore, if the code contains the all-ones codeword (i.e., \((1,\ldots,1)\in\mathcal{C}\)), then its group of homogeneous alphabet relabelings contains the affine group over \(\mathbb{F}_{q}\), which is defined by \(A_{q}\coloneqq\{\sigma_{a,b}\in\mathbb{S}_{q}\,|\,a\in\mathbb{F}_{q}\setminus\{0\},b\in\mathbb{F}_{q}\}\) where \(\sigma_{a,b}(x)=(a\cdot x)+b\) uses \(\mathbb{F}_{q}\) addition and multiplication. **Lemma 10**.: _If \(\mathcal{C}\subseteq\mathbb{F}_{q}^{N}\) is a linear code that contains the all-ones codeword and the symmetry group of the channel \(W\) contains the affine group \(A_{q}\), then the code is matched to the channel \(W\) and the symmetry group of the coset channel \(V\) is doubly transitive._ Lastly, we recall that the permutation automorphism group of a code \(\mathcal{C}\subseteq\mathcal{X}^{N}\) is the group of permutations \(\pi\in\mathbb{S}_{N}\) such that \((c_{\pi(0)},c_{\pi(1)},\ldots,c_{\pi(N-1)})\in\mathcal{C}\) for all \(\boldsymbol{c}\in\mathcal{C}\). If this group is transitive, then the coset channel defined for symbol position \(0\) is Blackwell equivalent to the coset channel for any other position \(i\in[N]\). ### _Generalized Reed-Muller Codes and Puncturing_ The natural generalization of binary RM codes to non-binary alphabets was introduced by Kasami et al. in 1968 and dubbed Generalized Reed-Muller (GRM) codes [1]. The GRM code \(\text{RM}_{q}(r,m)\subseteq\mathbb{F}_{q}^{N}\) is a length \(N=q^{m}\) linear code over \(\mathbb{F}_{q}\). Like binary RM codes, GRM codes can be defined as polynomial evaluation codes. A more detailed description of GRM codes can be found in Appendix A. The rate of \(\text{RM}_{q}(r,m)\), denoted by \(R_{q}(r,m)\), is computed using the base-\(q\) logarithm. Thus, it equals the number of \(\mathbb{F}_{q}\) information symbols (i.e., the dimension of the code) divided by the number of \(\mathbb{F}_{q}\) codeword symbols. The following lemma bounds the change in rate caused by reducing \(m\).
The proof is in Appendix A. **Lemma 11**.: _For integers \(q\geq 2\), \(0\leq r\leq m(q-1)\), and \(0\leq k<m-r\), the rates of \(\text{RM}_{q}(r,m-k)\) and \(\text{RM}_{q}(r,m)\) satisfy_ \[R_{q}(r,m-k)-R_{q}(r,m)\leq\frac{4k}{\sqrt{m-k}}.\] **Definition 4** (Punctured Code).: For a code \(\mathcal{C}\subseteq\mathbb{F}_{q}^{N}\) and index set \(I\subseteq[N]\), the punctured code formed by the symbol positions indexed by \(I\) is given by \(\mathcal{C}_{I}\coloneqq\{\boldsymbol{c}_{I}\in\mathbb{F}_{q}^{|I|}\,\,|\,\,\boldsymbol{c}\in\mathcal{C}\}\). **Remark 2**.: Although one may also consider a puncturing operation that includes reordering of code symbols, this is not needed for our results. For our definition, the symbols are kept in the same order but their indices are renumbered. The following lemma is proved in Appendix A. **Lemma 12** (GRM Puncturing).: _If one punctures the code \(\mathcal{C}=\mathrm{RM}_{q}(r,m)\) by keeping only the first \(q^{m-k}\) symbol positions (i.e., giving \(\mathcal{C}_{I}\) with \(I=[q^{m-k}]\)), then \(\mathcal{C}_{I}=\mathrm{RM}_{q}(r,m-k)\). Moreover, puncturing a uniform random codeword from \(\mathcal{C}\) gives a uniform random codeword from \(\mathcal{C}_{I}\)._ ## IV Main Results ### _SER of the Coset Channel_ This section gives bounds on the SER of the coset channel (defined in Section III-B) under the following conditions: **Condition 1** (Code).: The input \(\boldsymbol{X}\) is distributed uniformly over the codewords of a \(q\)-ary group code \(\mathcal{C}\) that has code rate \(R\). The code has a transitive permutation automorphism group and, for each \(x\in\mathcal{X}\), there exists \(\boldsymbol{c}\in\mathcal{C}\) with \(c_{0}=x\). **Condition 2** (Channel).: The output \(\boldsymbol{Y}\) is an observation of the input through a symmetric memoryless channel \(W\) that is matched to the code and has capacity \(C\).
Under these conditions, \(X_{0}\) is uniformly distributed, the coset channel \(V\) is group symmetric, and the SER of the coset channel is an upper bound on the maximal SER of the code: \[\max_{i\in[N]}\mathrm{SER}(X_{i}\,|\,\boldsymbol{Y})=\mathrm{SER}(X_{0}\,|\,\boldsymbol{Y})\leq\mathrm{SER}(X_{0}\,|\,Y_{\sim 0})\]

For the purposes of analysis, we introduce a degraded family of channels that interpolates between \(W\) and an uninformative channel. Specifically, we define \(W_{t}\) to be the composition of \(W\) and an erasure channel with erasure probability \(t\in[0,1]\). The implied coset channel is denoted by \(V_{t}\) and its squared error discrepancy is given by \[\delta(t)\coloneqq\mathbb{E}\big{[}\|\Psi(t)-\mathbb{E}[\Psi(t)\,|\,X_{0}]\|^{2}\big{]},\quad 0\leq t\leq 1\] where \(\Psi(t)\coloneqq\psi_{t}(Y_{\sim 0}(t))=\mathbb{E}[\boldsymbol{c}_{X_{0}}\,|\,Y_{\sim 0}(t)]\). Here, the second expression for \(\Psi(t)\) holds because \(X_{0}\) is uniformly distributed, and thus the canonical map is equal to the posterior pmf. Finally, the _average discrepancy_ is defined to be \[\delta_{\mathrm{avg}}\coloneqq\int_{0}^{1}\delta(t)\,dt.\]

We begin with a lower bound on the overlap matrix of the coset channel. In combination with Lemma 5, this bound shows that the SER of the coset channel is strictly less than the trivial upper bound \(1-1/q\) whenever the code rate \(R\) is strictly less than the capacity of the channel \(W_{t}\).

**Lemma 13**.: _Assume that Conditions 1 and 2 hold with \(R<C\). For all \(0\leq t<1-R/C\), the overlap matrix \(Q(t)\) of the coset channel \(V_{t}\) satisfies_ \[\mathrm{Tr}(Q(t))\geq q^{C-R/(1-t)}.\]

Next, we combine Lemma 13 with the constraints on the overlap matrix in Lemma 7 to provide a stronger bound on the SER in terms of the average discrepancy.
**Lemma 14**.: _Assume that Conditions 1 and 2 hold with \(R<C\) and the average discrepancy satisfies_ \[\delta_{\mathrm{avg}}\leq\frac{(1-R/C)(q^{\frac{1}{2}(C-R)}-1)}{q}.\]

_Further, suppose that one of the following conditions holds:_

1. _the symmetry group of_ \(V\) _is doubly transitive, or_
2. _the symmetry group of_ \(V\) _is transitive,_ \(q\) _is prime, and_ \[\delta_{\mathrm{avg}}<\frac{1-R/C}{8q^{2}}.\]

_Then, the SER satisfies_ \[\mathrm{SER}(X_{0}\,|\,Y_{\sim 0})\leq\frac{4\delta_{\mathrm{avg}}}{1-R/C}.\]

Finally, we provide a link between the average discrepancy and the entropy rates of subsets of the code. The following result is obtained by combining a decomposition of the squared error discrepancy, via the Efron-Stein-Steele inequality, with Lemma 6 and the EXIT area theorem.

**Lemma 15**.: _Assume Conditions 1 and 2 hold. Furthermore, assume that the overlap matrix of \(W\) has minimal eigenvalue \(\lambda_{\min}>0\). Let \(\mathcal{B}\) be a collection of subsets of \([N]\) such that_

1. \(\bigcap_{B\in\mathcal{B}}B=\{0\}\)
2. _For each_ \(B\in\mathcal{B}\)_, the punctured code_ \(\mathcal{C}_{B}\) _has a transitive permutation automorphism group._

_Then, we have_ \[\delta_{\mathrm{avg}}\leq\frac{2\ln q}{\lambda_{\min}^{2}}\ \sum_{B\in\mathcal{B}}\biggl{(}\frac{H(X_{B})}{|B|}-R\biggr{)}.\]

### _Proof of Theorem 1_

We begin by verifying the conditions used in Section IV-A. Since the GRM code is a linear code over \(\mathbb{F}_{q}\), it is automatically a group code under the additive group of \(\mathbb{F}_{q}\). Also, it is well known that GRM codes have doubly transitive permutation automorphism groups [1]. For a linear code, each code position either takes all possible values or is always 0. Thus, for all \(x\in\mathcal{X}\), there is a \(\boldsymbol{c}\in\mathcal{C}\) such that \(c_{0}=x\), because otherwise \(\mathcal{C}\) would only contain the all zero codeword due to transitive symmetry and have rate zero.
Together, these results imply Condition 1 is satisfied. Next, we note that cases _(i)_ and _(ii)_ of Theorem 1 both require that \(G\) contains the additive group of \(\mathbb{F}_{q}\). This implies that the channel \(W\) is group symmetric and the code is matched to the channel. Thus, Condition 2 is also satisfied. Under these conditions, we can apply Lemma 8 to see that the coset channel is group symmetric and, for case _(ii)_, we can apply Lemma 10 to see that it is doubly transitive.

Having verified the assumptions of Lemma 14, we conclude that for any \(\epsilon\in(0,1]\) there exists \(\delta^{*}>0\), such that if \(R_{q}(r,m)\leq(1-\epsilon)C\) and \(\delta_{\mathrm{avg}}\leq\delta^{*}\), then \[\mathrm{SER}(X_{0}\,|\,Y_{\sim 0})\leq\frac{4\delta_{\mathrm{avg}}}{1-R/C}.\]

The final step of the proof is to show that \(\delta_{\mathrm{avg}}\) converges to zero as \(m\to\infty\). If \(C=0\), the theorem is vacuous, so we assume \(C>0\). If the channel symmetry group is doubly transitive, then \(C>0\) implies \(\lambda_{\min}>0\). Otherwise, \(q\) is prime and \(\lambda_{\min}>0\) by assumption. The desired convergence is established by the following result, which is obtained by combining the rate difference property of GRM codes in Lemma 11 with the generic bound in Lemma 15.

**Lemma 16**.: _For a GRM code \(\text{RM}_{q}(r,m)\) with \(m\geq q^{2}\) on a channel \(W\) whose overlap matrix has minimal eigenvalue \(\lambda_{\min}>0\), we find that_ \[\delta_{\mathrm{avg}}\leq\frac{2\ln q}{\lambda_{\min}^{2}}\bigg{(}\frac{7+3\log_{q}m}{\sqrt{m}}\bigg{)}=O\bigg{(}\frac{\ln m}{\sqrt{m}}\bigg{)}.\]

### _Proof of Corollary 3_

For this result, we use the idea of group symmetrization in Section II-C. Applying this operation to \(W\) with the affine group \(A_{q}\) over \(\mathbb{F}_{q}\) forces the symmetrized channel to satisfy condition _(i)_ in Theorem 1. Then, applying Theorem 1 proves the corollary.
We note that this channel symmetrization operation is the same as multiplying the codeword elementwise by a uniform random \(\mathbb{F}_{q}^{*}\) vector and then adding elementwise a uniform random \(\mathbb{F}_{q}\) vector. These vectors are shared with the receiver in advance so that the process can be inverted during decoding.
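As a concrete illustration, the elementwise symmetrization and its inversion at the decoder can be sketched in a few lines. This toy sketch assumes a prime \(q\) so that multiplicative inverses are computable via Fermat's little theorem (a prime power would require full \(\mathbb{F}_{q}\) arithmetic); the vector length and values are arbitrary:

```python
import random

q = 7  # illustrative prime modulus; inverses are pow(a, q-2, q)

def symmetrize(c, a, b):
    # Transmit (a_i * c_i + b_i) mod q; a and b are shared with the receiver.
    return [(ai * ci + bi) % q for ai, ci, bi in zip(a, c, b)]

def desymmetrize(y, a, b):
    # Invert at the decoder: c_i = (y_i - b_i) * a_i^{-1} mod q.
    return [((yi - bi) * pow(ai, q - 2, q)) % q for yi, ai, bi in zip(y, a, b)]

rng = random.Random(0)
c = [rng.randrange(q) for _ in range(12)]       # a stand-in "codeword"
a = [rng.randrange(1, q) for _ in range(12)]    # uniform over the nonzero elements
b = [rng.randrange(q) for _ in range(12)]       # uniform over F_q
assert desymmetrize(symmetrize(c, a, b), a, b) == c  # inversion is lossless
```

The roundtrip assertion reflects the point in the text: because the random vectors are shared in advance, the symmetrization costs nothing in information.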
Pushpita Chatterjee, Debashis Das, Danda B Rawat
2023-02-04T17:58:34Z
http://arxiv.org/abs/2303.12944v1
# Use of Federated Learning and Blockchain towards Securing Financial Services

###### Abstract

In recent days, the proliferation of several existing and new cyber-attacks poses an axiomatic threat to the stability of financial services. It is hard to predict the nature of attacks that can trigger a serious financial crisis. The unprecedented digital transformation of financial services was accelerated during the COVID-19 pandemic and is still ongoing. Attackers are taking advantage of this transformation and pose a new global threat to financial stability and integrity. Many large organizations are switching from centralized finance (CeFi) to decentralized finance (DeFi) because decentralized finance has many advantages. Blockchain can have big and far-reaching effects on the trustworthiness, safety, accessibility, cost-effectiveness, and openness of the financial sector. The present paper gives an in-depth look at how blockchain and federated learning (FL) are used in financial services. It starts with an overview of recent developments in both use cases. This paper explores and discusses existing financial service vulnerabilities, potential threats, and consequent risks. We explain the problems that can be fixed in financial services and how blockchain and FL could help solve them. These problems include data protection, storage optimization, and making more money in financial services. We looked at many blockchain-enabled FL methods and came up with some possible solutions that could be used in financial services to solve several challenges like cost-effectiveness, automation, and security control. Finally, we point out some future directions at the end of this study.
_Keywords_: Blockchain, Federated Learning, Decentralized Finance, Cyber Security, Financial Security

## I Introduction

Financial services have started to realize that blockchain technology could have a revolutionary effect on things like increasing revenue, improving the end-user experience and delivery process, increasing efficiency, and lowering the risks that come with running a business [1]. The financial technology industry, much like every other sector focused on technology, is now in the process of developing. Fig. 1 [2] shows that the use of blockchain in the financial sector is greater than in other sectors. There are a lot of new financial apps coming out all the time, and each one offers better and more creative ways to handle and process payments. By the end of 2028, it is anticipated that the value of the financial blockchain industry will have increased to 36.04 billion dollars [3]. Decentralized Finance, often known as "DeFi," is an emerging financial technology that is based on blockchain and aims to limit the amount of control that banks have over financial services and money. Over the coming decades, digital ledgers will also cause big changes in how we get, send, store, and manage our money. Almost any business may fall prey to cybercriminals, but they often choose their targets based on which ones would net them the most money or have the biggest impact [4]. Hackers often go after banks and other similar financial services because they meet the above requirements. In the business of finance, companies often deal with billions of dollars and keep very sensitive and valuable information in computerized archives, such as social security numbers, bank account details, wills, titles, and other personal documents. Their ongoing attempts at digital transformation and the challenging policy landscape that is speeding up the use of hybrid workspaces all make it easier for cybercriminals to get data and sell it.
As a direct consequence of this, cyber threat actors are focusing an inordinate amount of their attention on the banking industry. The blockchain [5] has a powerful function for storing certificates and can track data in a controllable way. Federated Learning, on the other hand, can make sure that sensitive data never leaves the local node, that only the gradient information of the model update needs to be sent, and that all data is kept secret. Complex joint model training is done on this premise, which makes these benefits possible. As a result, the functions that they perform are somewhat complementary to one another. If they can be combined, they will not only be able to share data more efficiently, but they will also be able to keep the privacy of the data. Blockchain usage has been increasing day by day, as shown in Fig. 2. Because blockchain [6] is decentralized, immutable, and secure, combining these two technologies has the potential to improve Federated Learning's transparency, trustworthiness, and, most importantly, decentralization [7].

Fig. 1: Usage of blockchain in financial sectors rather than other sectors. [2]

**Motivation**

The financial sector has benefited from several technological improvements and integrations. However, it still works in a centralized way, with financial institutions and governments at the center of the model. Customers of financial services have begun to question the relevance of this time-honored custom. Because of this uncertainty, blockchain development services have emerged as a more open and honest alternative. Technology has added a new dimension to the financial services environment, amounting to a technical revolution in the financial sector. It has caused a lot of changes in how companies are set up and how they do business, which has given the financial technology sector a huge chance.
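The federated-learning exchange described above, where clients train locally and share only model updates rather than raw data, can be illustrated with a minimal federated-averaging round. This is a generic FedAvg-style sketch; the linear model, learning rate, and size-weighted aggregation are illustrative assumptions, not details of any surveyed system:

```python
def local_update(weights, data, lr=0.1):
    # One local gradient step for a linear model y ~ w.x; raw data stays on the client.
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def federated_average(client_models, client_sizes):
    # The server aggregates only model parameters, weighted by local dataset size.
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [sum(m[i] * n for m, n in zip(client_models, client_sizes)) / total
            for i in range(dim)]

global_w = [0.0, 0.0]
clients = [
    [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0)],  # client 1's private data
    [((1.0, 1.0), 3.0)],                     # client 2's private data
]
for _ in range(50):  # federated rounds: local step, then server-side averaging
    updates = [local_update(global_w, data) for data in clients]
    global_w = federated_average(updates, [len(d) for d in clients])
# global_w converges toward the consistent solution w = (2, 1)
```

Only `updates` crosses the network, which is the privacy property the introduction attributes to FL; a blockchain layer would additionally log or validate those updates.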
Because of this, both new businesses and established companies that focus on making financial applications are interested in finding out whether blockchain is necessary for financial services. According to Fig. 1, the use of blockchain in financial services is greater than in other sectors. A common factor that contributes to the complexity of risk is the fact that it is impossible to completely remove or guard against all risks, regardless of how sophisticated your systems are. This is where the process of risk management comes in. Risk management is a regular, ongoing process in which the right professionals look at risks from time to time to reduce the chances that certain threats will come true. Companies that provide financial services in today's market not only have a hard time luring in new clients, but they also have a hard time doing the same with prospective workers. It may be hard to find the right people to fill new positions in information technology (IT) because of several factors, the most important of which is that millennials don't like long-term jobs. The provision of essential financial services is essential to the operation of any economy. Without them, those who have money to save may have difficulty locating others who need to borrow money, and vice versa. And without financial services, people might not buy as many goods or services because they would be so worried about saving money to protect themselves from possible losses.

**Contribution of the work**

This paper gives an overview of how blockchain and federated learning are used in the financial services industry. The goal of this research is to look at the benefits of blockchain and FL to get useful information that can be used to improve external statistics and make policy decisions in the financial sector. This paper discusses and analyzes the weaknesses and problems that financial activities face.
Besides, it is discussed how financial services can be used and organized using decentralized technology. The main contributions of this paper are presented as follows:

* We give an overview of financial services by talking about how important they are and showing how they can be used in different situations.
* We provide recent financial services vulnerabilities and related threats that have occurred. Possible risks in financial services are also discussed in this paper.
* We demonstrate a few problems that exist in financial services and how blockchain can be used to solve them. The benefits of blockchain in this sector are also discussed in detail.
* We present several blockchain and federated learning applications in different fields. In parallel, we give some ideas for how blockchain and FL can be used in financial services and explain what these technologies signify.

The remainder of this paper is organized as follows. Fig. 3 shows the overall roadmap of this survey paper. Section II gives the existing use cases of financial services. In Section III, existing vulnerabilities, threats, and risks in financial services are provided in detail. Section IV depicts the usage of blockchain in financial services. Applications of blockchain and federated learning in financial services are stated in Section V. Section VI introduces some future research aspects in financial services. Finally, the paper is concluded in Section VII.

Fig. 2: Expected blockchain usage in 10 years. [8]

Fig. 3: Roadmap of the survey.

## II Financial Services Use Cases

The Banking, Financial Services, and Insurance (BFSI) [9] sector is the most vulnerable to uncertainty because it depends on global trends, changing laws, and customer demographics. This uncertainty has a great deal of significance for any sector, particularly the BFSI sector. Customers often give their banks and insurance companies their most private information, like information about their health or finances.
It could have consequences for both the bank and the insurance company. Data and analytics are being used by the most important institutions in this field to change the rules of the game. They are gathering additional information from sources such as telecom providers, merchants, and social media to improve the knowledge that they already have about their customers. Because they have such a comprehensive perspective on their consumers, they are in a position to increase revenue, reduce risk, cut opportunity costs, and improve operational efficiency.

**Sales & Revenue Analysis**

Examining operating procedures assists financial institutions in lowering their ongoing expenses. If you know the sales trends for a certain consumer, you may be able to make successful sales easier to repeat. Sales is a key activity, and having Business Intelligence (BI) tools may aid in defining benchmarks like the number of net new customers and the lucrative sector among current customers. These are just two examples of how having these tools can be beneficial.

**Sales Performance Analysis**

The report is all-encompassing and includes data on employee productivity. Any employee who works with customers, like salespeople, account managers, and tellers, can benefit from the information it can give because it can help them find ways to improve and, in the end, give better service to their customers. This evaluation could be used to check the viability of new financial products or services and make strategic changes that are in line with the institution's long-term goals.

**Branch/Online Sales Analysis**

It may assist financial institutions in formulating the most effective channel strategy. Multiple channels are now available for customers to use when communicating with their banks. Their journeys through these channels are very complicated. They often start in one channel, go through different stages of the process in another channel, and then end up in a different channel.
By collecting real-time data and using analytics to learn more about the buyer's journey, financial institutions may be able to give customers a truly seamless multichannel experience. In addition to this, it assists them in maintaining an awareness of their rivals.

**Lending, Payment & Transaction Analysis**

Banks can use their customers' transaction history to recommend products and services that are relevant to those customers. This leads to improved conversion rates as well as increased levels of client satisfaction. The following information about customers may be analyzed more effectively with the use of banking analytics. Existing clients of a bank can ask to look at their transaction history, which could include information about deposits, withdrawals, or payments. Bankers can help their customers take advantage of good deals on their credit or debit cards or other new financial products by getting to know how they spend their money and encouraging them to do so. Using this information to send timely spending alerts and payment reminders can improve the customer experience. By looking at their clients' transaction histories and looking for patterns, banks can find transactions that might be fraudulent and take steps to stop them. Banking analytics include data-driven methods like digital credit evaluation, improved early-warning systems, next-generation stress testing, and analytics for collecting debt. These techniques are used to protect clients against fraud at financial institutions.

**Credit Risk and Exposure Analysis**

An analysis of a client's credit risk and exposure might shed light on whether or not a customer has a history of defaulting on their payments in the past. These consumers' credit profiles highlight their assets and customer behavioral data, such as past-due bills, loans or borrowings, and earnings, that may be used to calculate each customer's credit score.
**Market and Portfolio Analysis**

It is highly important to do market and portfolio research to recruit new clients and keep the ones you already have. An analytics system could look at a client's current portfolio to suggest new investment options and help the client's portfolio managers maintain a steady return. Again, doing a market study is of the utmost importance when it comes to building a portfolio that will be successful regardless of the state of the economy.

**Liquidity Risk Management**

Every single banking procedure has the potential to become more efficient and streamlined. Using advanced analytics, financial institutions may be able to do things like answer questions from regulators faster and more accurately and give teams more information. It helps with decision-making that is augmented by analytics. The compliance and regulatory standards that banks must meet are quite severe, and this has a big effect on the impairment risk they face. Know-your-customer (KYC) analysis is vital, not only to achieve compliance with legal requirements but also as a method of mitigating risk. Anti-money laundering (AML) analysts can more effectively detect and monitor problematic account holders with the use of these BI technologies.

**Analyzing and Planning Finances**

Finance is at the core of every company, just as it is with every other kind of organization. In the case of banks, this issue is even more important because bank employees are not only responsible for running the day-to-day business of the bank but also for meeting the different financial needs of customers. An analytical system may find the following use cases when it comes to a bank's finances. Banks need to have cash on hand to handle payments well and follow all of the rules set by regulators. By looking at how much they spent in the past, they can make a budget that works for them.
They also take into account certain factors that could make their financial needs go up or down. This could lead to finding a clear set of important success criteria that turn short-term savings into long-term, sustainable improvements and the best way to manage costs. Business intelligence (BI) tools could make financial planning and analysis (FP&A) easier and make it easier to report to key stakeholders in a useful way. By making the necessary reports automatically and regularly, these systems could cut the amount of work needed for financial reporting by a large amount. In addition to this, they help speed up the transmission of information and ensure that decision-makers are kept up-to-date on the state of the bank's finances.

**Management, Marketing, and Production**

There are instances when a new strategy for approaching an established consumer is necessary. Banks need to give their current customers suggestions for new and better products, and this information should be given to these customers at the right time. When you look at the company's current customers, you can see which marketing methods have worked best in the past. You can then use these methods to bring in new customers. Business intelligence systems can be used with transactional and trade analytics to make more complete and richer profiles of customers. This, in turn, can increase the acquisition and retention of consumers as well as cross- and upselling opportunities.

**Customer Portfolio and Segmentation**

This is another significant use of analytics systems in financial services. It is necessary to correctly segment clients to successfully market to them. Consumers who are searching for a home loan or a vehicle loan are an example of one kind of customer segmentation used by financial services organizations. Another example would be customers who are specifically interested in a checking account or a money market account.
Conversion can happen when the customer relationship manager makes an offer or calls the customer about something important to them. In the same way, a new offer may be aimed at a smaller group of people based on their credit scores.

**Churn Prediction and Value Modeling**

Predicting a client's likelihood to churn and estimating their lifetime worth as a client are two areas that have gained major significance for financial institutions like banks and insurance companies in recent years. It takes a massive expenditure to compete with the thousands of businesses that are fighting for consumers' attention and physical space. It is of the utmost importance to make sure that you are not leaving any value on the table after you have successfully onboarded the consumer. The process of mapping the customer journey to observe their behavior helps in understanding any requirements that the customer may have and also assists in up-selling.

**Analysis of a Marketing Campaign**

An analysis of a marketing campaign gives a summary of the different channels that work for a bank and finds the best way to spend money across all of them. The leading banks use information from credit card transaction data (from both their terminals and those of other banks) to develop offers that provide customers with an incentive to make regular purchases from one of the bank's merchants. These offers can be found on the websites of the leading banks.

## III Vulnerabilities, Threats, and Risks in Financial Services

Financial services businesses have produced more data since becoming digital. Every time you make a financial transaction or contact someone, data is created and shared using multiple applications. Cybercriminals can use this information to their advantage. They may sell it, utilize it, or threaten to dump it.
In 2022, financial institutions are most at risk from ransomware, phishing, online application and vulnerability exploitation attacks, denial of service (DoS) attacks, insider threats, nation-state and state-sponsored threat actors, and Advanced Persistent Threat (APT) groups [10]. Fig. 4 shows financial services vulnerabilities, threats, and risks. These trends make financial services organizations more vulnerable to cyberattacks. Thus, firms must understand their current dangers and build effective defenses against them.

### _Vulnerabilities_

Organizations in the financial industry confront security risks from both internal and external sources regularly because they are high-value targets for hackers. Threat actors use banking websites or virtual private networks (VPNs) to get into online banking systems to steal account information, cause trouble, or test how far they can get into a network. Internal threats often come from unhappy employees, weak third-party vendors, and human mistakes caused by phishing emails or other forms of social engineering. Weaknesses in external and internal security let sensitive financial information, client data, and monitoring networks for account balances and transactions get out. This hurts customer confidence and causes business problems [10].

Fig. 4: Vulnerabilities, threats, and risks in financial services

Today's financial services firms need sophisticated cybersecurity solutions that can manage the growing demands of keeping customer and financial data safe, limiting attack risk, and complying with regulatory requirements. Organizations in the financial industry may use Core Security's financial security services to identify and prioritize their most significant vulnerabilities and access concerns and effectively handle mandatory compliance audits.

**Actively Evaluating Cybersecurity**

Unknown weaknesses in cybersecurity and compliance cannot be addressed by financial organizations.
In addition, failure to address these vulnerabilities may have significant repercussions. If left neglected until an incident happens, institutions are compelled to use a reactive response, which may result in business interruptions and eroded client trust. Instead, financial service companies should take a proactive approach. Financial service organizations may conduct an initial evaluation of current vulnerabilities to discuss with a managed service provider (MSP).

**Ransomware Attacks**

There is an exponential increase in the number of potential targets for ransomware assaults as the globe continues to become more digitally linked. The term "ransomware" refers to an attack in which the perpetrators employ malware to get access to your business's systems or data and then keep that data hostage until the firm pays a ransom. The aftermath of these assaults has been utterly catastrophic. In addition to the cost of the ransom, there may be additional expenditures related to damage management, such as legal fees and other expenses. There is also the possibility of losing data.

**Access Vulnerability**

Sensitive data can be left exposed and subject to attack if there are flaws in the different levels of information access. Integration of cybersecurity measures is essential throughout all departments of a business and at each level of access. Criminals online will attempt to take advantage of whatever vulnerabilities they may find, regardless of the organizational hierarchy of the company they are targeting.

**Managing Compliance**

The advancement of information technology has made the financial services industry's job more difficult in terms of complying with regulations. The financial services industry in the United States is one of the most heavily regulated corporate sectors in the world. On the other hand, merely complying with the rules may not be enough anymore.
Instead, aggressively managing compliance risk and increasing compliance overall is essential for gaining the trust of customers and avoiding expensive fines.

**Operational Stability**

A backup and disaster recovery solution that is proactive and dynamic is necessary for avoiding disruptions to corporate operations and the loss of crucial data, either of which might result in a compliance violation. It is common for off-the-shelf onsite backup systems to be unable to provide the degree of performance necessary to satisfy the requirements of finance and investment businesses. It is essential to come up with a solution in advance of an outage to guarantee a speedy recovery and reduce the amount of time customers are without service.

**Insider Weaknesses**

Insider vulnerabilities are a source of concern in the banking and financial industries when it comes to cybersecurity. They occur when people who work inside a bank or other financial institution do something that puts the company at risk of being attacked. The 2019 IBM X-Force Intelligence Index [11] found that phishing emails were used in nearly a third (29%) of the attacks that were looked at. Whaling attacks, which are often called "corporate email compromise scams," were to blame for 45% of these problems. In these incidents, hackers try to break into the email accounts of important people in the organization, like the CEO, to get the company to reveal private information. Another common problem is that systems and servers are set up incorrectly.

**Technological Bottlenecks**

Banking and finance websites and apps make the architecture of the network as a whole more vulnerable. Researchers discovered that banking and finance systems were more likely to be hacked. Cross-site scripting (XSS) attacks [11], which allow attackers to run malicious code on a website or app, could be carried out against 80% of the systems tested.
A malicious script could then alter the site's content by accessing users' cookies and other sensitive information. Users are less likely to trust websites and applications with vulnerabilities like these, so businesses that want to stay competitive should examine what steps they can take to protect their websites and apps. **Responsibility Gap** Even though the global financial system is becoming more dependent on digital infrastructure, it is not clear who is responsible for protecting it from cyberattacks, due in part to the rapid pace at which the environment is changing. Without coordinated action, the global financial system's exposure will continue to worsen as innovation, competition, and the pandemic accelerate the digital revolution and raise the stakes. Although many threat actors are motivated by money, the number of purely disruptive and destructive attacks has been increasing; moreover, attackers who learn how to steal also learn about the financial system's networks and operations, enabling more obstructive or destructive future attacks [12]. Even though the system is generally well developed and well regulated, this sudden change, like the risks it poses, strains its ability to respond.

### _Threats_

The worst incidents have targeted financial information, particularly accounts, calculations, and transactions. Such attacks, which may undermine trust, now have some technological solutions. VMware reported a 238% increase in cyberattacks on financial institutions in the first half of 2020 [13], and IBM and the Ponemon Institute estimated the cost of a financial data breach at $5.72 million in 2021 [13]. The problem is global: the increasing frequency of attacks on targets of opportunity in low- and lower-middle-income countries receives less coverage than cyberattacks in high-income countries. Financial inclusion has been the biggest driver of digital banking services such as mobile payment systems.
Digital banking services expand financial inclusion but also give hackers more targets. **Phishing** Phishing is social engineering that deceives individuals into sharing their login credentials for a private network. Email phishing, in which victims receive official-looking emails, is the most common form; opening a phishing email's malicious links or attachments can install malware or launch a bogus website that steals login credentials. In the first half of 2021, bank phishing attacks rose 22%, while attacks on financial apps increased by 38% over the same period. Akamai's 2019 State of the Internet report found that over 50% of phishing attempts targeted financial services [13]. Phishing tactics keep evolving to exploit current anxieties, and these troubling trends rank phishing among the banking sector's top cybersecurity dangers. **Ransomware** Ransomware also threatens financial institutions. Ransomware encrypts computers, locking victims out until a ransom is paid [13]. These extortion methods work particularly well against financial institutions because strict rules require them to be resilient to cyberattacks and data breaches. Ransomware attacks now double as data breaches, which can trigger regulatory compliance obligations, and ransomware gangs target financial businesses precisely because of the customer data they hold. Due to the danger of data exposure on the dark web and the resulting reputational damage, many financial services companies accede to extortion demands. **DDoS Attacks** 2020 witnessed the highest number of DDoS attacks on financial institutions to date. DDoS attacks are a prevalent threat to financial services because they can target consumer accounts, payment gateways, and banks' IT systems, which amplifies their impact on financial firms. Cybercriminals can then exploit the ensuing confusion. Password login attacks and DoS attacks were the two main online dangers to payment systems in 2020.
Compared with the same period in 2020, multi-vector DDoS attacks increased by 80% in 2021 [13]; these attacks combine several campaigns at once to overwhelm security personnel. **SQL Injections** In March 2021, a vulnerability was found in a WordPress plugin that enabled time-based blind SQL injection (SQLi) [14], potentially affecting 600,000 customers. The flaw made it possible for any site visitor to extract sensitive data from the website's database: because the SQL query was executed inside the function for the "pages" page, any visitor, even one without a login, could trigger the query's execution, and a hostile actor could supply harmful values for either the ID or type parameter. **Local File Inclusion** In August 2021, a Local File Inclusion (LFI) vulnerability was discovered in a version of the BIQS software [13] used by driving schools for billing customers. In versions of BIQS IT Biqs-drive older than v1.83, sending a certain payload to download/index.php triggers the LFI, which can be exploited to take control of the affected system: the attacker can access arbitrary files stored on the server with the permissions of the web-user configuration. **Cross-Site Scripting** On April 28, 2021, Trend Micro revealed the details of cross-site scripting (XSS) attacks on e-commerce websites, and JPCERT/CC confirmed XSS incidents on websites built with EC-CUBE (an open-source CMS for e-commerce websites). The campaign, still ongoing as of July 1, 2021, targets any e-commerce website with an XSS vulnerability on its administrator page: attackers insert malicious scripts into order forms on targeted sites by making purchases.
The XSS attack on the administrator's page steals credentials and installs a simple WebShell on the website. The attackers then use the WebShell and JavaScript on the site to harvest and store user data, and by monitoring the WebShell they can retrieve the stolen data. During the attack, the attackers also embed Adminer [15], a GUI-based database administration tool that supports MySQL, PostgreSQL, SQLite, MS SQL, Oracle, SimpleDB, Elasticsearch, and MongoDB, on the e-commerce website; they presumably used it to access database contents. **OGNL Java Injections** In August 2021, OGNL injection flaws allowed hostile actors to inject code into Atlassian Confluence servers [16]. These vulnerabilities allow unauthenticated users to execute arbitrary code on Confluence Server or Data Center instances; earlier versions of both products are affected, the vulnerability is being actively exploited in the wild, and unauthenticated users can exploit it regardless of configuration. **Unencrypted Data** When data is left unencrypted [17], fraudsters or hackers can modify it directly, causing major problems for banks. Data held by online and traditional financial institutions alike must be encrypted so that stolen data is useless to attackers. **Spoofing** Spoofing is one of the most recent cyber threats that businesses in the financial sector need to prepare for. Hackers impersonate the URL of a bank's website with a connected site that looks and operates like the real one [17]; when a customer enters login credentials on the fraudulent site, the hackers capture those credentials and reuse them later. **Data Manipulation** A common misconception about cyberattacks is that attackers are only interested in stealing data. That isn't always the case: hackers are increasingly mounting data manipulation attacks.
Cybercriminals are constantly developing new methods of attack. Data manipulation attacks occur when a bad actor gains access to a trusted system and then changes data undetected for their own benefit [17]; for example, an employee might alter customer information. Because the transactions appear legitimate, the manipulation is likely to go unnoticed, causing future data to be stored incorrectly, and the longer it remains undiscovered, the more damage it does.

### _Risks_

The financial sector is increasingly exposed to "cyber risk": the risk of financial loss arising from its dependence on computers and digital technology. Cyber-related events, especially cyberattacks, consistently top surveys of threats to financial stability in the United States and worldwide. Like other financial vulnerabilities, cyber risk raises macroprudential concerns. As with other financial problems, much technical attention has been paid to cyber resilience, but it is still very early to measure the effects cyber risk might have on the financial system. Resilience against cyberattacks requires understanding the problems that amplify the cyber risks the financial sector faces; because these problems are interlinked, it is important to address them together. **Digital Innovation** Financial institutions (FIs) are adopting new technologies such as cloud computing, artificial intelligence, and digital service delivery. Most FIs are improving their data processing, fraud detection, and financial analytics with cloud-hosted software. Meanwhile, the COVID-19 epidemic accelerated the transfer of the industry's IT infrastructure (digital transformation), resulting in the proliferation of virtual banks and digital financial services.
Because of digital transformation, businesses now run an ever-growing number of new applications, devices, and infrastructure components, all of which expand the attack surface. Together, these issues drive a surge in cybersecurity threats for financial institutions. Yet even as new technologies in the financial sector reshape industrial risk management, they can also strengthen it by improving cybersecurity and compliance controls. **Complicated Statutes and Rules** As financial institutions use more technology and data to serve their customers, regulations must change to keep up. State, federal, and international authorities have established many new restrictions in reaction to the growth in cyberattacks on financial services organizations, and data protection, privacy, and cybersecurity legislation for FIs is tightening in various nations. Compliance may be time-consuming and expensive, but it is in everyone's interest. According to BITS, the technology policy division of the Bank Policy Institute, chief information security officers spend 40% of their time addressing regulatory agency requirements [18]. The complexity of the regulatory environment has brought tighter enforcement, raising regulatory costs and penalties. In August 2020, the US government fined Capital One $80 million for failing to identify and manage cyber risk, failures that led to a massive data breach in 2019 [19]. In late December 2021, Capital One also resolved a class-action lawsuit, for $190 million, over the 2019 intrusion into its Amazon Web Services cloud network that exposed the data of 100 million customers [20]. **Complex Supply Chain Ecosystem** Most financial firms outsource some of their digital operations. Third-party service providers may be vulnerable even when an FI's internal security is strong: threat actors target software businesses and deliver malware to supply-chain customers through legitimate downloads and upgrades.
Through these attacks on software distribution platforms, threat actors gained backdoor access to client networks. A recent example is the SolarWinds breach [21], a supply-chain attack in which attackers infiltrated SolarWinds' network and planted malware in its management software in order to reach thousands of banks and government entities. The breach shows how susceptible the financial services sector is to cyberattacks and disruption, since it depends on third-party suppliers and service providers subject to little or no cybersecurity oversight. As the government prioritizes business continuity and operational resilience, third-party cybersecurity vulnerabilities will only grow in importance. **Hybrid Workplace** COVID-19 has accelerated recent changes in the way people work, such as the hybrid workspace, which combines employees in the office with those working from home, and this increases the risk that businesses face. As the pandemic enters its third year, more and more people rely on remote work, hybrid workforces, and cloud-hosted software. Businesses had no choice but to quickly adopt technologies that gave them remote access, better communication, and more ways to collaborate. As a result, hybrid working arrangements make IT systems more complicated, enlarge the attack surface, and create new cyber risks and threats. **Enabling Technologies** By some estimates, the pandemic accelerated the transition to digital technology by as much as three years. Enabling technologies, such as application programming interfaces, big data analytics, artificial intelligence, biometrics, cloud computing (particularly outsourcing to the cloud), and distributed ledger (blockchain) technology, make digital transformation feasible.
Companies and their boards of directors need to ensure that new technologies are adopted safely, so that the benefits can be realized and the risks of experimentation managed proactively; this helps businesses get the most out of their innovation while reducing the risks that come with it. **Data Governance** The importance of a solid data governance strategy will only grow in the coming years. Companies need to recognize data as a key strategic asset before they can devise a company-wide plan for collecting, managing, storing, protecting, retrieving, and destroying it; in other words, each organization should develop a data governance strategy tailored to itself. Effective data governance brings many benefits, such as better visibility into risks in a hybrid work environment, the ability to meet the recently agreed requirements for climate-risk reporting, and easier record-keeping. **Operational Resilience** Cybersecurity is a major problem for businesses in the financial industry. In a September 2021 Conference of State Bank Supervisors (CSBS) survey, more than 80% of bankers rated cybersecurity risk as "very significant," making it the top internal risk [22]: more than twice any other operational risk category and up from the 60% recorded the year before. Many factors contribute to this concern. Cybersecurity worries touch both how a company operates and its reputation: if a financial institution is hit by a cyberattack, its ability to do business could be impaired or halted entirely (operational risk), and consumers may lose faith in the company and take their business elsewhere (reputational risk).
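The data-manipulation and unencrypted-data threats described in the Threats subsection both come down to records that carry no integrity protection. As a minimal illustration (not the practice of any institution cited here), a keyed MAC attached to each stored record makes silent edits detectable. The Python sketch below uses only the standard library; the key handling and field names are hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical key; in practice this would live in a secrets manager.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def sign_record(record: dict) -> str:
    """Return a hex MAC over a canonical serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, mac: str) -> bool:
    """Recompute the MAC and compare in constant time to detect tampering."""
    return hmac.compare_digest(sign_record(record), mac)

# A record is signed when written...
row = {"account": "12345", "balance": 1000}
tag = sign_record(row)
assert verify_record(row, tag)

# ...so a later unauthorized edit no longer verifies.
row["balance"] = 999999
assert not verify_record(row, tag)
```

This only detects manipulation; preventing it still requires the access controls and encryption discussed above.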
## IV Blockchain for financial services and banking industries

Blockchain can simplify banking and lending by reducing counterparty risk and the time needed to issue and settle funds. Authenticated documentation and KYC/AML data reduce operational risk and enable real-time verification of financial documents. A blockchain lets people transmit money securely without a bank or financial services provider [23, 24]. In the financial services business, blockchain is known as "distributed ledger technology" [25]. Table I shows existing methods using blockchain in financial services. Since all transactions on a blockchain are saved in a shared database, blockchain could make banking more open; this transparency could help detect and fix problems such as fraud, lowering risk for financial institutions [26]. Fig. 5 shows the usage of blockchain in several financial-sector scenarios.

### _Addressable Challenges in Financial Services_

Financial services often face problems such as missed goals, slow fundraising, and mounting losses, frequently caused by inadequate management. The following are challenges in the financial technology industry that blockchain technology has the potential to solve [40, 41]: **Centralized System** Even though financial services solutions made things seem easier, real power remained in the hands of third parties [42, 43]. Higher-ups are still the only ones who can approve transactions, so users must wait for confirmation before proceeding. This was the first problem in the financial services industry that the introduction of blockchain technology could address. **Trust Issues** When consumers take any action inside a financial services app, they cannot see what is occurring on the other side of the transaction [44].
This creates a great deal of uncertainty and heightens the fear of identity theft, ultimately eroding faith in the process. Because the blockchain is open and immutable, blockchain application development services solve this problem in financial technology. **Less Efficient Methods** Another reason the financial technology industry requires blockchain is that the presence of many different third parties often delays procedures, which ultimately leads to poorer customer satisfaction and greater volatility in a commercial sector that generates a great deal of data [45]. **Higher Operational Cost** In the financial technology industry, time is money, and blockchain has again shown itself to be one of the financial services innovations with the potential to cut costs by almost half: it reduces dependence on many intermediaries, makes the process public, and shortens the time required.

### _Blockchain in Financial Services_

When discussing how blockchain technology has changed the financial technology industry, it is best to focus on the most important parts of the economy in order to understand and analyze the changes. The following is a list of blockchain use cases for financial services [46]. **P2P Payments** Bank clearing and settlement suffer from costly bureaucracy and unclear accountability; these concerns are present in most financial arrangements. Old, hierarchical financial systems produce these gaps, and decentralized consensus methods can close them more quickly. Blockchain technology thus helps financial services: decentralized ledger technology can bring mobile banking to those without bank accounts, and a blockchain mobile app development business can simplify cross-checking data between the companies involved in an international payment transaction.
Blockchain technology allows several checks to run at once. **Financial Trading** Documents are still being sent or faxed around the world to confirm information needed for trade financing. Buying stocks or shares still requires the complicated, time-consuming steps of brokerage, exchanges, clearing, and settlement. Settlement takes three days on average, and longer over a weekend, because every trader must keep databases of all transaction documents and routinely reconcile them against each other for accuracy. Applying blockchain technology to financial services in this sector could free traders from time-consuming checks on counterparties while improving the efficiency of the whole trade lifecycle [45, 47]; this not only speeds up settlement but also makes transactions more accurate and reduces risk [48]. **Crypto Lending** Thanks to crypto lending, the financial world now has a new, easy, and transparent way to lend money. Lenders give borrowers the assets needed for a loan at a rate of interest agreed in advance, and borrowers can pledge their crypto assets as collateral for a loan denominated in fiat currency or stablecoins. The reverse also holds: when borrowers need to borrow crypto assets, they sometimes use stablecoins or traditional cash as collateral. **Regulatory Compliance** Regulatory compliance is another of the most consequential uses of blockchain in the financial sector.

Fig. 5: Usage of blockchain in different scenarios of financial services. [8]
Since it is expected that the global need for regulatory services will expand in the next few years, financial services businesses are integrating blockchain technology to improve regulatory compliance [49]. They expect this technology to record the actions of all parties involved in every verified transaction, eliminating the need for regulators to verify the records' veracity. Technology is also allowing scholars to return to the original documents rather than relying on the many copies that have been produced. Errors are less likely to occur, the integrity of records for financial reporting and audits is being preserved, and the time and resources spent on auditing and accounting are being drastically reduced thanks to the blockchain's promise of immutability [50, 51]. **Digital Identity** The number of accounts that have been made with fake information keeps going up. Even though banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. 
Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. 
A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. 
The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. 
On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. On this front, blockchain may also aid financial users in the following ways: 1) managing personal information; 2) communicating personal information to other parties while minimizing security concerns; and 3) digitally signing legal documents, such as claims and transactions [52, 53]. **Auditing** It is a procedure that checks the finances and brings to light any discrepancies that may exist. The procedure banks do have stringent Know Your Customer and Anti-Money Laundering inspections, these measures are not failsafe. Customers are much less likely to be hacked because there isn't a clear set of documents, they need to provide to prove who they are. A digital identification system may benefit from using blockchain technology. The customers only need to go through the validation process once, and then they may use their credentials to conduct transactions in any part of the world. 
The auditing procedure is not only difficult to understand, but it also moves quite slowly. Blockchain technology, on the other hand, makes the procedure simpler. Because of this technology, you can ask the blockchain application development firm with whom you are paired to add the record straight to the ledger, making it possible to see and update data in a time-efficient manner [54]. **New Crowdfunding Models** The concept of "crowdfunding" refers to a method of supporting a project by soliciting contributions from a large number of individuals, often via the internet. ICOs, IEOs, and other mechanisms may make the fundraising process using blockchain technology more open and efficient, as opposed to more traditional methods of financing. However, it is highly recommended to have a clear understanding of what financial services organizations that use blockchain technology are doing with it. ### _Benefits of Blockchain in Finance_ Blockchain technology has made it possible to create inclusive, open, and safe corporate networks. These networks make it possible to issue digital security credentials in a shorter amount of time, at lower unit prices, and with a higher degree of customization [55, 56]. 
Over the past few years, the use of blockchain technology in the financial sector has grown, which has shown the following benefits [57, 58, 59]. **Transparency** Protocols, standards that everyone agrees on, and common procedures are all used in blockchain technology. Together, these serve as a single shared source of truth for all members of the network. It makes the data more reliable and improves the customer experience by making processing faster. **Security** In the financial sector, blockchain technology has made it possible to use secure application code that is designed to be impossible for hostile or third-party actors to change [60]. It makes it practically impossible to modify or hack the system. **Trust** The immutable and transparent ledger makes it easier for the different parties in a business network to work together on data management and get along. The blockchain is a distributed ledger technology that enables the safe recording, management, storage, and transmission of transactions across a wide variety of industries. **Privacy** When blockchain technology is used in the financial industry, it protects data privacy at all levels of the software stack in a way that is unmatched in the industry. This makes it possible for businesses to share data selectively within their networks. This increases confidence and openness while preserving anonymity and privacy at the same time. **Programmability** It makes it possible to design and run smart contracts, which are pieces of software that automate business logic and cannot be changed. This makes them easier to program, more efficient, and more trustworthy. **Scalability and High Performance** In the financial sector, blockchain technology is used through hybrid and private networks that were built to handle hundreds of transactions per second. It provides enterprises with significant resilience and worldwide reach by fully supporting interoperability across the public and private sectors. 
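The transparency and immutability benefits above rest on hash-linking: each block commits to the hash of its predecessor, so altering any past record invalidates every later link. The sketch below is a minimal illustration of that idea only (the block layout and storage are assumptions for illustration, not a production ledger design):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})
    return chain

def verify_chain(chain):
    """A single altered transaction breaks every later hash link."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True
```

Verifying the chain after tampering with any earlier block returns `False`, which is the mechanism behind the "practically impossible to modify" claim.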
## V Blockchain-Enabled Federated Learning for Financial Services FL built around a central server has reached the limits of what it can achieve, and at the same time new risks are emerging. Chief among these are centralized processing, data fabrication, and the lack of incentives. Businesses and academics are paying a lot of attention to blockchain-enabled distributed ledger technology in the hopes that it will speed up the adoption of FL. Many creative solutions have been proposed to meet the needs of a wide range of increasingly complicated situations. Blockchain-enabled FL offers both ideas and approaches for increasing the functionality of FL from a variety of points of view. ### _Federated Learning_ Federated learning (FL) is an exciting new decentralized deep learning technique that lets users update their models together without having to share their data. FL is changing how mathematical modeling and analysis are done in the business world. This makes it possible for more and more industries to build distributed machine learning models that protect privacy and are safe. Nonetheless, the properties that are intrinsic to FL have resulted in several issues, including those concerning the security of personal information, the cost of communication, the heterogeneity of the systems, and the unreliability of model upload during real-world operation. It is interesting to think about how adding blockchain technology could improve FL's security and performance and increase the number of things that can be done with it. FL is a strategy for training artificial intelligence systems using data that is kept confidential. It lets centralized AI systems learn from data, which is often personal, without the data's actual content being shared or made public. Instead, only the lessons that can be gleaned from the data's structure are used. 
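As a concrete illustration of the FL idea described above — clients train locally and share only model parameters, never raw data — here is a minimal FedAvg-style sketch for a linear model. This is a toy illustration of the standard federated averaging recipe, not the design of any specific system surveyed here:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local training (linear least squares, gradient descent).
    The raw data (X, y) never leaves the client; only weights are returned."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

Each round, the server broadcasts the global weights, every client runs `local_update` on its private data, and `fed_avg` combines the returned weights; only model parameters cross the network.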
### _Blockchain-enabled FL for Financial Services_ The conventional architecture of federated learning is centralized, but a model that depends on a trustworthy central server has several security holes. If the central server is attacked, the whole federated learning process could be ruined: the global model update would not be able to happen, training results might not be accurate, and the federation would be vulnerable to threats while collecting and updating model parameters. Joint sharing modeling of multi-party data is made possible by the use of blockchain technology, which is integrated with federated learning to meet specific data-sharing requirements. Table II provides the existing blockchain-enabled FL methodologies designed and developed by several researchers. These methods can be used in financial services for improving service quality. Because the blockchain has a strong way to store certificates, it is possible to provide controllable data traceability. For federated learning, both making data visible and making it available are possible. As a result, the functions that they perform are somewhat complementary to one another. If they can be connected, it will not only be possible to improve the way data is shared, but it will also be possible to keep data modeling private [61]. **Data Protection** Feng and Chen [62] proposed a blockchain-enabled FL data-preserving strategy, which can be used by a financial business. Each business node that joins the consortium chain must pass a certain qualification test. Business nodes that take part in federated learning within the consortium chain must also maintain a reputation value above a certain threshold; only nodes with this minimum level of reputation can be chosen to take part. They divided federated learning nodes into monitoring and training groups. The training node group performs local model training rounds. 
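The reputation-gated grouping just described might be sketched as follows (the function name and the exact selection rule are illustrative assumptions based on the description above, not the authors' implementation):

```python
def select_round_nodes(reputations, threshold, m):
    """Admit only nodes whose reputation clears the threshold, then make
    the top-m nodes the monitoring group and the rest the training group."""
    eligible = {node: rep for node, rep in reputations.items() if rep >= threshold}
    ranked = sorted(eligible, key=eligible.get, reverse=True)
    return ranked[:m], ranked[m:]  # (monitoring group, training group)
```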
The monitoring node group determines the training node reputation and global parameters. The top m reputation-valued nodes in this cycle of federated learning are the monitoring nodes. This is done to make it even less likely that malicious nodes will be included in the process of learning. **Data Traceability** Federated computing ensures that the original data never leaves the local node, that only the gradient information of the model update needs to be uploaded, and that private data is protected while taking advantage of the blockchain's immutability, strong certificate storage function, and controllable traceability. Based on this concept, it is possible to train complicated joint models. Therefore, blockchain and federated learning serve a similar purpose but in different ways. When used in tandem, they may not only make data exchange a reality but also keep personal information secure. **Profit Sharing** Because nodes in a community can use the model whenever they want without making any changes, a strong incentive is needed to get nodes to train models. Li et al. [66] suggested that a system called "profit sharing by contribution" be put in place as a way to solve this problem. Following the aggregation of each round, the managers give incentives to the appropriate nodes based on the scores of the updates that they have provided. As a result, giving updates regularly may bring more benefits, and the constantly updated global model may encourage other nodes to join the network. It is important to research this incentive mechanism since it can easily be adapted to a variety of applications in the real world and has great scalability. **Storage Optimization** When it comes to applications that take place in the real world, storage overhead is a critical component that helps determine the hardware requirements for the training devices. Historical models and updates may help with recovery after a disaster, but they take up a huge amount of space. Here, Li et al. 
[66] gave a simple and workable plan for reducing the storage overhead: nodes that don't have enough space can get rid of old blocks locally, and they only need to keep the most recent model and updates for the current round. In this way, the problem of not having enough storage space can be solved, and the core nodes can keep their ability to recover and verify data. Having said that, it is also easy to see where this approach falls short. With each transaction that is deleted, the legitimacy of the blockchain suffers. It is possible that each node will not adopt this technique due to the high level of mutual mistrust that exists among this group. So, the best thing to do might be to store the data with a reputable and trustworthy third party. The only information that the blockchain stores is the network addresses of the locations of each model or updated file, as well as records of any modifications that were made to those files. For other nodes to access the model and its changes, the centralized store must interface with those nodes. This central storage will be in charge of making backups in case of a disaster and offering services for storing files in different places. **Safeguarding Payment Networks** One of the best things about blockchain technology is that it can be used as a payment network that is not limited by national borders. As a decentralized answer to the problem of making payments without friction and with low transaction costs, many different blockchain protocols have been made. Centralized financial institutions are known for their high fees and painfully slow processing times, which led to the creation of these alternatives. Even though this use of blockchain technology has a lot of potential, it is still hard to use it widely because of security concerns [73]. Theft and fraud are all too frequent in blockchain transactions since all that is required to complete them is a set of public and private keys. 
On the other hand, advanced machine learning can easily find unusual account activity, which then calls for human help. Both the companies that supply financial services and the people who use them are protected by this extra security measure [74]. **Effective Financial Services** Financial institutions that use FL and blockchain technologies often have the goal of increasing both the speed at which they provide their services and the quality of those services. In the same way that any other company would, these establishments have the incentive to find ways to save expenses and, as a result, generate greater value. In a survey that shows this trend, Deloitte found [75] that 57% of firms see cost savings as the main benefit of joining consortium blockchain networks. Those that supply financial services may deliver more value to their consumers while also optimizing the returns on their investments if they use these two technologies to drive business operations. **Controlled Finance Automation** The trend toward more automation cannot be denied, but if it is allowed to continue unchecked, it may result in undesirable consequences. Businesses will always lose control of their operations over time if there are no limits on how they can use automated processes. As a result of this, the tasks of automation need to be carried out in conjunction with built-in checks and balances. Using FL and smart contracts backed by blockchain technology could help make this dynamic happen. Smart contracts make it possible to automate procedures, while machine learning can look for problems and only call for human help when it's necessary. Because of this very important infrastructure, all financial transactions would be completely safe, completely open, and completely efficient. ## VI Future Scopes Financial companies have always been concerned about cybersecurity breaches, but cyber-attacks now pose considerably greater hazards to their operations and reputations. 
They must know the biggest threats first. This will help them prioritize cybersecurity activities and maximize ROI, and it will also aid cybersecurity strategy development. Ransomware, phishing, web application and vulnerability exploitation attacks, denial of service (DoS) attacks [76], insider threats [77], nation-state attacks, and state-sponsored threats will be the biggest dangers to financial institutions in 2022. Financial institutions must upgrade their security to protect against ever-changing threats. Despite spending millions and adding several levels of protection to their infrastructure, they still don't know how to employ security measures. To remain ahead of threat actors, they must often test their IT security infrastructure against real-world threats. IT and security executives must verify security measures to assess firms' cybersecurity posture and cyber resilience and show that they prevent intrusions. **Effectiveness of Transmission** The storage and synchronization of the blockchain need a significant amount of hardware resources, including not only the space on the hard drive but also the bandwidth on the network. Therefore, the best way to lessen the amount of transmission that is needed while still preserving the reliability of model training is a subject deserving of further investigation. **The Scene in Public** The authentication processes are handled by the alliance blockchain system, although this does have the side effect of making it more difficult to join the training community. How to create a public community using a proof-of-work system remains an open question. Another interesting problem is finding a way to work together while protecting against attacks from hostile nodes in financial services. **Lightweight Training** A lot of the Internet of Things (IoT) devices that financial clients use don't have enough hardware features to train a deep neural network well. 
Because of this, the question of how to make training models easier (for example, by using edge servers) while still protecting users' privacy is an important one that deserves more research. **Security Control Validation** Even though it's good to see financial institutions work toward high levels of cyber maturity, we strongly suggest doing an objective evaluation of these assumptions and maturity levels and fixing any gaps between what was expected and what was found in an assessment. Banks and other financial institutions, as well as other businesses, must constantly test the efficacy of their security policies against real-world attacks to stay one step ahead of threat actors. The term "security control validation" refers to an approach that is centered on potential threats and that enables businesses to evaluate and analyze their cybersecurity posture and their overall cyber resilience. 
It also checks to see if the security controls are working well enough to stop cyberattacks. **Automation** As long as AI keeps getting good investments, FL seems to be in a great position to grow a lot. Because of the adaptability of the technology, it is expected that it will make its way into a rising number of different businesses across a wide variety of use cases. The financial services industry, in particular, is in a great position to get a lot of value out of the combination of blockchain and FL technologies. These technologies work together to make big changes in the financial sector by making it safer, making it run better, and giving people more control over automation. Fig. 6 shows the potential outcomes of using blockchain technology in the financial sector. **Next-Generation Marketing** However, to make use of this potential, it is essential to keep in mind why blockchain technology was developed in the first place. Also, for FL systems to keep being useful, they need to be fed with good enough data. Although many questions remain unanswered, these technologies are not going away, either together or separately. In a financial market that is always changing, blockchain and FL integrations might be the next drivers of disruptive change across the finance sector. ## VII Conclusions FL, a distributed AI technique, is attracting attention for its privacy-enhancing and scalable financial services and applications. We conducted a state-of-the-art assessment and comprehensive discussion based on recent research to examine how blockchain and FL may improve financial services. This study was motivated by the lack of a comprehensive FL and blockchain survey in financial services. We first discussed FL and blockchain technologies and their combination to bridge this gap. 
Then, we extended our study by discussing how FL and blockchain can be used in financial services to protect financial data, make decentralized storage more efficient, keep payment networks safe, automate tasks, and preserve privacy and security. Lastly, we discussed some open research problems and outlined directions for future work that will bring more attention to this new field and encourage further research toward realizing FL and blockchain.
2301.05499
CLIP the Gap: A Single Domain Generalization Approach for Object Detection
Single Domain Generalization (SDG) tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain. While this has been well studied for image classification, the literature on SDG object detection remains almost non-existent. To address the challenges of simultaneously learning robust object localization and representation, we propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts. We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss. Our experiments evidence the benefits of our approach, outperforming by 10% the only existing SDG object detection method, Single-DGOD [49], on their own diverse weather-driving benchmark.
Vidit Vidit, Martin Engilberge, Mathieu Salzmann
2023-01-13T12:01:18Z
http://arxiv.org/abs/2301.05499v2
# CLIP the Gap: A Single Domain Generalization Approach for Object Detection ###### Abstract Single Domain Generalization (SDG) tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain. While this has been well studied for image classification, the literature on SDG object detection remains almost non-existent. To address the challenges of simultaneously learning robust object localization and representation, we propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts. We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss. Our experiments evidence the benefits of our approach, outperforming by 10% the only existing SDG object detection method, Single-DGOD [49], on their own diverse weather-driving benchmark. ## 1 Introduction As for most machine learning models, the performance of object detectors degrades when the test data distribution deviates from the training data one. Domain adaptation techniques [3, 5, 8, 30, 41, 43] try to alleviate this problem by learning domain invariant features between a source and a known target domain. In practice, however, it is not always possible to obtain target data, even unlabeled, precluding the use of such techniques. Domain generalization tackles this by seeking to learn representations that generalize to any target domain. While early approaches [1, 28, 26, 28, 47, 57] focused on the scenario where multiple source domains are available during training, many recent methods tackle the more challenging, yet more realistic, case of Single Domain Generalization (SDG), aiming to learn to generalize from a single source dataset. While this has been well studied for image classification [13, 35, 45, 48, 56], it remains a nascent topic in object detection. 
To the best of our knowledge, a single existing approach, Single-DGOD [49], uses disentanglement and self-distillation [22] to learn domain-invariant features. In this paper, we introduce a fundamentally different approach to SDG for object detection. To this end, we build on two observations: (i) Unsupervised/self-supervised pre-training facilitates the transfer of a model to new tasks [2, 4, 18]; (ii) Exploiting language supervision to train vision models allows them to generalize more easily to new categories and concepts [36, 9]. Inspired by this, we therefore propose to leverage a self-supervised vision-language model, CLIP [36], to guide the training of an object detector so that it generalizes to unseen target domains. Since the visual CLIP representation has been jointly learned with the textual one, we transfer text-based domain variations to the image representation during training, thus increasing the diversity of the source data. Specifically, we define textual prompts describing potential target domain concepts, such as weather and daytime variations for road scene understanding, and use these prompts to perform semantic augmentations of the images. These augmentations, however, are done in feature space, not in image space, which is facilitated by the joint image-text CLIP latent space. Figure 1: **Semantic Augmentation:** We compare the PCA projections of CLIP [36] image embeddings obtained in two different manners: (Top) The embeddings were directly obtained from the real images from 5 domains corresponding to different weather conditions. (Bottom) The embeddings were obtained from the _day_ images only and modified with our semantic augmentation strategy based on text prompts to reflect the other 4 domains. Note that the relative positions of the clusters in the bottom plot resemble those of the top one, showing that our augmentations let us generalize to different target domains. The principal components used are the same for both figures. 
This is illustrated in Fig. 1, which shows that, even though we did not use any target data for semantic augmentation, the resulting augmented embeddings reflect the distributions of the true image embeddings from different target domains. We show the effectiveness of our method on the SDG driving dataset of [49], which reflects a practical scenario where the training (source) images were captured on a clear day whereas the test (target) ones were acquired in rainy, foggy, night, and dusk conditions. Our experiments demonstrate the benefits of our approach over the Single-DGOD [49] one. To summarize our contributions, we employ a vision-language model to improve the generalizability of an object detector; during training, we introduce domain concepts via text-prompts to augment the diversity of the learned image features and make them more robust to an unseen target domain. This enables us to achieve state-of-the-art results on the diverse weather SDG driving benchmark of [49]. ## 2 Related Work Domain Adaptation for Object Detection. Domain adaptation methods seek to align the source domain distribution to a particular target domain. To bridge the global and instance-level domain gaps, [3, 5, 43, 41] learn feature alignment via adversarial training [15]; [58] and [46] utilize category-level centroids and attention maps, respectively, to better align instances in the two domains; [30, 8] generate pseudo-labels in the target domain and use them for target-aware training. Domain adaptation, however, assumes that images from the target domain are available during training. In contrast, domain generalization aims to learn models that generalize to domains that were not seen at all during training. Below, we focus on the domain generalization methods that, like us, use a single source domain to do so. 
Single Domain Generalization (SDG).Several image classification works [13, 45, 48, 56, 35] have proposed strategies to improve the performance on _unseen_ domains while training on a single source domain. In particular, [45, 48, 35] introduce data augmentation strategies where diverse input images are generated via adversarial training; [13, 56] propose normalization techniques to adapt the feature distribution to unseen domains. While SDG has been reasonably well studied for image classification, the case of object detection remains largely unexplored, and poses additional challenges related to the need to further localize the objects of interest. This was recently tackled by Single-DGOD [49] with an approach relying on learning domain-specific and domain-invariant features. Specifically, this was achieved by exploiting contrastive learning to disentangle the features and self-distillation [22] to further improve the network's generalizability. Here, we introduce a fundamentally different approach that leverages the CLIP [36] pre-trained model and semantically augments the data using textual prompts. As will be shown by our results, our method outperforms the state-of-the-art Single-DGOD [49]. Vision-Language Models.Jointly learning a representation of images and text has been studied in many works [12, 14, 24, 27, 36, 55, 9, 24]. They use image-text pairs to train visual-semantic embeddings which can be used not only for image classification, captioning or retrieval but also for zero-shot prediction on unseen labels. VirTex [9] relies on image-caption-based pre-training to learn a rich visual embedding from a small amount of data. CLIP [36] proposes a scalable contrastive pre-training method for joint text and image feature learning. CLIP leverages a corpus of 400 million image-text pairs and a large language model [37] to learn a joint embedding space, which was shown to have superior zero-shot learning ability on classification tasks. 
The image-text-based training is also useful for Open Vocabulary Detection (OVD) [53], where the objects are detected using arbitrary textual descriptions. To address this task, [53] train their own visual-semantic representation, whereas [39, 16] employ CLIP embeddings. Recently, [29, 54] introduced a phrase-grounding-based pre-training for better OVD and zero-shot object detection. In contrast to these works, whose objective is to generalize to novel _categories or objects_, we seek to generalize to new _domains_ depicting the same object categories as the source one. ## 3 Method Let us now introduce our approach to exploiting a vision-language model for single-domain generalization in object detection. Below, we first present our semantic augmentation strategy aiming to facilitate generalization to new domains. We then describe the architecture and training strategy for our object detector. ### Semantic Augmentation In SDG, we have access to images from only a single domain. To enable generalization, we seek to learn object representations that are robust to domain shifts. Here, we do so by introducing such shifts while training the model on the source data. Specifically, we exploit CLIP's joint representation to estimate shifts in the visual domain using textual prompts, as illustrated in Fig. 1. This corresponds to the optimization step shown in the left portion of Fig. 2. Formally, let \(\mathcal{T}\) denote CLIP's text encoder and \(\mathcal{V}\) its image one. For reasons that will become clear later, we further split \(\mathcal{V}\) into a feature extractor \(\mathcal{V}^{a}\) and a projector to the embedding space \(\mathcal{V}^{b}\). The CLIP [36] model is trained to bring image features closer to their textual captions. In essence, this means that, for an image \(\mathcal{I}\) and a corresponding prompt \(p\), it seeks to minimize the distance between \(\mathcal{V}^{b}(\mathcal{V}^{a}(I))\) and \(\mathcal{T}(p)\). 
A useful property of the text embedding space is that algebraic operations can be used to estimate semantically related concepts. Word2Vec [31] demonstrated such learned relationships (e.g., _king-man+woman_ approaches the word representation of _queen_). Such a relationship exists with CLIP embeddings as well [38]. To exploit this for SDG, we define a generic textual prompt \(p^{s}\) related to the source domain, such as An image taken during the day, and a set of prompts \(\mathcal{P}^{t}=\{p^{t}_{j}\}_{1}^{M}\) encompassing variations that can be expected to occur in different target domains, e.g., describing different weather conditions or times of the day. Our objective then is to define augmentations \(\{\mathcal{A}_{j}\}\) of the features extracted from a source image such that the shift incurred by \(\mathcal{A}_{j}\) corresponds to the semantic difference between \(p^{s}\) and \(p^{t}_{j}\). To achieve this, we first compute the embeddings \(q^{s}=\mathcal{T}(p^{s})\) and \(q^{t}_{j}=\mathcal{T}(p^{t}_{j})\) of the textual prompts. We then take multiple random crops from a source image. For each such crop \(\mathcal{I}_{crop}\), we create a target image embedding \[z^{s}_{j}=z+\frac{q^{t}_{j}-q^{s}}{\|q^{t}_{j}-q^{s}\|_{2}}\, \tag{1}\] where \(z=\mathcal{V}(\mathcal{I}_{crop})\). We then search for an augmentation \(\mathcal{A}_{j}\in\mathbb{R}^{H\times W\times C}\) such that \[\bar{z}_{j}=\mathcal{V}^{b}(\mathcal{V}^{a}(\mathcal{I}_{crop})+\mathcal{A}_ {j}) \tag{2}\] is as similar as possible to \(z^{s}_{j}\), which we measure with the cosine similarity. Ultimately, we estimate the augmentations \(\{\mathcal{A}_{j}\}_{1}^{M}\) through an optimization process using only source domain images. Specifically, we minimize the loss function \[\mathcal{L}_{opt}=\sum_{\mathcal{I}_{crop}}\sum_{j}\mathcal{D}(z^{s}_{j},\bar {z}_{j})+\|\bar{z}_{j}-z\|_{1}\, \tag{3}\] where \[\mathcal{D}(a,b)=1-\frac{a\cdot b}{\|a\|_{2}\,\|b\|_{2}} \tag{4}\] is the cosine distance.
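Eq. (1) reduces to a single vector operation on CLIP embeddings; a minimal numpy sketch (the function name is ours, and the vectors stand in for actual CLIP encoder outputs):

```python
import numpy as np

def shift_embedding(z, q_source, q_target):
    """Eq. (1): move an image embedding z one unit step along the
    normalized direction from the source-prompt embedding to the
    target-prompt embedding."""
    direction = q_target - q_source
    return z + direction / np.linalg.norm(direction)
```

The augmentations \(\mathcal{A}_{j}\) are then optimized so that the augmented visual features land near these shifted embeddings.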
The loss also includes an \(l_{1}\) regularizer that prevents the embeddings from deviating too far from their initial values, so as to preserve the image content. As the objective is to estimate a meaningful feature augmentation while preserving the original CLIP pre-training, we keep the image crop size the same as in the original CLIP training. Note that the optimization of the augmentations is done once in an offline stage, and we then use the resulting augmentations to train our detector. Figure 2: **Our Approach: (Left)** We first estimate a set of semantic augmentations \(\mathcal{A}\) using a set of textual domain prompts \(\{\mathcal{P}^{t},p^{s}\}\) and source domain images. The goal of these semantic augmentations is to translate source domain image embeddings to the domain specified by the prompts. We can do this because of CLIP's joint embedding space and its ability to encode semantic relationships via algebraic operations. \(\mathcal{L}_{opt}\) is minimized w.r.t. \(\mathcal{A}\) over random image crops of the same size as CLIP [36]. **(Right)** The optimized semantic augmentations are used to train our modified detector, which minimizes a text-based classification loss \(\mathcal{L}_{clip\text{-}t}\). Here, we train with the full image and add a randomly sampled \(\mathcal{A}_{j}\) after average pooling. This pooling operation allows us to use \(\mathcal{A}\) on extracted feature maps of arbitrary-sized images. We initialize the detector with the pre-trained CLIP [36] \(\mathcal{V}\) and \(\mathcal{T}\) encoders to leverage their general representations. ### Architecture Let us now describe our detector architecture. As shown in the right portion of Fig. 2, it follows a standard Faster-RCNN [40] structure but departs from it in two ways.
First, to exploit the augmentations optimized as discussed in the previous section, we initialize the blocks before and after the ROI-Align layer with the corresponding \(\mathcal{V}^{a}\) and \(\mathcal{V}^{b}\) modules of the ResNet-based trained CLIP model. Second, to further leverage the vision-language model, we incorporate a text-based classifier in our model's head. Note that, in contrast to OVD [16, 39] where a text-based classifier is used to handle novel categories, we employ it to keep the image features close to the pre-trained joint embedding space. Specifically, we define textual prompts that represent the individual categories we seek to detect, and extract corresponding embeddings \(\mathcal{Q}\in\mathbb{R}^{(K+1)\times D_{clip}}\), for \(K\) categories and the background class, using the text encoder \(\mathcal{T}\). For a candidate image region \(r\) proposed by the Region Proposal Network (RPN) [40], we then compute the cosine similarities between the text embeddings \(\mathcal{Q}\) and the features \(\mathcal{F}_{r}\in\mathbb{R}^{D_{clip}}\) obtained by projection to the embedding space using \(\mathcal{V}^{b}\) after ROI-Align [19]. These cosine similarities, \(sim(\mathcal{F}_{r},\mathcal{Q})\in\mathbb{R}^{K+1}\), act as logits to the softmax-based cross-entropy loss \[\mathcal{L}_{clip\text{-}t}=\sum_{r}\mathcal{L}_{CE}\left(\frac{e^{sim( \mathcal{F}_{r},\mathcal{Q}_{k})}}{\sum_{k'=0}^{K}e^{sim(\mathcal{F}_{r}, \mathcal{Q}_{k'})}}\right)\,. \tag{5}\] Similarly to [36], we formulate prompts of the form a photo of a {category name} to obtain our text embeddings. ### Training with Augmentation Following the standard detector training [40], we use the full image as our input. This subsequently increases the output feature map size of \(\mathcal{V}^{a}\), hence we use an average pooling operation to obtain channel-wise augmentations which can work for arbitrary-sized feature maps.
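The channel-wise use of the augmentations described above can be sketched in a few lines (an illustrative numpy version with hypothetical shapes, not the actual training code): average-pooling an augmentation's spatial dimensions yields a per-channel vector that broadcasts over feature maps of any size.

```python
import numpy as np

def apply_channelwise_aug(feature_map, aug):
    """Collapse an optimized augmentation of shape (h, w, C) into a
    per-channel shift of shape (C,) via average pooling, then
    broadcast-add it to an (H, W, C) feature map of arbitrary size."""
    channel_shift = aug.mean(axis=(0, 1))  # (C,)
    return feature_map + channel_shift      # broadcasts over H and W
```
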
The training of our modified object detector with the semantic augmentations is as follows: first, we randomly sample an augmentation \(\mathcal{A}_{j}\) from the full set and collapse its spatial dimensions using average pooling. We then add the resulting vector to every element in the feature map extracted by \(\mathcal{V}^{a}\). In practice, we apply augmentations to a batch with a probability \(\theta\). The detector is then trained with the loss \[\mathcal{L}_{det}=\mathcal{L}_{rpn}+\mathcal{L}_{reg}+\mathcal{L}_{clip\text{-} t}\, \tag{6}\] which combines the \(\mathcal{L}_{clip\text{-}t}\) loss of Eq. (5) with the standard RPN and regression losses [40]. During inference, we use the detector without any augmentation of the feature maps. ## 4 Experiments ### Experimental setup **Datasets.** To evaluate our model, we use the same datasets as [49]. They include five sets, each containing images with different weather conditions: daytime sunny, night clear, dusk rainy, night rainy, and daytime foggy. The images have been selected from three primary datasets, Berkeley DeepDrive 100K (BDD-100K) [52], Cityscapes [7] and Adverse-Weather [17]. Additionally, rainy images are rendered by [50], and some of the foggy images are synthetically generated from [42]. Our model is trained on the daytime sunny scenes, consisting of 19,395 training images; the remaining 8,313 daytime sunny images are used for validation and model selection. The four other weather conditions are only used during testing. They consist of 26,158 images of clear night scenes, 3,501 images of rainy scenes at dusk, 2,494 images of rainy scenes at night, and 3,775 images of foggy scenes during daytime. All the datasets contain bounding box annotations for the objects _bus, bike, car, motorbike, person, rider_ and _truck_. Fig. 3 shows examples from this dataset. **Metric.** In all our experiments, we use the Mean Average Precision (mAP) as our metric.
Specifically, following [49], we report the mAP@0.5, which considers a prediction as a true positive if it matches the ground-truth label and has an intersection over union (IOU) score of more than 0.5 with the ground-truth bounding box. Figure 3: **Diverse Weather Dataset [49]: Day-Clear acts as our source domain while the other weather conditions are our target domains. In these domains, the objects' appearance drastically changes from the Day-Clear scenario. As we do not utilize any target domain images, learning generalizable features on source images is crucial for the SDG task.** ### Implementation Details We use the Detectron2 [51] implementation of FasterRCNN with a ResNet101 [20] backbone. We initialize the detector with CLIP [36] pre-trained weights, where ResNet convolution blocks 1-3 act as \(\mathcal{V}^{a}\), and block-4 along with the CLIP attention pooling act as \(\mathcal{V}^{b}\). This follows from the standard FasterRCNN implementation with ResNet backbone. **Optimization Step.** As the benchmark dataset evaluates the method on different weather conditions, we curated a list of domain prompts \(\mathcal{P}^{t}\) matching the concept _weather_. To this end, we take all the _hyponyms_ of the term _weather_ from WordNet [44] and generate their text embeddings using the CLIP text encoder \(\mathcal{T}\). We prune away the words whose cosine similarity with the term _weather_ is lower than \(0.5\). Additionally, we filter out the words that are not in the top \(10\)k most frequent words in the GloVe wordlist [34]. After combining the synonyms, we get a list of six words: _snow, fog, cloudy, rain, stormy, sunshine_. We remove _sunshine_ as it corresponds to our source domain concept. Furthermore, we consider three times of the day: _day, night, evening_. This lets us generate \(M=15\) prompts using the template an image taken on a {weather} {time of the day}. We use an image taken during the day as the source domain prompt \(p^{s}\).
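The prompt list described above can be reconstructed directly: five weather words times three times of day give the \(M=15\) target prompts (the exact strings below are our reading of the stated template):

```python
weathers = ["snow", "fog", "cloudy", "rain", "stormy"]
times_of_day = ["day", "night", "evening"]

# Target-domain prompts from "an image taken on a {weather} {time of the day}".
target_prompts = [
    f"an image taken on a {w} {t}" for w in weathers for t in times_of_day
]

# Source-domain prompt.
source_prompt = "an image taken during the day"
```
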
We provide more details in our supplementary material. To optimize the augmentations with these prompts, we generated random crops from the source images and resized them to \(224\times 224\) pixels. The resulting output feature maps of \(\mathcal{V}^{a}\) and \(\mathcal{A}_{j}\) are in \(\mathbb{R}^{14\times 14\times 1024}\). We initialize \(\mathcal{A}_{j}\ \forall\ 1\leq j\leq M\) with zeros and train them using the Adam [23] optimizer while keeping the CLIP encoders, \(\mathcal{V}\) and \(\mathcal{T}\), frozen. Optimization was done for \(1000\) iterations with a learning rate of \(0.01\). **Detector Training with Augmentation.** When training the detector, the input image is resized to \(600\times 1067\) and \(\mathcal{V}\) and \(\mathcal{T}\) are initialized with CLIP pre-trained weights. While \(\mathcal{T}\) is kept frozen during the training, the ResNet blocks 3-4 and attention pooling of \(\mathcal{V}\), along with the other FasterRCNN learnable blocks, are trained with Stochastic Gradient Descent (SGD) for 100k iterations. We train with a learning rate of \(1e^{-3}\), scaled down by a factor of \(0.1\) after 40k iterations. We use a batch size of \(4\) and apply \(\mathcal{A}_{j}\) to the features with probability \(\theta=0.5\). We also use random horizontal flipping augmentation as in Single-DGOD [49]. \(D_{clip}\) is set to 512 as in [36] and the background class is initialized with zeros in \(\mathcal{Q}\). All of our training was done on a single NVIDIA A100 GPU. Our code will be made public upon acceptance. Figure 4: **Qualitative Results. We visualize the predictions of the detectors trained only with day-clear images. (Top) FasterRCNN [40] predictions. (Bottom) The predictions with our approach. Night-Clear and Night-Rainy contain scenes that are taken under low light conditions. Due to this, the appearance of the objects is obscured and deviates from the daytime case. FasterRCNN fails to detect most of the objects. As shown in the Night-Clear scene, it misclassifies a car as a bus. By contrast, we can still detect car under such a big shift. For Dusk-Rainy scenes, the rain pattern on the windscreen and the wet ground cause an appearance shift. As shown, FasterRCNN fails to detect several cars and misclassifies person on the bottom-left.** Figure 5: **Qualitative Results. In the foggy scenes, the objects further away w.r.t. the camera are more obscured than the near ones. Due to this, FasterRCNN (Top) struggles to detect them. car and person missed by FasterRCNN are successfully recovered by our approach (Bottom).** ### Comparison with the State of the Art We compare our method trained with semantic augmentations against the state-of-the-art Single-DGOD [49]. Similarly to them, we also show comparisons with the feature normalization methods SW [33], IBN-Net [32], IterNorm [21], and ISW [6]. These methods improve network generalization by using better feature normalization. We additionally report the performance of FasterRCNN (FR) initialized with ImageNet pre-trained weights. For the SDG task, we evaluate the generalization performance on unseen target domains, hence we compare the mAP scores on the out-of-domain datasets: day-foggy, night-rainy, dusk-rainy, and night-clear. Our approach of combining CLIP pre-training and semantic augmentation outperforms the baselines on all of the target domains. Tab. 1 shows a consistent improvement in all domains, with close to \(15\)% improvement on day-foggy and dusk-rainy compared to Single-DGOD. In the challenging night conditions, we improve by \(12.6\)% on night-rainy while being comparable with Single-DGOD on night-clear. On the source domain, both our method and Single-DGOD are better than the FR baseline. However, while Single-DGOD gains this improvement at the cost of losing out on domain generalization, we improve on both the source and target domains.
The failure of the feature normalization baselines suggests a large domain gap between the source and target domains. Fig. 4 and Fig. 5 provide qualitative results on the different weather datasets. In the remainder of this section, we discuss the per-class results on the individual target domains. **Daytime Clear to Night Clear.** The Night Clear dataset shows a challenging night driving scene under severe low-light conditions. In Tab. 4, we show that while being comparable to Single-DGOD, we bring in a larger improvement in the _car_ and _person_ categories. Night scenes are particularly challenging as the low light condition leads to more confusion among visually closer categories such as _bus_ and _truck_. **Daytime Clear to Night Rainy.** This is the most challenging scenario where dark night conditions are exacerbated by patterns occurring due to rain. Tab. 5 shows consistent improvement by our approach for most of the classes. The _car_ class sees the biggest improvement with an increase in AP of more than 22% compared to Single-DGOD. The lower performance of the class _rider_ can be attributed to an increase in the confusion between the visually similar _person_ and _rider_ classes under adverse conditions. ### Ablation Study To understand how each element of the proposed method contributes to the overall performance, we conduct an ablation study. We test four individual components of our model. Specifically, we remove semantic augmentation, replace CLIP attention pooling in \(\mathcal{V}^{b}\) with average pooling, replace \(\mathcal{L}_{clip\text{-}t}\) with the FasterRCNN classification loss, and change the weight initialization from the CLIP model to an ImageNet classification model. Removing those four components turns our model back into the standard FasterRCNN. The ablation study results are provided in Tab. 6 and discussed below.
**CLIP initialization.** When the FasterRCNN backbone \(\mathcal{V}\) is initialized with CLIP pre-trained weights, the model performance consistently increases both in the in-domain and out-of-domain scenarios, as shown in the second row of Tab. 6. This setting itself already outperforms Single-DGOD (penultimate row of Tab. 1). This goes to show that, for the generalization task, model weight initialization plays a crucial role. We further improve this performance with semantic augmentations. **Attention pooling and \(\mathcal{L}_{clip\text{-}t}\).** Next, we test the impact of the text-embedding-based loss \(\mathcal{L}_{clip\text{-}t}\) for classification. As visible in the third row of Tab. 6, when combined with CLIP initialization, it improves the generalization performance for the rainy scenarios, but degrades it for the other ones. Replacing average pooling in \(\mathcal{V}^{b}\) with CLIP attention pooling helps to mitigate the detrimental effect of \(\mathcal{L}_{clip\text{-}t}\) and exhibits consistent improvement on all datasets. **Semantic augmentation.** Finally, adding semantic augmentation gives us the best results, as shown in the last row of Tab. 6. Exposing the visual encoder \(\mathcal{V}\) to targeted semantic augmentations helps the overall model to better generalize when exposed to new domains sharing similarity with the augmentations. ### Additional Analyses **Study of semantic augmentation.** Our proposed method involves translating feature maps by semantic augmentations learned using plausible domain prompts. To further study the utility of our approach, we replace the augmentation strategy in our training pipeline with (a) **no-aug**: no augmentation; (b) **random**: \(\mathcal{A}\) is initialized with a normal distribution; (c) **clip-random**: we define \(\mathcal{P}^{t}\) with concepts that are not specific to _weather_. We generate prompts with the template an image of \(\{\text{word}\}\), where the words are _desert_, _ocean_, _forest_, and _mountain_. Tab.
7 illustrates the importance of the semantics in our augmentation strategy. The **random** augmentation performs worse than the **no-aug** strategy. **clip-random** is comparable to **no-aug** and doesn't show any consistent trend but is mostly better than **random**. Our semantic augmentation strategy provides a consistent improvement over **no-aug** because the translations are performed with prompts from the relevant _weather_ concept. \begin{table} \begin{tabular}{r|c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{AP} & \multicolumn{1}{c}{mAP} \\ \cline{2-9} \multicolumn{1}{c}{Method} & Bus & Bike & Car & Motor & Person & Rider & Truck & All \\ \hline FR [40] & 34.7 & 32.0 & 56.6 & 13.6 & 37.4 & 27.6 & 38.6 & 34.4 \\ S-DGOD [49] & **40.6** & **35.1** & 50.7 & **19.7** & 34.7 & **32.1** & **43.4** & 36.6 \\ \hline Ours & 37.7 & 34.3 & **58.0** & 19.2 & **37.6** & 28.5 & 42.9 & **36.9** \\ \hline \hline \end{tabular} \end{table} Table 4: **Per-class results on Daytime Clear to Night Clear.** While being comparable to S-DGOD on most of the categories, we improve on _car_ and _person_. \begin{table} \begin{tabular}{r|c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{AP} & \multicolumn{1}{c}{mAP} \\ \cline{2-9} \multicolumn{1}{c}{Method} & Bus & Bike & Car & Motor & Person & Rider & Truck & All \\ \hline FR [40] & 16.8 & 6.9 & 26.3 & 0.6 & 11.6 & 9.4 & 15.4 & 12.4 \\ S-DGOD [49] & 24.4 & 11.6 & 29.5 & **9.8** & 10.5 & **11.4** & 19.2 & 16.6 \\ \hline Ours & **28.6** & **12.1** & **36.1** & 9.2 & **12.3** & 9.6 & **22.9** & **18.7** \\ \hline \hline \end{tabular} \end{table} Table 5: **Per-class results on Daytime Clear to Night Rainy.** This dataset presents the most challenging scenario, where the low light and rainy conditions obscure the objects. We still perform better than the baseline on most of the categories. ## 5 Limitations Our method augments visual features using textual prompts. 
To generate these prompts, it is assumed that some information about the domain gap is known. In our experiments, we assumed that the domain gap was due to changes in weather and daytime conditions. In practice, we only used the words _weather_ and _time of the day_ to derive all the prompts used in our augmentation; nonetheless, some extra information was used. In most applications, however, the domain gap can be known in advance, and providing a few keywords characterizing it shouldn't be an issue. In the rare cases where no information can be known, our approach could still be applied by using multiple broad concept keywords such as weather, ambiance, or location. ## 6 Conclusion We have proposed an approach to improving the generalization of object detectors on _unseen_ target domains. Our approach fundamentally departs from existing methods by leveraging a pre-trained vision-language model, CLIP, to help the detector generalize. Specifically, we have exploited textual prompts to develop a semantic augmentation strategy that alters image embeddings so that they reflect potential target domains, and to design a text-based image classifier. We have shown that our approach outperforms the state of the art on four adverse-weather target datasets. In future work, we plan to extend our approach to learning the prompts to further improve generalization. Acknowledgment: This work was funded in part by the Swiss National Science Foundation and the Swiss Innovation Agency (Innosuisse) via the BRIDGE Discovery grant 40B2-0 194729. Table 6: **Ablation study.** mAP on the source (Day Clear) and target (Night Clear, Dusk Rainy, Night Rainy, Day Foggy) domains for combinations of CLIP initialization, \(\mathcal{L}_{clip\text{-}t}\), attention pooling, and semantic augmentation.
2306.01658
An Adaptive Method for Weak Supervision with Drifting Data
We introduce an adaptive method with formal quality guarantees for weak supervision in a non-stationary setting. Our goal is to infer the unknown labels of a sequence of data by using weak supervision sources that provide independent noisy signals of the correct classification for each data point. This setting includes crowdsourcing and programmatic weak supervision. We focus on the non-stationary case, where the accuracy of the weak supervision sources can drift over time, e.g., because of changes in the underlying data distribution. Due to the drift, older data could provide misleading information to infer the label of the current data point. Previous work relied on a priori assumptions on the magnitude of the drift to decide how much data to use from the past. Comparatively, our algorithm does not require any assumptions on the drift, and it adapts based on the input. In particular, at each step, our algorithm guarantees an estimation of the current accuracies of the weak supervision sources over a window of past observations that minimizes a trade-off between the error due to the variance of the estimation and the error due to the drift. Experiments on synthetic and real-world labelers show that our approach indeed adapts to the drift. Unlike fixed-window-size strategies, it dynamically chooses a window size that allows it to consistently maintain good performance.
Alessio Mazzetto, Reza Esfandiarpoor, Eli Upfal, Stephen H. Bach
2023-06-02T16:27:34Z
http://arxiv.org/abs/2306.01658v1
# An Adaptive Method for Weak Supervision with Drifting Data ###### Abstract We introduce an adaptive method with formal quality guarantees for weak supervision in a non-stationary setting. Our goal is to infer the unknown labels of a sequence of data by using weak supervision sources that provide independent noisy signals of the correct classification for each data point. This setting includes crowdsourcing and programmatic weak supervision. We focus on the non-stationary case, where the accuracy of the weak supervision sources can drift over time, e.g., because of changes in the underlying data distribution. Due to the drift, older data could provide misleading information to infer the label of the current data point. Previous work relied on a priori assumptions on the magnitude of the drift to decide how much data to use from the past. Comparatively, our algorithm does not require any assumptions on the drift, and it adapts based on the input. In particular, at each step, our algorithm guarantees an estimation of the current accuracies of the weak supervision sources over a window of past observations that minimizes a trade-off between the error due to the variance of the estimation and the error due to the drift. Experiments on synthetic and real-world labelers show that our approach indeed adapts to the drift. Unlike fixed-window-size strategies, it dynamically chooses a window size that allows it to consistently maintain good performance. ## 1 Introduction In order to efficiently create training data for machine learning, programmatic weak supervision [39; 38; 53] estimates the accuracy of multiple noisy sources of labels without access to ground truth. Given a set of _labeling functions_ that vote on the true label for each unlabeled example, the goal is to infer the latent ground truth. Once inferred, these labels can be used as training data.
In this paper, we study the _non-stationary_ setting, in which the accuracy of each labeling function can drift over time because of changes in the underlying data. For example, in an image classification task, latent subclasses that make up each class might shift over time. If the task is to classify animals into categories like "mammal" and "bird," a weak labeler that looks for attributes like wings might change in accuracy if animals like bats become more or less prevalent. We ask the question, "Under what conditions can we detect changes in the accuracies of weak labelers over time and bound their error without access to ground truth?" Programmatic weak supervision is important for creating training data sets when resources are limited. It can be used for natural language processing [41; 52; 54], computer vision [47; 18; 22], tabular data [17; 2], and other modalities [42; 43]. It has also enabled machine learning applications in industry [7; 15; 45] and academia [16; 21]. Even when prompting or fine-tuning large pre-trained models, weak supervision can unlock improved quality and enable adaptation to new tasks [44; 3; 51]. The central modeling challenge in programmatic weak supervision is estimating the probabilistic relationships among the votes of the weak labelers and the latent ground truth. It is hard because, without access to ground truth labels, the observed votes can be explained in many different ways. Perhaps the votes tend to agree because they are all accurate labelers. Or perhaps they are all inaccurate. Perhaps there are correlations among the votes caused by relying on similar decision processes. If one assumes that the votes are conditionally independent given the true label and that the examples are _independent and identically distributed_ (i.i.d.), this is equivalent to the Dawid-Skene model [20] that is the basis for many related works in crowdsourcing [40; 29; 37; 27; 55].
Many works on crowdsourcing and weak supervision have relaxed the conditional independence assumption in various ways to account for a wide range of weak labelers [8; 9; 10; 6; 48; 1; 33; 32; 2]. With two exceptions discussed below, all these aforementioned works assume that the examples are i.i.d. This is a restrictive assumption when data is collected over time, and it is natural to observe a change, or _drift_, in the distribution of the examples. In our work, we relax the identically distributed assumption, and assume only that the examples are independent. This introduces a trade-off: if we want to obtain a good estimate at the current time, using more past examples provides more data, which might result in a better estimate if that data is similarly distributed, but might harm the estimate if the window includes a significant distribution drift. Much prior work has addressed the problem of drifting data in the supervised learning setting [23; 31]. These methods generally rely on labeled data that is unavailable in the weakly supervised setting. Another broad line of work has viewed drift detection as an unsupervised problem, looking for non-stationarity in arbitrary distributions [11; 28; 46]. These methods generally assume a prior distribution on the locations in time of drift. That prior can be either defined explicitly in a Bayesian framework or implicitly via a heuristic cost function that penalizes the trade-off between better fitting the data and finding more drift points. In a similar vein, previous works on relaxing the i.i.d. assumption with multiple noisy labelers have placed assumptions on how much their accuracies can drift [13; 22]. In contrast, our goal is to estimate the labelers' accuracies without prior assumptions on the drift, and without explicitly quantifying it from the input.
This is a very challenging problem, as the drift is unknown and we cannot estimate the drift from the data, as we have access to only a single sample from each distribution. **Our Contributions.** We introduce the first _adaptive_ algorithm for programmatic weak supervision in the presence of drift with formal guarantees on the quality of its parameter estimates. The advantage of an adaptive algorithm is that it can react in a rigorously principled way to changes in the accuracies of the weak labelers as they occur (as opposed to having to make an assumption on how much drift will occur). When the underlying process is stationary, it can accumulate as much data as possible in a large window of time in order to best estimate the accuracies of the labelers. When drift does occur, it can react by using only the most recent (and most relevant) data to estimate the accuracies. Our method selects the amount of data to use based on differences in the rates of agreement among the labelers. We derive a principled decision rule for this selection and provide a rigorous analysis that bounds the resulting error of the estimated accuracies of the labelers. Our novel bound separates the statistical error of estimating the parameters from the error caused by possible drift. This analysis enables the algorithm to select a close-to-optimal trade-off to minimize the worst-case error. The conceptual difference between our approach and all previous work is that we do not rely on prior information about the drift, or try to learn the drift from the data (both unrealistic in many applications). Instead, at each time step, our algorithm compares its estimation obtained using different window sizes and uses this information to detect drift and adjust the window size for the decision at that step. We analytically prove that this information is sufficient to allow the algorithm to efficiently adapt to drift in distribution, without explicitly estimating the magnitude of the drift. 
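To build intuition for the variance-versus-drift trade-off described above, consider a toy bound in which each step is assumed to drift by at most a known constant \(\delta\) (the constants below are hypothetical, and this is not the paper's actual bound; the paper's point is precisely that \(\delta\) is unknown and the window must be chosen adaptively):

```python
import math

def best_window(max_window, c=1.0, delta=0.01):
    """Toy trade-off: a window of r past samples incurs statistical
    error ~ c / sqrt(r) and worst-case drift error ~ delta * r when
    each step drifts by at most delta; return the r minimizing the sum."""
    return min(range(1, max_window + 1),
               key=lambda r: c / math.sqrt(r) + delta * r)
```

Smaller drift favors a larger window; an adaptive method must strike this balance without ever observing \(\delta\).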
In our experimental evaluation, we show that on synthetic data and a drifting dataset constructed from Animals with Attributes2 [50], our algorithm adapts to the drift as it occurs, dynamically selecting the amount of data to use in an effective way. Unlike fixed-window-size strategies, we find this approach consistently maintains good performance as drift occurs. As the better window sizes change over time, the algorithm adapts by shifting to the better ones. ## 2 Problem Statement Given a vector \(\mathbf{v}\in\mathbb{R}^{q}\), let \(\|\mathbf{v}\|_{\infty}=\sup_{1\leq i\leq q}|v_{i}|\) be the largest component of \(\mathbf{v}\) in absolute value. Similarly, given a matrix \(\mathbf{C}\in\mathbb{R}^{q\times q}\), we define \(\|\mathbf{C}\|_{\infty}=\sup_{i,j}|C_{ij}|\). Let \(\mathcal{X}=\mathbb{R}^{d}\) be our classification domain. A binary classification task is specified by a function \(y:\mathcal{X}\rightarrow\mathcal{Y}\), where \(\mathcal{Y}=\{-1,1\}\) is the label space. Given \(x\), we would like to infer its label \(y(x)\). We assume access to \(n\) weak labeling functions \(\ell_{1},\ldots,\ell_{n}\), where each \(\ell_{i}:\mathcal{X}\rightarrow\mathcal{Y}\) provides a tentative labeling of the item \(x\). For example, each weak labeling function can be a classifier that was trained for a related task, or a decision rule based on a simple programmatic criterion. The weak labeling functions \(\ell_{1}(x),\ldots,\ell_{n}(x)\) are the only information sources for our classification task. We receive a sequence of examples \(X_{1},X_{2},\ldots\) over time. For any given time \(t\), our goal is to obtain an accurate estimate of the correct label \(y(X_{t})\) of \(X_{t}\) given the weak labeling functions \(\ell_{1},\ldots,\ell_{n}\) and the input sequence up to time \(t\), \(X_{1},\ldots,X_{t}\). We adapt the standard assumptions used in analyzing weak supervision, in particular crowdsourcing, with no drift [20, 38].
We first assume that the input sequence \((X_{t})_{t\in\mathbb{N}}\) is an independent, but not identically distributed stochastic process. Any finite subset of its random variables are mutually independent, and each \(X_{t}\) is sampled from a distribution \(D_{t}\) over \(\mathcal{X}\) that can drift over time. Formally, this is stated with the following assumption. **Assumption 1**.: _For any finite \(t\geq 1\), the input vector \((X_{1},\ldots,X_{t})\) is distributed as \(\prod_{i=1}^{t}D_{i}\)._ The second standard assumption is that the weak labelers have independent errors. **Assumption 2**.: _For any \(t\geq 1\) and \(i\neq j\), for \(X_{t}\sim D_{t}\), we have that the events \(\{\ell_{i}(X_{t})\neq y(X_{t})\}\) and \(\{\ell_{j}(X_{t})\neq y(X_{t})\}\) are independent given \(y(X_{t})\)._ We define the accuracy of the weak labeler \(i\) at time \(t\) as \[p_{i}(t)\doteq\Pr_{X\sim D_{t}}\left(\ell_{i}(X)=y(X)\right) \tag{1}\] The value \(p_{i}(t)\in[0,1]\) represents the probability that the weak labeler \(\ell_{i}\) is correct on a sample \(X_{t}\sim D_{t}\). The accuracy probability \(p_{i}(t)\) is a function of the input distribution \(D_{t}\) and therefore may drift in time. **Example.** Assume that the classification task is to distinguish whether an input image contains a cat or a dog. Assume that there is a weak labeler \(\ell_{\mathrm{tail}}\) that detects whether an animal has a tail or not. This weak labeler provides no signal if we only observe images of cats and dogs that both have tails, however the relevance of this classifier can change over time: if the probability of observing a dog without a tail (e.g., a bulldog) grows over time, this weak labeler can provide a stronger signal towards the right classification. Our goal is to adapt dynamically to the change in accuracy of the weak labelers. **Remark.** For concreteness, we analyze our algorithm with respect to a drift in the input distribution over \(\mathcal{X}\). 
However, our analysis applies to a more general case, where in addition to drift in the input distribution there can be drifts over time in the accuracy of individual labeling functions. Such drift can be the result of a change in the functionality of the labeling functions. For example, a human labeler can get tired and make more mistakes, or a sensor's accuracy can be affected by a change of light or temperature. Formally, instead of a labeling function \(\ell_{i}(X)\) we have a family of labeling functions \(\{\ell_{i,t}(X)\mid t\geq 1\}\). Equation (1) is replaced with \[p_{i}(t)\doteq\Pr_{X\sim D_{t}}\left(\ell_{i,t}(X)=y(X)\right),\] and the algorithm and analysis are the same. ## 3 Related Work To our knowledge only two works have considered relaxing the i.i.d. assumption in any way when learning from multiple noisy sources of labels. They both require assumptions on how much the accuracies of the labelers can change over time. The first [13] assumes that the accuracy of the weak labelers can change at most by a constant at each step. In particular, their assumptions imply that there exists a value \(\Delta>0\), known a priori, that upper bounds the magnitude of the drift at each step, i.e. \(\|\mathbf{p}(t)-\mathbf{p}(t+1)\|_{\infty}\leq\Delta\) for all \(t\geq 1\). The second [22] assumes that the KL divergence between two consecutive distributions is upper bounded by a constant \(\Delta\). These are essentially similar assumptions: an assumed upper bound on the magnitude of the drift allows these methods to determine before execution how much information they should use from the past. These algorithms are impractical, as the value \(\Delta\) is unknown in practice, and they cannot adapt if the rate of drift changes over time. If one assumes too much drift, then the algorithm will use too small an amount of data and have greater statistical error in its estimates of the labelers' accuracies. 
If one assumes too little drift, then the algorithm will use too large an amount of data and have errors from using data from too different a distribution. A priori, there is usually no way to know how to choose. In contrast, in this work, our goal is to dynamically choose the window size as a function of the observed votes without requiring any assumption on the amount of drift. In other words, we want to _adapt_ to drift as it occurs. The challenge of coping with non-stationary drift has been studied in a number of other settings. A sequence of works [12; 30; 35; 25] considered supervised learning settings with distribution drift in the training set, assuming some known upper bound on the drift. An alternative assumption [35; 5] is that the training set is drifting between a small number of distributions, and the algorithm has multiple samples from each distribution to learn the drift error. A minimax error for density estimation with distribution drift was studied in [34], again with some a priori assumption on the drift rate. We also note that the non-stationary setting was extensively studied in reinforcement learning (e.g., [4; 19; 49]). That setting significantly differs from ours, as the goal is to minimize the regret, and the distribution of the samples is also affected by the decisions taken by a policy on the environment. Our work is the first to provide an adaptive algorithm for weak supervision and crowdsourcing in a non-stationary setting and without any prior assumptions on the drift. ## 4 Preliminary Results Our work builds on the following results that study the problem in settings where the accuracy probabilities are known, or there is no drift in the input distribution. Assume first that the accuracy probabilities \(\mathbf{p}(t)=(p_{1}(t),\ldots,p_{n}(t))\) of the weak labelers at any time \(t\geq 1\) are known. 
With Assumption 2, it is known that the optimal aggregation rule for classifying \(X_{t}\) is a weighted majority vote, where the weights are functions of the accuracies of the weak labelers at time \(t\) [36]. In that case the optimal classification of \(x_{t}\) does not use information about the preceding inputs. In particular, consider the family of weighted majority classifiers \(f_{\mathbf{w}}:\mathcal{X}\mapsto\mathcal{Y}\) with weights \(\mathbf{w}=(w_{1},\ldots,w_{n})^{T}\), i.e. \[f_{\mathbf{w}}(x)=\operatorname{sign}\left(\sum_{i=1}^{n}w_{i}\ell_{i}(x)\right).\] It is shown in [36] that the optimal aggregation of \(\ell_{1}(X_{t}),\ldots,\ell_{n}(X_{t})\) is given by \(f_{\mathbf{w}^{*}(t)}\) where \[\mathbf{w}^{*}(t)=\left(\ln\left(\frac{p_{1}(t)}{1-p_{1}(t)}\right),\ldots,\ln \left(\frac{p_{n}(t)}{1-p_{n}(t)}\right)\right) \tag{2}\] In weak supervision and crowdsourcing applications, the accuracy probabilities of the weak labelers are unknown. Several methods for estimating \(\mathbf{p}(t)\) using previous samples have been proposed in the literature in a setting without distribution drift [20; 24; 55]. It is known that under mild assumptions, if we have access to enough identically distributed samples, it is possible to accurately estimate the accuracies of the weak labelers, and different minimax methods have been proposed in this setting [55; 14]. Our contribution is an adaptive method that allows for this estimation in a non-stationary setting without any prior assumption on the drift. Our estimation method is based on the technique developed by [14] that uses the weak labelers' _correlation matrix_ to estimate the expertise of each weak labeler in a no-drift setting. 
In particular, for each \(t\geq 1\), we let the correlation matrix \(\mathbf{C}(t)\in[-1,1]^{n\times n}\) be defined as \[C_{ij}(t)=\operatorname*{\mathbb{E}}_{X\sim D_{t}}\left[\ell_{i}(X)\ell_{j}(X )\right]\qquad\forall(i,j)\in\{1,\ldots,n\}^{2}\enspace.\] When there is no distribution drift and under mild assumptions on the bias of the estimates of the weak supervision sources, it is possible to show that a good estimation of the correlation matrix \(\mathbf{C}(t)\) implies a good estimation of the accuracies \(\mathbf{p}(t)\). The assumption on the bias is formalized as follows. **Assumption 3**.: _There exists \(\tau>0\) such that \(p_{i}(t)\geq\frac{1}{2}+\tau\) for all \(t\geq 1\) and \(i\in\{1,\ldots,n\}\)._ With this assumption, the following result holds for the non-drift setting: **Proposition 1** (Lemma 9 of [14]).: _Let \(\mathbf{C}\in[-1,1]^{n\times n}\) be a matrix such that \(\|\mathbf{C}-\mathbf{C}(t)\|_{\infty}\leq\epsilon\), and assume \(n\geq 3\). Let Assumptions 1, 2 and 3 hold. Then, there exists an estimation procedure that, given \(\mathbf{C}\) as input, outputs \(\mathbf{\hat{p}}=(\hat{p}_{1},\ldots,\hat{p}_{n})\) such that \(\|\mathbf{p}(t)-\mathbf{\hat{p}}\|_{\infty}\leq(5/2)\epsilon/\tau^{2}\)._ Note that the algorithm for the non-drift case presented in [14] and the algorithm presented here for the drift case are oblivious to the value of \(\tau\). ## 5 Algorithm As explained in the previous section, our method revolves around the estimation of the correlation matrix \(\mathbf{C}(t)\) at the current time \(t\) in order to use Proposition 1 and obtain an estimate of \(\mathbf{p}(t)\). In particular, we generalize the method in [14] to handle drift. All proofs are deferred to the supplementary material. 
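The triplet-based estimation behind Proposition 1 and the optimal weighting of Equation (2) can be sketched together in a few lines of Python. This is our own illustration, not the authors' released code: the function names are ours, and the closed-form check assumes class-symmetric accuracies, under which Assumption 2 gives \(C_{ij}=(2p_{i}-1)(2p_{j}-1)\) for \(i\neq j\).

```python
import numpy as np

def triplet_accuracies(C):
    """Estimate labeler accuracies from a correlation matrix C (n >= 3).

    For each labeler h, pick the pair (i, j) with i, j != h maximizing |C_ij|
    and set alpha_h = sqrt(|C_ih * C_jh / C_ij|), p_h = (1 + alpha_h) / 2.
    """
    n = C.shape[0]
    p_hat = np.empty(n)
    for h in range(n):
        best, pair = -1.0, None
        for i in range(n):
            for j in range(n):
                if i != j and i != h and j != h and abs(C[i, j]) > best:
                    best, pair = abs(C[i, j]), (i, j)
        i, j = pair
        if C[i, j] == 0:
            p_hat[h] = 0.5
        else:
            alpha = np.sqrt(abs(C[i, h] * C[j, h] / C[i, j]))
            p_hat[h] = (1.0 + alpha) / 2.0
    return p_hat

def weighted_vote(votes, p_hat):
    """Aggregate one round of votes in {-1, +1} with the log-odds weights of Eq. (2)."""
    w = np.log(p_hat / (1.0 - p_hat))
    return int(np.sign(w @ votes))

# Under conditional independence with class-symmetric accuracies,
# C_ij = a_i * a_j off the diagonal with a_i = 2 p_i - 1, so the
# triplet rule recovers p exactly.
p_true = np.array([0.9, 0.8, 0.7])
a = 2 * p_true - 1
C = np.outer(a, a)
np.fill_diagonal(C, 1.0)
print(triplet_accuracies(C))  # approximately [0.9, 0.8, 0.7]
```

With a noisy empirical correlation matrix in place of the exact \(\mathbf{C}(t)\), the same rule applies, and Proposition 1 bounds how the estimation error of the matrix propagates to the accuracies.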
We define the following quantity: \[\hat{\mathbf{C}}^{[r]}(t)\doteq\frac{1}{r}\sum_{k=t-r+1}^{t}(\ell_{1}(X_{k}), \ldots,\ell_{n}(X_{k}))^{T}(\ell_{1}(X_{k}),\ldots,\ell_{n}(X_{k}))\enspace.\] The matrix \(\hat{\mathbf{C}}^{[r]}(t)\in[-1,1]^{n\times n}\) is the empirical correlation matrix computed using the latest \(r\) samples \(X_{t-r+1},\ldots,X_{t}\). This matrix provides the following guarantee on the estimation of \(\mathbf{C}(t)\). **Lemma 1**.: _Let \(t\geq 1\), let \(\delta\in(0,1)\), and let Assumption 2 hold. The following inequality holds with probability at least \(1-\delta\):_ \[\|\mathbf{C}(t)-\hat{\mathbf{C}}^{[r]}(t)\|_{\infty}\leq\sqrt{\frac{2\ln(n(n-1)/ \delta)}{r}}+12\sum_{k=t-r+1}^{t-1}\|\mathbf{p}(k)-\mathbf{p}(k+1)\|_{\infty} \tag{3}\] Lemma 1 shows that the error of estimating \(\mathbf{C}(t)\) by using the previous \(r\) samples can be upper bounded by the sum of two error terms: a _statistical error_ and a _drift error_. The statistical error is related to the variance of the estimator \(\hat{\mathbf{C}}^{[r]}(t)\), and it decays with rate \(O(1/\sqrt{r})\). The drift error quantifies the error introduced by the distribution shift; it cannot be observed directly, and it is measured as the sum of the maximum variation of the accuracy of the weak labelers at each step. The drift error is non-decreasing with respect to \(r\). There is a trade-off: we want to choose a value of \(r\) that minimizes the sum of the statistical error and the drift error. Our main contribution is an algorithm that _without_ any assumption on the drift can provide a close-to-optimal solution of the above trade-off (3). This is a challenging problem, as it is not possible to estimate the drift error, since we only have a single sample from each distribution. Specifically, our algorithm is parameterized by a sequence \(\mathcal{R}=\{r_{1},\ldots,r_{m}\}\) such that \(r_{1}\leq\ldots\leq r_{m}\). 
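The windowed estimator \(\hat{\mathbf{C}}^{[r]}(t)\) is just a running average of vote outer products; a minimal sketch (the array layout and names are our own):

```python
import numpy as np

def empirical_corr(votes, r):
    """Empirical correlation matrix over the last r vote vectors.

    `votes` has shape (t, n): row k holds (l_1(X_k), ..., l_n(X_k)) with
    entries in {-1, +1}. Returns (1/r) times the sum of outer products
    over the window X_{t-r+1}, ..., X_t.
    """
    window = votes[-r:].astype(float)
    return window.T @ window / r

rng = np.random.default_rng(0)
votes = rng.choice([-1, 1], size=(1000, 3))  # t = 1000 rounds, n = 3 labelers
C_hat = empirical_corr(votes, 256)
# Since each l_i(X)^2 = 1, the diagonal of C_hat is exactly 1; the
# off-diagonal entries carry the statistical error of order 1/sqrt(r).
```

Computing this estimator for every \(r\in\mathcal{R}\) reuses the same vote history, so comparing different window sizes costs little beyond storing the recent votes.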
Our algorithm guarantees an estimation error of the matrix \(\mathbf{C}(t)\) that is essentially, up to a constant, as tight as the value of \(r\in[r_{1},r_{m}]\) that minimizes the right-hand side of (3). This yields a guarantee on the estimation of \(\mathbf{p}(t)\) by using Proposition 1. The next theorem formalizes this result. **Theorem 1**.: _Let Assumptions 1, 2 and 3 hold. Let \(\delta\in(0,1)\) and \(\beta>0\). Let \(\mathcal{R}=\{r_{1},\ldots,r_{m}\}\). Assume \(n\geq 3\). If we run Algorithm 1 at time \(t\geq r_{1}\), then with probability at least \(1-\delta\) it provides an estimate \(\hat{\mathbf{p}}=(\hat{p}_{1},\ldots,\hat{p}_{n})\) such that_ \[\|\mathbf{p}(t)-\hat{\mathbf{p}}\|_{\infty}\leq\frac{5\Phi_{\mathcal{R},\beta}}{2\tau^ {2}}\min_{r\in[r_{1},\min(t,r_{m})]}\left(\frac{A_{\delta,n,m}}{\sqrt{r}}+12 \sum_{k=t-r+1}^{t-1}\|\mathbf{p}(k)-\mathbf{p}(k+1)\|_{\infty}\right)\] _where \(A_{\delta,n,m}\doteq\sqrt{2\ln[(2m-1)\cdot n(n-1)/\delta]}\), and \(\Phi_{\mathcal{R},\beta}=1+\max\left\{\frac{2\beta+2}{\gamma_{m}(1-\gamma_{M })},\frac{2\beta+2}{\beta(1-\gamma_{M})}\right\}\), with \(\gamma_{M}=\max_{k}\sqrt{r_{k}/r_{k+1}}\) and \(\gamma_{m}=\min_{k}\sqrt{r_{k}/r_{k+1}}\)._ The pseudocode of our method is reported in Algorithm 1. The goal of the algorithm is to increase the window size ending at time \(t\) as long as it does not include significant drift in distribution. As a reference for making this decision, we observe that if the samples are identically distributed, the following holds for any pair \(i,j\) (see Proposition 5 in the Appendix): \[D_{1}=\ldots=D_{t}\Longrightarrow\mathbb{E}\left|\hat{C}_{ij}^{[r_{k+1}]}- \hat{C}_{ij}^{[r_{k}]}\right|\leq\sqrt{\frac{1}{r_{k}}-\frac{1}{r_{k+1}}} \tag{4}\] The strategy of our algorithm is the following. Starting with \(k=1\), we iteratively compare the empirical correlation matrices computed with \(r_{k}\) and \(r_{k+1}\) samples, respectively. 
If there is minimal drift, the empirical quantity \(\left|\hat{C}_{i,j}^{[r_{k+1}]}-\hat{C}_{i,j}^{[r_{k}]}\right|\) should be smaller than or comparable to the right-hand side of (4) for any entry \(i,j\). If that is the case, we increase the value of \(k\). If this empirical quantity is larger, then a significant drift must have occurred. In this case, we can stop and show that using \(r_{k}\) samples is provably close to optimal. In the algorithm, this strategy is implemented in lines 2-10. The _threshold_ used as a terminating condition for the iteration of the algorithm is the right-hand side of line 4. Lines 11-19 implement the method that maps a correlation matrix to the accuracies of the weak supervision sources and attains the guarantees of Proposition 1 [14]. Our algorithm has the following parameters: * The value \(\delta\in(0,1)\) represents the failure probability of the algorithm. * The sequence \(\mathcal{R}=\{r_{1},\ldots,r_{m}\}\) represents the possible window sizes that the algorithm considers. In order to obtain better guarantees in Theorem 1, we look for a sequence \(\mathcal{R}\) such that: \((i)\) the minimum ratio between consecutive elements \(\gamma_{m}\) is large, as this avoids comparing window sizes that are very similar with one another and for which it is very hard to detect if drift occurred; \((ii)\) the maximum ratio between consecutive elements \(\gamma_{M}\) is small, as this prevents a situation in which \(\mathcal{R}\) is sparse, and there is no value in \(\mathcal{R}\) that is close to the optimal window size. With our analysis, the best guarantees of the algorithm are achieved by using a sequence of powers of \(1/(\sqrt{2}-1)^{2}\) as \(\mathcal{R}\). * The value of \(\beta\) affects the threshold used in our algorithm. Intuitively, the value of \(\beta\) is proportional to how much drift the algorithm must observe before stopping, and it affects the sensitivity of our algorithm to detect drift. 
The optimal value of \(\beta\) that minimizes the upper bound of our algorithm is \(\beta=\sqrt{2}-1\). ``` 1:Input:\(X_{1},\ldots,X_{t},\ell_{1},\ldots,\ell_{n},\mathcal{R}=\{r_{1},\ldots,r_{m}\},\beta,\delta\). 2:\(k\gets 1\) 3:while\((k\leq m-1)\) and \((r_{k+1}\leq t)\)do 4:if\(\|\hat{\mathbf{C}}^{[r_{k+1}]}(t)-\hat{\mathbf{C}}^{[r_{k}]}(t)\|_{\infty}\leq A_{ \delta,n,m}\left[\frac{2\beta}{\sqrt{r_{k}}}+\sqrt{\frac{1-r_{k}/r_{k+1}}{r_{ k}}}\right]\)then 5:\(k\gets k+1\) 6:else 7:break 8:endif 9:endwhile 10:\(\hat{\mathbf{C}}\leftarrow\hat{\mathbf{C}}^{[r_{k}]}(t)\) 11:for all\(h\in\{1,\ldots,n\}\)do 12:\((i,j)\leftarrow\operatorname*{argmax}_{i\neq j,j\neq h,i\neq h}\left|\hat{C} _{ij}\right|\) 13:if\(\hat{C}_{ij}=0\)then 14:\(\hat{p}_{h}\gets 1/2\) 15:else 16:\(\hat{\alpha}_{h}\leftarrow\sqrt{\left|\frac{\hat{C}_{ih}\hat{C}_{jh}}{\hat{C} _{ij}}\right|}\) 17:\(\hat{p}_{h}\leftarrow(1+\hat{\alpha}_{h})/2\) 18:endif 19:endfor 20:return\(\hat{p}=(\hat{p}_{1},\ldots,\hat{p}_{n})\) ``` **Algorithm 1** Non-Stationary Accuracy Estimation ## 6 Empirical Evaluation We demonstrate the functionality and the advantage of our algorithm over fixed-window-size strategies in two experimental settings. More experiments are reported in the supplemental material. **Setup.** At each time step, we receive an unlabeled example which must be labeled based on the available weak labelers. We use Algorithm 1 to estimate the accuracies of the weak labelers. We then make a prediction for the current time step's example by weighting the vote of each labeler proportionally to its estimated accuracy using the weighting \(\mathbf{w}^{*}\) described in Equation (2). For all experiments, we run our algorithm using the first \(20\) powers of two as \(\mathcal{R}\), and \(\beta=\delta=0.1\). As baselines, we consider majority vote and fixed-window-size strategies of size in \(\mathcal{R}\). 
These baseline algorithms are the same as Algorithm 1, except that we use a window size specified a priori to estimate \(\hat{\mathbf{C}}\) (line 10). Since the triplet method for estimating accuracies (lines 11-19) is not constrained to return a probability between 0 and 1, we clip the estimated accuracies to the interval \([0.1,0.9]\). All code for reproducing the results is included in the supplementary material. ### Synthetic Data We first show how our algorithm adapts to changing input distributions with a toy experiment on synthetic data that satisfies all of our assumptions. The algorithm receives input from three weak labelers \((n=3)\), and the input stream has \(4\cdot T\) data points with \(T=5000\). The data is partitioned into three contiguous blocks of size \(T\), \(2T\) and \(T\). The accuracies of the weak labelers do not change within the same block, but do change between blocks. In particular, for each block, two weak labelers have high accuracy equal to \(0.9\), and the other one has low accuracy equal to \(0.6\). The weak labeler with low accuracy is different in each block. We remark that our algorithm is oblivious to this partitioning of the data. In Figure 1, we plot the window size used by the adaptive algorithm and its estimates of the accuracies of each weak labeler at each time \(t\), \(1\leq t\leq 4T\). The reported results are an average over \(10\) independent generations of this synthetic data. The main observation is that our algorithm correctly identifies a change in distribution, and reduces the window size whenever it transitions to the next block. This allows for a very good estimation of the weak labelers' accuracies, as the algorithm uses data mostly from the current block. Clearly, there is a delay before our algorithm can correctly identify the distribution change, since it needs to collect enough data from the new block to assess that a significant change happened. 
As a result, the estimation of the weak labelers' accuracy is worse for the data right after a block change. The variation in the accuracy estimates in the middle of a block is due to the window size selection strategy of the algorithm: whenever the algorithm increases the window size, the larger window includes, for the following few steps, a small number of samples from the previous block, resulting in a small additional error in the estimation. Figure 1: We report the window size chosen by the algorithm (left) and the estimated accuracies of each weak labeler over time (right). The vertical lines represent when a change in distribution occurs. ### Image Classification The _Animals with Attributes2_ (AwA2) dataset [50] consists of images of animals from \(50\) disjoint classes that are split into \(40\) seen classes, used for training, and \(10\) unseen classes, used for testing. The dataset also provides the relations among \(85\) attributes (e.g., "patches") and classes through a binary class-attribute matrix, where each entry indicates whether animals from a certain class exhibit an attribute or not. Following previous work [33, 32], we obtain weak supervision sources by fine-tuning ResNet-18 models [26] on the seen classes to detect each of the attributes. We use this dataset to construct a binary classification task with drift. We define two target classes over the unseen test classes. The first target class contains images from the classes "horse" and "sheep"; the second target class contains images from the classes "giraffe" and "bobcat". We use the class-attribute matrix to identify attributes that are helpful to distinguish between those two target classes. An attribute is helpful if 1) it appears only in one of the target classes and 2) it consistently appears or does not appear in both classes of each target class. 
Using these criteria, we choose the attribute detectors for "black", "white", "orange", "yellow", "spots", and "domestic" attributes as weak supervision sources. To create a dataset with drift, we sample \(5T\) images with replacement from the selected classes with \(T=4000\). We partition the data into five contiguous blocks of size \(T\). In the first block, we sample from the "sheep" and "bobcat" classes with a probability of \(0.1\) and from the "horse" and "giraffe" classes with a probability of \(0.9\). To create drift, we alternate the probability of sampling from each of the subclasses between \(0.1\) and \(0.9\) for consecutive blocks. In Table 1, we report the average accuracy over three random executions for three different methods: majority vote, a window size with all the previous examples (All Past), and our adaptive algorithm (Adaptive). Our algorithm outperforms the other two methods by \(1.87\) and \(7.02\) percentage points, respectively. These results emphasize the importance of properly selecting the window size for achieving good performance. In Figure 2, we visualize the accuracy of the adaptively selected window sizes and multiple fixed window sizes over time. As expected, the accuracy of fixed window sizes changes over time. For example, small window sizes achieve better accuracy shortly after a distribution shift occurs by limiting the number of out-of-distribution samples. Large window sizes achieve better accuracy toward the end of each block by using more samples from the same distribution. On the other hand, our algorithm successfully detects the drift and selects the best window size for each time step accordingly. As a result, our algorithm maintains a close-to-optimal performance for most of the time steps. These results emphasize that the optimal window size itself can change over time. We report the window sizes selected by our algorithm at each time step in Figure 3. 
Consistent with previous results on synthetic data, our algorithm successfully detects the drift and selects small window sizes to limit out-of-distribution samples. At the same time, for stationary periods, our algorithm selects large window sizes to include more samples from the same distribution. Figure 2: Accuracy of different window sizes for each time step for AwA2 dataset. The vertical lines mark a distribution shift. The accuracies are obtained from one random execution. Accuracies for window size \(r\) are only reported for time steps \(t\geq r\). For each time step \(t\), the reported accuracy is an average over the next \(256\) time steps, \(X_{t},\dots,X_{t+256-1}\). ## 7 Conclusion This paper presents the first method with rigorous guarantees for learning from multiple noisy sources of labels in non-stationary settings _without_ any prior assumptions on the nature of the changes over time. Instead of calculating a fixed strategy based on an assumed amount of drift (as in prior work [13; 22]), our method adapts to drift as it occurs based on the observed votes of the labelers. The major difference between our approach and all previous work is that we do not use prior information on the drift. Instead, at each time step, our algorithm compares its estimation obtained using different window sizes, and uses this information to adjust the window size for the decision at that step. Although the algorithm cannot explicitly quantify the drift, because it does not have any training data, we prove that the information observed is sufficient for efficiently adapting the window size to changes in the input distribution. Our experimental evaluation shows that our algorithm can dynamically adapt to drift as it occurs, adjusting its window size for best performance over time. As creating models with programmatic weak supervision becomes more common, our work offers practitioners a practical way to cope with drift in long-running and non-stationary deployments. 
**Limitations and Future Work.** A critical assumption in our work--like many in programmatic weak supervision and crowdsourcing--is that the errors of the labelers are conditionally independent (Assumption 2). In practice, this is often not a major limitation. For example, in our experiments with real-world data, the labelers are not conditionally independent. However, if the labelers' errors are strongly dependent, this can cause large errors in the estimates of their accuracies. Other work has looked at learning without any assumptions on the joint distribution of the errors of the labelers and the true label [8; 9; 10; 33; 32; 2], but they all assume that the examples are i.i.d. An important but highly challenging direction for future work is to learn from multiple noisy labelers with neither assumptions on the dependencies among labeler errors nor the i.i.d. assumption. Also, the method presented here does not extend to multi-class classification. One can follow the heuristic in [22], and execute multiple one-versus-all classifiers. However, this heuristic does not come with any formal analysis of the obtained result, and the outcome of different one-versus-all classifiers may not be consistent. Thus, provable multi-class classification under drift is another interesting open problem. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Majority Vote & All Past & **Adaptive** \\ \hline Accuracy & \(60.64\pm 0.20\) & \(55.49\pm 0.27\) & \(\mathbf{62.51\pm 0.09}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of different methods for the AwA2 dataset. For each method, the average accuracy and standard error of the mean are reported over three random executions. Additional results for all fixed-window-size strategies with window sizes in \(\mathcal{R}\) are reported in the supplementary material. Adaptive has the highest mean accuracy. Figure 3: Adaptively selected window sizes for each time step for AwA2 dataset. 
Vertical lines mark a distribution shift. Each point is an average of three random executions. ## Acknowledgements We gratefully acknowledge support from Cisco. This material is based on research sponsored by Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) under agreement number FA8750-19-2-1006 and by the National Science Foundation (NSF) under award IIS-1813444. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) or the U.S. Government. Disclosure: Stephen Bach is an advisor to Snorkel AI, a company that provides software and services for data-centric artificial intelligence.
2304.06996
Implementation of electromagnetic analogy to gravity mediated entanglement
Recently, experiments aimed at measuring gravity mediated entanglement (GME) using quantum information techniques have been proposed, based on the assumption that if two systems get entangled through local interactions with a gravitational field, then this field must be quantum. While there is a debate about what could be drawn from GME, quantum simulation might provide some clarification. Here, we present an electromagnetic analogy of GME using magnetic-field mediated interaction between the electron and nucleus in a single atom. Our work successfully implements the general procedures of GME experiments and confirms that the mediating field does not support the mean-field description. It also clarifies that, without considering the light-crossing time, the GME experiment would not distinguish a quantum-field-theory description from a quantum-controlled classical field one. Furthermore, this work provides a novel method to construct two-qubit systems in a single atom, and provides the first quantum simulation of GME using material qubits. It helps to conceive future GME experiments on the scale of the light-crossing time.
Ji Bian, Teng Liu, Pengfei Lu, Qifeng Lao, Xinxin Rao, Feng Zhu, Yang Liu, Le Luo
2023-04-14T08:32:25Z
http://arxiv.org/abs/2304.06996v1
# Implementation of electromagnetic analogy to gravity mediated entanglement ###### Abstract Recently, experiments aimed at measuring gravity mediated entanglement (GME) using quantum information techniques have been proposed, based on the assumption that if two systems get entangled through local interactions with a gravitational field, then this field must be quantum. While there is a debate about what could be drawn from GME, quantum simulation might provide some clarification. Here, we present an electromagnetic analogy of GME using magnetic-field mediated interaction between the electron and nucleus in a single atom. Our work successfully implements the general procedures of GME experiments and confirms that the mediating field does not support the mean-field description. It also clarifies that, without considering the light-crossing time, the GME experiment would not distinguish a quantum-field-theory description from a quantum-controlled classical field one. Furthermore, this work provides a novel method to construct two-qubit systems in a single atom, and provides the first quantum simulation of GME using material qubits. It helps to conceive future GME experiments on the scale of the light-crossing time. ## I Introduction Understanding gravity within the framework of quantum mechanics is one of the great challenges of modern physics. Due to the lack of empirical evidence, there is a debate on whether gravity is a quantum entity [1]. Recently, experiments aimed at measuring quantum gravitational effects, for example, the gravity mediated entanglement (GME) [2; 3] and the non-Gaussianity [4], in table-top experiments using quantum information techniques have been proposed, which might resolve this debate in the end. 
The GME proposals are based on a subtle logic that, if two quantum systems (e.g., two masses that could be prepared in spatial superposition) get entangled through local interactions with a third system (i.e., a gravitational field), then this third system itself must be quantum [2; 3]. Developments in quantum control of larger masses and measurement of gravitational fields of smaller masses may soon bring this important experimental test into practice [5; 6; 7]. At that time, both a positive (the masses are entangled) and a negative result will convey valuable information about quantum gravity [8]. However, there is also an on-going debate about the precise conclusions that could be drawn from the detection of GME [8; 9; 10; 11; 12; 13]. As suggested in Ref. [9], GME supports the view that gravitational fields are sourced coherently by superpositions of masses, instead of a mean-field description; however, it cannot tell whether gravity admits a quantum field description. Instead, measuring on the scale of the light-crossing time between the two masses would truly reveal quantum features of the gravitational interaction, as shown in Ref. [11]. Based on the detailed correspondence between electromagnetism and general relativity [14; 15], conducting a GME-like experiment by analogies in electromagnetism could bring additional insights to the above debates. In addition, with the rapid development of quantum information science, quantum simulation of the fundamental properties of spacetime has achieved fruitful results [16; 17; 18]. Thus, studies of the GME experiment using gravito-electromagnetic analogies [14; 19; 20] are highly attractive. In this article, we propose and perform a quantum simulation experiment of the GME effect, using magnetic-field mediated interaction between the electron and nucleus in a single atom. While the magnetic field is the analog of the gravitational field, the spins of the electron and nucleus play two roles in the GME experiment. 
The first role is the position of each mass: different spin states result in different magnetic fields, analogous to the way different positional combinations of the two masses create different gravitational fields. The second role is the spins possessed by the masses, which are used to detect entanglement. We take the quantum field theory (QFT) of the electromagnetic field as a premise, and provide a preview of the general experimental procedure of the GME experiment. Observation of entanglement in the final spin state rules out a classical mean-field description of the mediating field. Moreover, our experimental result confirms that the QFT description and the quantum-controlled classical field one cannot be distinguished by the original GME proposals, which stresses the necessity of measuring on the order of the light-crossing time [11]: demonstrating that entanglement due to interaction with the gravitational field is established between the two masses within a time smaller than the light-crossing time (entanglement harvesting [21]) would help to confirm the quantum nature of the mediating field (with an exception of the yet to be developed gravitational absorber theory [9]). We also point out that existing experiments on entanglement between remote trapped-ion qubits [22; 23; 24] could be modified into a simulation of such measurements in electromagnetism. The experiment utilizes the \(d(>2)\)-level system (qudit) in a single ion [25; 26; 27]. Moreover, since the four-level structure is created by the hyperfine interaction between the nucleus (spin-1/2) and the valence electron (spin-1/2), the 4-dimensional Hilbert space is naturally partitioned into two 2-dimensional subspaces. This allows us to speak about entanglement of two spins even in a single ion, and makes the experiment easier, as otherwise two trapped ions would have to be used to simulate the GME experiment [28; 29].
This work thus demonstrates the advantage that the multilevel structure of a quantum unit offers, i.e., simplification of the experimental setup [30; 31; 32], and may shed new light on quantum simulation using trapped ions [33].

## II Realization of the single-atom GME quantum simulator

The GME proposal [2; 3] is illustrated in Fig.1(a) (see Appendix VI.1). Consider two objects \(a\) and \(b\) with gravitational interaction, each of which has mass \(m\) and possesses spin-1/2, as shown in Fig.1(a). Denote the angular momentum operators \[I_{x}=\frac{\hbar}{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\ I_{y}=\frac{\hbar}{2}\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\ I_{z}=\frac{\hbar}{2}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\] and the eigenstates of \(I_{z}\) to be \(|\uparrow\rangle\) and \(|\downarrow\rangle\), with corresponding eigenvalues \(\hbar/2\) and \(-\hbar/2\) (we take \(\hbar=1\)). Each mass could be prepared in a spatial quantum superposition of two places, up (\(u\)) and down (\(d\)), correlated with its spin (e.g., via some Stern-Gerlach scheme). In this paper we presume the interaction is not instantaneous and is mediated by some field. The derivation of this scenario using a QFT-like framework of the gravitational field is already given in, e.g., Refs. [9; 10; 11]. Here, as the light-crossing time \(t_{c}\approx d_{0}/c\) is negligible, a quantum-controlled local classical field derivation would give essentially the same result. In the following we give a Hilbert space to the gravitational field merely to ease comparison with the electromagnetic simulation. Denote the state of the whole system as \(|\psi_{a}\rangle\otimes|\psi_{b}\rangle\otimes|g\rangle:=|\psi_{a}\rangle| \psi_{b}\rangle|g\rangle\), where \(|\psi_{a,b}\rangle\) is the spin state of \(a,b\) and \(|g\rangle\) represents the gravitational field.
The initial state reads \(\frac{1}{2}(|\uparrow\rangle+|\downarrow\rangle)(|\uparrow\rangle+|\downarrow \rangle)|g_{0}\rangle.\) After the gravitational interaction for time \(\tau\gg t_{c}\), the state becomes \[\begin{split}&\frac{1}{2}(e^{-i\phi_{\uparrow\uparrow}^{ \prime}}\left|\uparrow\right\rangle\left|\uparrow\right\rangle|g_{uu}\rangle+e^{-i \phi_{\uparrow\downarrow}^{\prime}}\left|\uparrow\right\rangle|\downarrow \rangle|g_{ud}\rangle\\ &+e^{-i\phi_{\downarrow\uparrow}^{\prime}}\left|\downarrow \right\rangle|\uparrow\rangle|g_{du}\rangle+e^{-i\phi_{\downarrow\downarrow}^ {\prime}}\left|\downarrow\right\rangle|\downarrow\rangle|g_{dd}\rangle), \end{split} \tag{1}\] where \(\phi_{\uparrow\uparrow}^{\prime}=\Phi_{uu}\tau/\hbar\), \(\phi_{\uparrow\downarrow}^{\prime}=\Phi_{ud}\tau/\hbar\), \(\phi_{\downarrow\uparrow}^{\prime}=\Phi_{du}\tau/\hbar\), \(\phi_{\downarrow\downarrow}^{\prime}=\Phi_{dd}\tau/\hbar\), and \(\Phi_{nj}=-Gm^{2}/d_{nj}\) (\(n,j=u,d\)) is the gravitational potential energy when \(a\) is in position \(n\) and \(b\) is in position \(j\), and \(|g_{nj}\rangle\) is the corresponding gravitational field state. Finally, one preserves the spin while disentangling the gravitational field; this could be done by coherently bringing \(a\) and \(b\) back to the middle position in all branches and factoring out \(|g_{0}\rangle\). Neglecting the time this procedure takes and omitting a global phase, one arrives at the final state (see Appendix VI.1) \[\frac{1}{2}(e^{-i\phi_{\uparrow\uparrow}}\left|\uparrow\right\rangle|\uparrow \rangle+|\uparrow\rangle\left|\downarrow\right\rangle+|\downarrow\rangle \left|\uparrow\right\rangle+e^{-i\phi_{\uparrow\uparrow}}\left|\downarrow \right\rangle|\downarrow\rangle)|g_{0}\rangle, \tag{2}\] where \(\phi_{\uparrow\uparrow}=(\Phi_{uu}-\Phi_{ud})\tau/\hbar\). As long as \(e^{-2i\phi_{\uparrow\uparrow}}\neq 1\), \(a\) and \(b\) are entangled [34]. To detect entanglement, one could utilize Bell inequalities, an entanglement witness, or quantum state tomography, as explained later.
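The entanglement criterion \(e^{-2i\phi_{\uparrow\uparrow}}\neq 1\) for the spin part of Eq. (2) can be checked numerically: the reduced state of one spin has purity below 1 exactly when the phase condition holds. A minimal sketch (the helper names and the use of `numpy` are our own illustration, not from the paper):

```python
import numpy as np

def gme_state(phi):
    """Spin part of Eq. (2): (e^{-i phi}|uu> + |ud> + |du> + e^{-i phi}|dd>)/2."""
    a = np.exp(-1j * phi)
    return 0.5 * np.array([a, 1, 1, a], dtype=complex)

def reduced_purity(psi):
    """Purity Tr(rho_a^2) of spin a after tracing out spin b."""
    m = psi.reshape(2, 2)          # amplitudes psi[n, j] for |n>|j>
    rho_a = m @ m.conj().T         # reduced density matrix of spin a
    return np.real(np.trace(rho_a @ rho_a))

# e^{-2i phi} = 1 gives a product state (purity 1); otherwise the spins are entangled
print(reduced_purity(gme_state(0.0)))       # purity 1: separable
print(reduced_purity(gme_state(np.pi)))     # purity 1: e^{-2i pi} = 1, still separable
print(reduced_purity(gme_state(np.pi / 2))) # purity 0.5: maximally entangled
```

The phase \(\phi_{\uparrow\uparrow}=\pi\) case shows why the criterion involves \(2\phi_{\uparrow\uparrow}\): the branch phases then differ only by a sign that factorizes.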
The observation of entanglement will support the view that gravitational fields are sourced coherently by the superposition of sources, and rule out classical mean-field descriptions. In the following, we present two intuitive analogies of GME in electromagnetism: one involves two electric charges with electric interaction, the other two spins with magnetic interaction. While the first case could be realized by two trapped-ion qubits, the latter is suitable for the single-atom simulation implemented in this work. First, we describe the case of two electric charges. Consider the same situation as in Fig.1(a), and suppose the gravitational interaction and the spin-spin interaction between \(a\) and \(b\) are too small and could be neglected. But now they both possess electric charge \(q\) and interact through the electric field. The derivation of this scenario using a QFT description of the electromagnetic field is already given in, e.g., Refs. [9; 10]. Here, as the light-crossing time is negligible, a quantum-controlled classical field derivation [11] would give a result with negligible difference. For the same reason, a non-relativistic (instantaneous interaction) calculation is further applied merely as a convenient approximation. The state of the whole system before the final measurement is then \[\frac{1}{2}(e^{-i\phi_{\uparrow\uparrow}}\left|\uparrow\right\rangle|\uparrow \rangle+|\uparrow\rangle\left|\downarrow\right\rangle+|\downarrow\rangle \left|\uparrow\right\rangle+e^{-i\phi_{\uparrow\uparrow}}\left|\downarrow \right\rangle|\downarrow\rangle)|E_{0}\rangle, \tag{3}\] where \(|E_{0}\rangle\) represents the electric field when \(a\) and \(b\) are both in the middle position. \(\phi_{\uparrow\uparrow}\) has the same form as in (2), and \(\Phi_{nj}\) (\(n,j=u,d\)) now represents the electric potential energy.
In this example, the quantum nature (ability to stay in quantum superposition correlated with the source) of the electric field is necessary to generate the final entanglement between \(a\) and \(b\). It is worth mentioning that two-qubit entangling gates in trapped-ion quantum information processors essentially utilize a similar mechanism to the one given above [35; 36; 37; 38; 39; 40; 41; 42]. In Sec.IV.2, we provide a reinterpretation of existing experiments [39] as a simulation of GME. Secondly, consider the scenario of two spins \(a\) and \(b\) in free space that interact with each other through the magnetic field they produce, i.e., the magnetic dipole-dipole interaction [43; 44]. The derivation of this interaction in the QFT framework is given in Sec.VI.4 [45; 46]. When the light-crossing time is negligible, the interaction Hamiltonian is well approximated by \(H_{dd}=\lambda[3(\mathbf{I}_{a}\cdot\mathbf{r})(\mathbf{I}_{b}\cdot\mathbf{r})- \mathbf{I}_{a}\cdot\mathbf{I}_{b}]\), where \(\mathbf{I}_{a(b)}\) is the angular momentum operator with components \(I_{a(b),k}\) (\(k=x,y,z\)), \(\mathbf{r}\) is the unit vector along the line joining the two spins, and \(\lambda\) is a constant depending on the distance between the spins and their gyromagnetic ratios (the tensor products '\(\otimes\)' between operators are omitted) [43]. When there is a strong magnetic field along \(z\), \(H_{dd}\) could be further simplified to \[H_{zz}=\lambda I_{a,z}I_{b,z}, \tag{4}\] which is called the secular approximation [43]. This form of interaction results in a final state similar to (2), and could alternatively be realized by choosing a proper rotating reference frame, as explained later. Denote the eigenvalues of \(H_{zz}\) by \(S_{nj}\) (\(n,j\) could be \(\uparrow,\downarrow\)), with corresponding eigenstates \(|n\rangle|j\rangle\). \(S_{nj}\) equals the magnetic potential energy when \(a\) is in \(|n\rangle\) and \(b\) is in \(|j\rangle\).
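The entangling phase generated by the secular Hamiltonian of Eq. (4) can be reproduced by direct matrix exponentiation: evolving the product state for time \(\tau\) gives a relative phase \(e^{-i\lambda\tau/2}\) between the \(|\uparrow\rangle|\uparrow\rangle\) and \(|\uparrow\rangle|\downarrow\rangle\) branches, matching \(\phi_{\uparrow\uparrow}=\lambda\tau/2\) quoted later. A quick sketch; the numerical values of \(\lambda\) and \(\tau\) below are illustrative assumptions, not the experimental parameters:

```python
import numpy as np
from scipy.linalg import expm

lam, tau = 2 * np.pi * 0.25e6, 2e-6   # assumed coupling (rad/s) and interaction time (s)

Iz = 0.5 * np.diag([1.0, -1.0])
H_zz = lam * np.kron(Iz, Iz)          # secular dipole-dipole Hamiltonian, Eq. (4)

psi0 = 0.5 * np.ones(4, dtype=complex)          # both spins in (|up> + |down>)/sqrt(2)
psi_f = expm(-1j * H_zz * tau) @ psi0

# Relative phase of |up,up> with respect to |up,down> is exp(-i lam tau / 2)
rel_phase = psi_f[0] / psi_f[1]
print(rel_phase, np.exp(-1j * lam * tau / 2))
```

With these assumed values \(\lambda\tau/2=\pi/2\), so the branch phase is \(-i\) and the state is maximally entangled.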
Simulating the GME experiment using the magnetic field under \(H_{zz}\) is slightly different from the electric field case. This is because the spin states themselves determine the magnetic field, unlike in GME or the electric field case, where the gravitational or electric field depends directly on the positions of \(a\) and \(b\) only. Different spin states \(|\psi_{a}\rangle|\psi_{b}\rangle\) could give the same gravitational field (as long as the position configurations are the same), while this is generally not true for the magnetic field case. Consequently, the protocol using the magnetic field is as follows. Assume that at the beginning \(a\) and \(b\) are far apart (or magnetically shielded) from each other, such that the interaction between them could be ignored and their spin states are separable. First prepare each spin in \(\frac{1}{\sqrt{2}}(|\uparrow\rangle+|\downarrow\rangle)\). To see the time-evolved state, first note that starting from \(|n\rangle|j\rangle|M_{0}\rangle\), where \(n,j=\uparrow,\downarrow\) and \(|M_{0}\rangle\) is the initial magnetic field state, the total state of spin and field at time \(t\) under the QFT interaction Hamiltonian \(H_{\mathrm{F}}\) (containing the spin+field degrees of freedom, see Sec.VI.4) could be written as \(\alpha(t)|n\rangle|j\rangle|M_{nj}(t)\rangle+\sum_{n^{\prime}j^{\prime}}\beta _{n^{\prime}j^{\prime}}(t)|n^{\prime}\rangle|j^{\prime}\rangle|M_{n^{\prime}j ^{\prime}}(t)\rangle\), where \(n^{\prime}j^{\prime}\) are combinations different from \(nj\), \(|\alpha(t)|^{2}+\sum_{n^{\prime}j^{\prime}}|\beta_{n^{\prime}j^{\prime}}|^{2}=1\), and \(|M_{nj(n^{\prime}j^{\prime})}(t)\rangle\) is the corresponding magnetic field state. As \(H_{\mathrm{F}}\) could be simplified to \(H_{zz}\), we have \(|\alpha(t)|\approx 1\) and \(|M_{nj}(t)\rangle\approx|M_{0}\rangle\); that is, the field state remains approximately unchanged, and the initial state is an approximate eigenstate of \(H_{\rm F}\), hence it only acquires a phase during the interaction period. On the other hand, the difference between the \(|M_{nj}(t)\rangle\) for different \(nj\), although tiny, is crucial in generating the final entanglement of the spins, as explained in Sec.VI.4. So we write \(|M_{nj}(t)\rangle=|M_{nj}\rangle\) as a constant state and set \(|\alpha(t)|=1\).

Figure 1: (a) Configuration of the GME experiment considered in this work. The left-right configuration in the original GME proposal [2] is modified to be up-down here to simplify the simulation experiment; they are essentially the same. Objects \(a\) and \(b\) could be put into spatial superpositions of \(u\) and \(d\) correlated with their spin states, and there is gravitational interaction between them. (b) The s-electron (e) in the outermost shell and the nucleus (n) in a Hydrogen-like atom, or in an alkaline-like ion. The electron spin and the nuclear spin interact through the magnetic field, resulting in the interaction Hamiltonian \(H_{f}\). In the \({}^{171}\)Yb\({}^{+}\) ion used here, there is a magnetic field \(B_{0}\) in \(z\) providing the quantization axis. The corresponding four-level subspace is controlled by microwave fields. The total population on \(|1\rangle\), \(|2\rangle\), and \(|4\rangle\) is measured by applying the 369.5 nm detection laser beam and measuring fluorescence. (c) The experimental pulse sequence in \(f_{O}\) (see Sec.VI.3). After initializing to \(|3\rangle\) by optical pumping, we prepare \(|\Psi_{0}\rangle\) by sequentially applying microwave pulses. We then evolve the system by (9) and arrive at \(|\Psi_{f}\rangle\). Finally we apply tomographic pulses and read out the population on \(|3\rangle\) by fluorescence measurement. Here, as an example, the tomographic pulses correspond to measuring Re(\(\langle 1|\rho_{f}|4\rangle\)). The labels on the pulses are explained in the main text.
Thus, after letting the two spins interact through the magnetic field for time \(\tau\), the final state is \[\begin{split}|\Psi_{f}\rangle=&\frac{1}{2}(e^{-i \phi_{\uparrow\uparrow}}\ket{\uparrow}\ket{\uparrow}|M_{\uparrow\uparrow} \rangle+\ket{\uparrow}|\downarrow\rangle|M_{\uparrow\downarrow}\rangle\\ &+\ket{\downarrow}\ket{\uparrow}|M_{\downarrow\uparrow}\rangle+ e^{-i\phi_{\uparrow\uparrow}}\ket{\downarrow}|\downarrow\rangle|M_{\downarrow \downarrow}\rangle),\end{split} \tag{5}\] omitting the irrelevant global phase, where \(\phi_{\uparrow\uparrow}=(S_{\uparrow\uparrow}-S_{\uparrow\downarrow})\tau= \lambda\tau/2\). Finally, we measure and detect entanglement. We choose to perform quantum state tomography, as it has the ability to deliver a definitive negative experimental outcome (i.e., no entanglement), and is also more straightforward to implement on our setup. Unlike in the gravitational field case, we are not able to disentangle the magnetic field state. However, as explained in Sec.VI.2 and Sec.VI.4, the fact that \(H_{F}\) could be simplified to \(H_{zz}\) indicates \(|M_{nj}\rangle\approx|M_{n^{\prime}j^{\prime}}\rangle\) in (5). Denote by \(\rho_{f}\) the density matrix after tracing out the field degrees of freedom in (5); we could thus apply the conventional two-qubit state tomography technique and calculate the entanglement of formation \(W(\rho_{f})\) [47] to detect entanglement. This also suggests that if the field states in different branches are made close to each other in future GME experiments, then it might not be necessary to coherently bring back the masses (as this might introduce additional experimental error): one could trace out the field and detect entanglement between the spins. We are going to simulate the GME experiment using an electron spin and a nuclear spin in a single atom, as shown in Fig.1(b). The idea is essentially the same as explained above.
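Once \(\rho_{f}\) has been reconstructed by tomography, an entanglement measure can be evaluated directly on the matrix. As a sketch, here is the standard Wootters concurrence, which vanishes exactly when the entanglement of formation does; we do not assume the paper's particular normalization of \(W\), and the helper names are our own:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    R = rho @ flip @ rho.conj() @ flip          # rho times its spin-flipped copy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def ideal_rho(phi):
    """Spin density matrix of the ideal final state, Eq. (5), with the field traced out."""
    a = np.exp(-1j * phi)
    psi = 0.5 * np.array([a, 1, 1, a], dtype=complex)
    return np.outer(psi, psi.conj())

print(concurrence(ideal_rho(0.0)))        # ~0: separable
print(concurrence(ideal_rho(np.pi / 2)))  # ~1: maximally entangled
```

For the ideal pure state the concurrence is \(|\sin\phi_{\uparrow\uparrow}|\), consistent with the criterion \(e^{-2i\phi_{\uparrow\uparrow}}\neq 1\).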
Consider the s-electron in the outermost shell and the nucleus in a Hydrogen-like atom, or in an alkaline-like ion, e.g., the \({}^{171}\)Yb\({}^{+}\) ion used here. The s-electron forms a spherically symmetric charge distribution surrounding the nucleus. The nuclear spin feels the magnetic field produced by the electron and vice versa. The system could be treated as a uniformly magnetized sphere interacting with a spin at its center [48]. The magnetic dipole-dipole interaction between two dipoles is derived in Sec.VI.4 under the QFT framework, and it is well approximated by \(H_{\rm dd}\) when the two dipoles are close to each other. So if we neglect the light-crossing time (such that a QFT or a quantum-controlled classical field derivation could be conveniently approximated by a non-relativistic quantum mechanical one), we can start directly from \(H_{\rm dd}\) to approximate the QFT description. Then, following e.g. Ref. [48], we obtain the hyperfine interaction \(H_{h}=A(I_{a,x}I_{b,x}+I_{a,y}I_{b,y}+I_{a,z}I_{b,z})\), where \(A\) is the hyperfine interaction constant, \(a\) represents the nucleus, and \(b\) the electron. The influence of the remaining symmetric core of paired electrons could be approximated as a correction term that contributes to the interaction constant [49]. The ion is typically in an external magnetic field \(B_{0}\) providing the quantization axis (denote it the \(z\) axis), so the free evolution Hamiltonian is \[\begin{split} H_{f}&=A(I_{a,x}I_{b,x}+I_{a,y}I_{b,y }+I_{a,z}I_{b,z})\\ &-B_{0}(\gamma_{a}I_{a,z}I+\gamma_{b}II_{b,z})\\ &=E_{1}|1\rangle\langle 1|+E_{2}|2\rangle\langle 2|+E_{3}|3 \rangle\langle 3|+E_{4}|4\rangle\langle 4|,\end{split} \tag{6}\] where \(\gamma_{a,b}\) are the gyromagnetic ratios and \(I\) is the \(2\times 2\) identity matrix.
The states \(|1\rangle\), \(|2\rangle\), \(|3\rangle\), \(|4\rangle\) and \(|\uparrow\rangle|\uparrow\rangle\), \(|\uparrow\rangle|\downarrow\rangle\), \(|\downarrow\rangle|\uparrow\rangle\), \(|\downarrow\rangle|\downarrow\rangle\) are related by a mapping operator \(R\), as explained in Sec.VI.3: \[\begin{split}|1\rangle&=|\uparrow\rangle\ket{ \uparrow},\quad|2\rangle=\cos\frac{\theta}{2}\ket{\uparrow}|\downarrow\rangle- \sin\frac{\theta}{2}\ket{\downarrow}|\uparrow\rangle\\ |3\rangle&=\sin\frac{\theta}{2}\ket{\uparrow}| \downarrow\rangle+\cos\frac{\theta}{2}\ket{\downarrow}|\uparrow\rangle\,, \quad|4\rangle=|\downarrow\rangle\ket{\downarrow},\end{split} \tag{7}\] where \(\theta\) depends on \(B_{0}\) (\(\theta\approx-\frac{\pi}{2}\) here). In \({}^{171}\)Yb\({}^{+}\), (6) corresponds to the \({}^{2}S_{1/2}\) four-level subspace, as shown in Fig.1(b). Although (6) is different from (4) in the lab frame, (6) seen from a rotating reference frame \(f_{O}\) takes the same form as (4). Define a rotating frame \(f_{O}\) with rotation operator \(O\), \(|\psi\rangle_{R}=O|\psi\rangle_{L}\), where \(|\psi\rangle_{L}\) is the state in the lab frame and \(|\psi\rangle_{R}\) in the rotating frame. The evolution operators and Hamiltonian are changed accordingly. The rotation operator is \[\begin{split} O=&\text{exp}[i\int_{0}^{t^{\prime}}( \delta_{1}(t)|1\rangle\langle 1|+\delta_{2}(t)|2\rangle\langle 2|\\ &+\delta_{3}(t)|3\rangle\langle 3|+\delta_{4}(t)|4\rangle\langle 4| )dt].\end{split} \tag{8}\] Working in this frame makes designing the control sequence easier, and it does not affect the population readout. In the interaction period \(\tau\), we set \(\delta_{1,4}=E_{1,4}-\Delta\) and \(\delta_{2,3}=E_{2,3}\).
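The block structure behind Eq. (7) — \(|\uparrow\rangle|\uparrow\rangle\) and \(|\downarrow\rangle|\downarrow\rangle\) exact eigenstates, with only the middle pair mixed by the angle \(\theta\) — can be seen by diagonalizing Eq. (6) directly. The parameter values below are illustrative placeholders, not the \({}^{171}\)Yb\({}^{+}\) constants:

```python
import numpy as np

# Assumed illustrative parameters: hyperfine constant A, field B0,
# gyromagnetic ratios gamma_a (nucleus) and gamma_b (electron).
A, B0, ga, gb = 1.0, 0.8, 0.1, -2.0

Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# H_f of Eq. (6) in the basis {|uu>, |ud>, |du>, |dd>} (nucleus a first, electron b second)
H_f = A * (np.kron(Ix, Ix) + np.kron(Iy, Iy) + np.kron(Iz, Iz)) \
    - B0 * (ga * np.kron(Iz, I2) + gb * np.kron(I2, Iz))

evals, evecs = np.linalg.eigh(H_f)
for k in range(4):
    # each eigenvector is either a bare |uu>/|dd> or lives in span{|ud>, |du>}
    print(np.round(np.abs(evecs[:, k]), 3))
```

The mixing angle of the middle \(2\times 2\) block plays the role of \(\theta\) in Eq. (7); its value depends on \(A\), \(B_{0}\), and \(\gamma_{a,b}\).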
The evolution operator in this period is then \[\begin{split} U(\Delta,\tau)&=\text{exp}[-i(OH_{f}O^{ \dagger}+i\dot{O}O^{\dagger})\tau]\\ &=\text{exp}[-i(\Delta|1\rangle\langle 1|+\Delta|4\rangle\langle 4|) \tau]\\ &=\text{exp}[-i(\frac{\Delta}{2}|1\rangle\langle 1|-\frac{\Delta}{2}|2 \rangle\langle 2|\\ &-\frac{\Delta}{2}|3\rangle\langle 3|+\frac{\Delta}{2}|4\rangle \langle 4|)\tau]\text{exp}[-i\frac{\Delta}{2}\tau]\\ &=\text{exp}(-i2\Delta I_{a,z}I_{b,z}\tau)\text{exp}(-i\frac{ \Delta}{2}\tau).\end{split} \tag{9}\] This is the desired evolution under a Hamiltonian of the form \(H_{zz}\) (ignoring the irrelevant global phase \(\text{exp}(-i\frac{\Delta}{2}\tau)\)). The fact that the last equality holds is crucial, as it allows a straightforward implementation of the experiment. We also set \(\delta_{1,2,3,4}=E_{1,2,3,4}\) during the initial state preparation and final tomography steps. With an appropriate choice of the control-pulse phases, in this \(f_{O}\), the interaction between the spins is "tailored" to the form of \(H_{zz}\) during the interaction period. The fact that (8) is itself an entangling operation does not affect the validity of the simulator (see Sec.VI.2 for an explanation). To sum up, the protocol using a single ion is: 1) First prepare the state \[\begin{split}\ket{\Psi_{0}}=&\frac{1}{2}(\ket{\uparrow }\ket{\uparrow}\ket{M_{\uparrow\uparrow}}+\ket{\uparrow}\ket{\downarrow}\ket{ M_{\uparrow\downarrow}}\\ &+\ket{\downarrow}\ket{\uparrow}\ket{M_{\downarrow\uparrow}}+ \ket{\downarrow}\ket{\downarrow}\ket{M_{\downarrow\downarrow}})\end{split} \tag{10}\] in \(f_{O}\). 2) Then let the system evolve for time \(\tau\).
In \(f_{O}\), the evolution is under the Hamiltonian \(H=2\Delta I_{z}I_{z}\), where \(\Delta\) could be varied by adjusting \(\delta_{1,2,3,4}\) in (8), and the final state \[\begin{split}\ket{\Psi_{f}}=&\frac{1}{2}(e^{-i\phi_{ \uparrow\uparrow}}\ket{\uparrow}\ket{\uparrow}\ket{M_{\uparrow\uparrow}}+ \ket{\uparrow}\ket{\downarrow}\ket{M_{\uparrow\downarrow}}\\ &+\ket{\downarrow}\ket{\uparrow}\ket{M_{\downarrow\uparrow}}+e^{-i \phi_{\uparrow\uparrow}}\ket{\downarrow}\ket{\downarrow}\ket{M_{\downarrow \downarrow}}),\end{split} \tag{11}\] where \(\phi_{\uparrow\uparrow}=\Delta\tau\), is achieved. 3) Detect entanglement of the two spins in (11) after tracing out the field.

## III Observation of the field mediated entanglement

The experiment is performed on the trapped \({}^{171}\)Yb\({}^{+}\) ion quantum information processor, as shown in Fig.1(b). A magnetic field with strength \(B_{0}\) along \(z\) provides the quantization axis. The \({}^{2}\)S\({}_{1/2}\) hyperfine energy levels result from the hyperfine interaction between the \({}^{171}\)Yb nuclear spin and the s-electron spin in the valence shell. A microwave field with tunable frequency, amplitude, and phase is used to control the spin states. The population on \(|3\rangle\), \(P_{3}\), could be read out by first measuring the total population \(P_{1}+P_{2}+P_{4}\) on \(|1\rangle\), \(|2\rangle\), and \(|4\rangle\) through applying the 369.5 nm detection beam and measuring the fluorescence; then \(P_{3}=1-(P_{1}+P_{2}+P_{4})\). The initial state of each experiment is \(|3\rangle\), which is dissipatively prepared by optical pumping (note, interestingly, that \(|3\rangle\) is actually already an entangled state). Details of the experimental apparatus and parameters of the four-level subspace are given in Sec.VI.3.
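The last equality of Eq. (9), on which the protocol rests, is an operator identity: \(|1\rangle\langle 1|+|4\rangle\langle 4|\) equals \(|\uparrow\uparrow\rangle\langle\uparrow\uparrow|+|\downarrow\downarrow\rangle\langle\downarrow\downarrow|\) for any mixing angle \(\theta\), because \(|2\rangle\) and \(|3\rangle\) span the same middle subspace. It can be verified numerically; the values of \(\Delta\) and \(\tau\) below are arbitrary test inputs, not the experimental settings:

```python
import numpy as np
from scipy.linalg import expm

Delta, tau = 2 * np.pi * 0.125e6, 2e-6   # assumed detuning (rad/s) and duration (s)

Iz = 0.5 * np.diag([1.0, -1.0])
# |1><1| + |4><4| projects onto the |up,up>, |down,down> subspace for any theta
P14 = np.diag([1.0, 0.0, 0.0, 1.0])

U_lhs = expm(-1j * Delta * P14 * tau)
U_rhs = expm(-1j * 2 * Delta * np.kron(Iz, Iz) * tau) * np.exp(-1j * Delta * tau / 2)

print(np.allclose(U_lhs, U_rhs))   # True
```

This is why only the two detunings \(\delta_{1,4}=E_{1,4}-\Delta\) need to be shifted to realize an \(H_{zz}\)-type evolution.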
The experimental sequence is illustrated in Fig.1(c); in the following, the states and operators are all in \(f_{O}\). 1) After initializing the system to \(|3\rangle\) through optical pumping, prepare the state \(|\Psi_{0}\rangle\) by sequentially applying microwave pulses resonant with the transitions \(|3\rangle\leftrightarrow|4\rangle\), \(|1\rangle\leftrightarrow|3\rangle\), \(|2\rangle\leftrightarrow|3\rangle\). Here, pulses on transitions \(|n\rangle\leftrightarrow|m\rangle\) (\(n<m\)) with labels \(\pm y,\theta\) represent evolution operators \(U_{p}=\exp[\mp i(-i|n\rangle\langle m|+i|m\rangle\langle n|)\theta]\). The desired evolution could be realized by adjusting the pulse width, microwave field strength, and phase, as explained in Sec.VI.3; \(\theta_{13,23}\) are also given there. 2) The interaction period. Evolve the system by \(U(\Delta,\tau)\) (Eq.(9)) for \(\tau\). To achieve this, recall that we are in \(f_{O}\): we just need to let the system evolve freely for \(\tau\). For this to work, we have to apply an extra \(\phi=\Delta\tau\) phase shift to the subsequent readout microwave pulses. This is a common technique in quantum information processing experiments to achieve operations under \(H_{zz}\), e.g., when implementing controlled-phase (CZ) gates [50; 51]. 3) Finally, we apply quantum state tomography to obtain the full density matrix of the two-spin system and detect entanglement. As the direct observable is \(|3\rangle\langle 3|\), we convert the populations and coherences to the \(\{|2\rangle,|3\rangle\}\), \(\{|1\rangle,|3\rangle\}\), or \(\{|3\rangle,|4\rangle\}\) subspace by \(\pi\) pulses, and then read them out by standard qubit-tomography techniques. For example, in Fig.1(c) we show the readout pulses obtaining \(\mathrm{Re}(\langle 1|\rho_{f}|4\rangle)\).
Assume that after this sequence the population on \(|3\rangle\) is measured to be \(P_{3}^{\prime}\), and that we have already measured \(P_{1}\) and \(P_{4}\) (by \(\pi\) pulses on \(|1,4\rangle\leftrightarrow|3\rangle\) and readout); then \(\mathrm{Re}(\langle 1|\rho_{f}|4\rangle)=(P_{1}+P_{4})/2-P_{3}^{\prime}\). After obtaining all \(\langle u|\rho_{f}|v\rangle\) (\(u,v=1\), 2, 3, 4), we easily recover \(\langle nj|\rho_{f}|kl\rangle\) (\(n,j,k,l=\uparrow\) or \(\downarrow\)) by applying the mapping operator \(R\). When \(\tau=0\), the system is in the initial separable state. The theoretical and experimental density matrices \(\rho_{\mathrm{th}}\) and \(\rho_{\mathrm{exp}}\) are shown in Fig.2(a). The state fidelity is \(F=\left|\mathrm{Tr}(\rho_{\mathrm{th}}\rho_{\mathrm{exp}})\right|/\sqrt{ \mathrm{Tr}(\rho_{\mathrm{th}}\rho_{\mathrm{th}})\mathrm{Tr}(\rho_{\mathrm{ exp}}\rho_{\mathrm{exp}})}\approx 0.97\pm 0.01\). To further quantify the degree of entanglement, we calculate the entanglement of formation \(W(\rho)\) of \(\rho\). \(W[\rho_{\mathrm{exp}}(\phi_{\uparrow\uparrow}=0)]=0\), confirming the separability. After an interaction period \(\tau\), the two spins are generally entangled. For example, we choose \(\tau=2\)\(\mu\)s, \(\Delta=\pi/(2\tau)=0.785\) MHz, and the final state will be maximally entangled (\(\phi_{\uparrow\uparrow}=\pi/2\)). This could serve to simulate a GME experiment with the same \(\phi_{\uparrow\uparrow}\), e.g., \(m=10^{-14}\) kg, \(\tau=2.5\) s, \(d_{uu}=200\)\(\mu\)m, \(d_{ud}=280\)\(\mu\)m, which could be realized by future interferometers using masses with embedded spins [2]. The experimental results are shown in Fig.2(b), where \(F\approx 0.98\pm 0.01\). \(W[\rho_{\mathrm{exp}}(\phi_{\uparrow\uparrow}=\pi/2)]=0.66\pm 0.06>0\), which means the state is entangled.
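The readout relation \(\mathrm{Re}(\langle 1|\rho_{f}|4\rangle)=(P_{1}+P_{4})/2-P_{3}^{\prime}\) can be sanity-checked on a random density matrix, modeling the tomographic pulses as a \(\pi/2\) rotation on the \(\{|1\rangle,|4\rangle\}\) transition followed by a \(\pi\) swap of \(|1\rangle\) and \(|3\rangle\). This is our assumed pulse-phase convention; the experiment's conventions may differ by signs:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

# A random two-qubit density matrix in the |1>,|2>,|3>,|4> basis
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

P1, P4 = rho[0, 0].real, rho[3, 3].real

# pi/2 "sigma_y" pulse on the {|1>,|4>} transition, then a pi pulse swapping |1> <-> |3>
sy14 = np.zeros((4, 4), dtype=complex)
sy14[0, 3], sy14[3, 0] = -1j, 1j
U_half = expm(-1j * (np.pi / 4) * sy14)
swap13 = np.eye(4)
swap13[[0, 2]] = swap13[[2, 0]]

rho_after = swap13 @ U_half @ rho @ U_half.conj().T @ swap13.T
P3p = rho_after[2, 2].real

print(np.isclose(rho[0, 3].real, (P1 + P4) / 2 - P3p))   # True
```

The same construction with the other pulse phase yields the imaginary part of the coherence.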
Figure 2: (a) Real parts of the quantum state tomography of \(\rho_{\mathrm{exp}}(\phi_{\uparrow\uparrow}=0,\tau=0)\) (lower panel) and the corresponding \(\rho_{\mathrm{th}}\) (upper panel). The imaginary parts of \(\rho_{\mathrm{th}}\), \(\mathrm{Im}(\rho_{\mathrm{th}})\), are all zero; the elements of \(\mathrm{Im}(\rho_{\mathrm{exp}})\) are all smaller than 0.06 and are not shown. The fidelity \(F\approx 0.97\pm 0.01\). \(|i\rangle|j\rangle\) are shown as \(|ij\rangle\) (\(i,j=\uparrow\) or \(\downarrow\)) for simplicity. (b) Real and imaginary parts of the quantum state tomography of \(\rho_{\mathrm{exp}}(\phi_{\uparrow\uparrow}=\pi/2,\tau=2\)\(\mu\)s), \(F\approx 0.98\pm 0.01\).

The deviation from the ideal value 1 is mainly due to pulse errors and decoherence in steps 1) and 3). We also vary \(\Delta\) with fixed \(\tau=2\)\(\mu\)s; the results are, e.g., \(W[\rho_{\rm exp}(\phi_{\uparrow\uparrow}=\pi/4)]=0.40\pm 0.02\) with \(F=0.98\pm 0.01\) (\(W[\rho_{\rm th}(\phi_{\uparrow\uparrow}=\pi/4)]=0.5\)), and \(W[\rho_{\rm exp}(\phi_{\uparrow\uparrow}=\pi)]=0.02^{+0.03}_{-0.02}\) with \(F=0.98\pm 0.01\) (\(W[\rho_{\rm th}(\phi_{\uparrow\uparrow}=\pi)]=0\)). If a negative outcome is encountered in future GME experiments, it could result from the following three possibilities: 1. Gravity does not have standard quantum mechanical properties; 2. There is experimental noise, such as magnetic noise and gravitational noise, that causes decoherence; 3. There are other new mechanisms present, such as the spontaneous collapse mechanism [52], that lead to a strong loss of coherence on the time scale of the experiment. We observe the decoherence effect in the simulation experiment. While in our case such an effect is due to the magnetic noise described in possibility 2, it also provides a mechanism to simulate possibility 3.
In the \(zz\) basis (defined in Sec.VI.3), denote the density matrix corresponding to the spin part of (2) by \(\rho_{f}(\tau)=\frac{1}{4}\sum_{n,j,k,l}\alpha_{njkl}|nj\rangle\langle kl|\), ignoring the gravitational part as it is already factored out. Assume for simplicity that the decoherence introduces a decay factor \(e^{-\tau/\beta_{njkl}}\) to each off-diagonal element; then \(\alpha_{njkl}=e^{-\tau/\beta_{njkl}}e^{i\varphi_{njkl}}\), where \(e^{i\varphi_{njkl}}\) are the phases without decoherence. Denote the final state of the simulator by \(\rho_{f}^{s}(\tau)=\frac{1}{4}\sum_{n,j,k,l}\alpha_{njkl}^{s}|nj\rangle\langle kl|\), also ignoring the magnetic field states. The simulator is also undergoing a decoherence process, as shown by the Ramsey \(T_{2}^{*}\) measurements among different transitions. Through the mapping operator \(R\), the exponential decay of the coherences \(\langle b|\rho_{f}^{s}(\tau)|a\rangle\) (\(a,b=1,\,2,\,3,\,4\)) manifests itself as the decay factors \(e^{-\tau/\beta_{njkl}^{s}}\) in \(\alpha_{njkl}^{s}\). Thus \(\alpha_{njkl}^{s}=e^{-\tau/\beta_{njkl}^{s}}e^{i\varphi_{njkl}^{s}}\), where \(e^{i\varphi_{njkl}^{s}}\) are the phases without decoherence. Clearly, the decoherence in GME could be simulated by the decoherence process in the simulator. We fix \(\phi_{\uparrow\uparrow}=\pi/2\) and vary \(\tau\) from 0 to 400 \(\mu\)s (\(\Delta\) is varied accordingly). The resulting \(W[\rho_{\rm exp}(\phi_{\uparrow\uparrow}=\Delta\tau=\pi/2)]\) is illustrated in Fig.3. When \(\tau\) is small, \(W\) is relatively large, indicating a state with a high degree of entanglement. The deviation of \(W\) from the ideal value 1 is due to the error and decoherence in the preparation and tomography steps. As \(\tau\) is increased, \(W\) decreases, and eventually approaches 0. This demonstrates the importance of minimizing decoherence due to technical noise in future GME experiments.
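The decay model above can be sketched by applying a single, uniform decay constant \(\beta\) (an assumed simplification of the per-element \(\beta_{njkl}\)) to every off-diagonal element of the ideal \(\phi_{\uparrow\uparrow}=\pi/2\) state, and watching an entanglement monotone fall toward zero. Here we use the negativity for simplicity, rather than the paper's \(W\); it vanishes for this state class exactly when entanglement is lost:

```python
import numpy as np

def negativity(rho):
    """Negativity: sum of |negative eigenvalues| of the partial transpose."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(sum(-e for e in np.linalg.eigvalsh(rho_pt) if e < 0))

def rho_decohered(phi, tau, beta):
    """Ideal spin state with every off-diagonal element damped by exp(-tau/beta)."""
    a = np.exp(-1j * phi)
    psi = 0.5 * np.array([a, 1, 1, a], dtype=complex)
    rho = np.outer(psi, psi.conj())
    decay = np.full((4, 4), np.exp(-tau / beta))
    np.fill_diagonal(decay, 1.0)
    return rho * decay

for t in [0.0, 50e-6, 200e-6, 800e-6]:
    print(t, negativity(rho_decohered(np.pi / 2, t, beta=100e-6)))
```

In this uniform-decay model the damped state is the mixture \(e^{-\tau/\beta}\rho_{f}(0)+(1-e^{-\tau/\beta})\,\mathrm{diag}(\rho_{f}(0))\), so entanglement is lost entirely once the coherences fall below a threshold, mirroring the decay of \(W\) in Fig.3.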
It would also be crucial to have a good characterization of the magnetic noise if one wishes to further tell apart decoherence due to spontaneous collapse from decoherence due to magnetic noise.

## IV Discussion

### Indistinguishability of the quantum-controlled classical field and the QFT description

The result of entanglement detection in this work "rules out" a mean-field source for the electromagnetic potential [9]. As described in Sec.VI.4, the result is consistent with both the QFT description of the electromagnetic field and the quantum-controlled classical field description. So, the two scenarios cannot be distinguished by the current GME experiments. To tell these two descriptions apart, measuring on the order of the light-crossing time in future GME experiments is required, that is, demonstrating that the two masses begin to get entangled due to gravitational interaction within the light-crossing time, or measuring tinier quantitative effects around the light-crossing time [11]. It is interesting to point out that existing experiments on entanglement between remote trapped-ion qubits [22; 23; 24] could be modified into a demonstration of this effect in electromagnetism. For example, in Ref.[22], starting from time \(t=0\), two \({}^{171}\)Yb\({}^{+}\) atomic qubits (\(|2,3\rangle\) form the qubit levels) remotely located with light-crossing time \(t_{c}\) get entangled mediated by the electromagnetic field, i.e., the photons they produce. To achieve this, each atom first emits a single photon to create an entangled state between the atomic qubit and the photonic qubit (a frequency qubit). The photons are then directed to a beam splitter to perform two-photon interference. Coincident photon detection at time \(t_{d}\) in the two output ports of the beam splitter projects the atomic qubits onto an entangled state. The desired observable of the atomic qubits is then measured by internal-state control and readout with a total duration \(t_{r}\).
If the experimental setup could be modified such that \(t_{d}+t_{r}<t_{c}\), one would be able to show that the entanglement is established within the light-crossing time between the two atomic qubits, thus falsifying the quantum-controlled classical field description. The implication of this for GME is worth further study.

Figure 3: Decoherence of spins due to magnetic noise. The entanglement of formation \(W[\rho_{\rm exp}(\phi_{\uparrow\uparrow}=\Delta\tau=\pi/2)]\) is plotted with varying \(\tau\) and \(\Delta\). The light red curve is added to guide the eye.

### Two-ion entangling gate as a simulation of GME

The electromagnetic field's ability to stay in quantum superposition correlated with the source is necessary for two-qubit entangling gates based on Coulomb interaction in trapped ions, e.g., the Cirac-Zoller gate [35], the Molmer-Sorensen gate [36; 37], the light-shift gate [38], and fast gates that are not confined within the Lamb-Dicke regime [39]. Starting from \(\frac{1}{2}(\ket{\uparrow\uparrow}+\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow}+ \ket{\downarrow\downarrow})\), during these gates, entangled states between the internal degrees of freedom (spin) and the collective motional degrees of freedom (phonons) are generated. After the motional states in all branches (corresponding to different collective spin states) become the same and are factored out, an entangled state of the two spins is generated. Relative phases among the different branches are coherently accumulated under the joint action of the spin energy, the motional energy, the interaction energy with the control field, and, importantly, the electromagnetic field energy caused by the electromagnetic interaction between the two ions. If there were no such interaction, the center-of-mass mode and the relative motion mode would have the same mode frequency; then \(\phi_{\uparrow\uparrow}+\phi_{\downarrow\downarrow}=\phi_{\uparrow\downarrow}+ \phi_{\downarrow\uparrow}\), hence no entanglement between the two spins would be generated.
If the electromagnetic field is not quantum, but instead admits a mean-field description and takes the same state in all branches, then the situation is the same as if there were no electromagnetic interaction between the two ions (plus a global electric field, which only creates a global phase), hence no entanglement is generated either. Take the experiment in Ref.[39] as an example: two trapped \({}^{171}\)Yb\({}^{+}\) ions (\(\ket{2,3}\) form the qubit levels) get entangled through mutual electrical interaction and interaction with a laser control field. One basic operation of the experiment is the "spin-dependent kick" (SDK) that creates entangled states of spin and motion of each ion, similar to a Stern-Gerlach operation. The SDK evolution operator for the two-qubit system in the rotating frame is \(U_{\rm SDK}=e^{2i\phi(t)}\sigma_{1+}\sigma_{2+}D_{\rm C}(i\eta_{\rm C})+e^{-2i \phi(t)}\sigma_{1-}\sigma_{2-}D_{\rm C}(-i\eta_{\rm C})+\sigma_{1+}\sigma_{2-} D_{\rm R}(i\eta_{\rm R})+\sigma_{1-}\sigma_{2+}D_{\rm R}(-i\eta_{\rm R})\), where \(\sigma_{1,2+(-)}=\ket{\downarrow}\bra{\uparrow}\,(\ket{\uparrow}\bra{\downarrow})\) for spin 1 and 2, \(\phi(t)\) and \(\Delta k\) are related to the laser control field, \(x_{1,2}\) are position operators for ion 1 and 2, \(D_{\rm C,R}(i\eta_{\rm C,R})\) are coherent-state displacement operators for the center-of-mass mode and relative-motion mode, and \(\eta_{C,R}\) are the corresponding Lamb-Dicke parameters. Starting from \(1/2(\ket{\uparrow\uparrow}+\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow}+ \ket{\downarrow\downarrow})\), a combination of several \(U_{\rm SDK}\) operations and free evolution disentangles the motional state and generates phase differences among the different branches.
The final entangled state is \(\ket{\Psi_{e}}=e^{i\Phi_{g}}/2(e^{i\gamma}\ket{\downarrow\downarrow}+e^{-i\gamma}\ket{\uparrow\uparrow})-1/2(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow})\), where \(\Phi_{g}=\pi/2.06\) and \(\gamma\) is a constant in each experiment. The difference of the electromagnetic field in the different branches is crucial to generate \(\Phi_{g}\neq n\pi\) (\(n\) is an integer), i.e., entanglement. As a simulation of GME, the electromagnetic field plays the role of the gravitational field. Different motional states of the two ions play the role of different positional combinations of the masses. The combination of \(U_{\rm SDK}\) and free evolution is the analog of the interaction period (for \(\tau\)) plus "bringing back to the middle position" in GME. ## V Conclusion To conclude, by taking the QFT of the electromagnetic field as a premise, this work proposes and performs a quantum simulation experiment of the GME effect. It could provide a preview of the general experimental procedure of the future GME experiment. Observing that the experimental result cannot distinguish a QFT description from a quantum-controlled classical field one, this work helps to stress the benefits of measuring on the order of the light-crossing time. It also points out that existing experiments on entanglement between remote trapped-ion qubits [22; 23; 24] could be modified to demonstrate such measurements in electromagnetism. The experimental system consists of two spins and the magnetic field they create, which mediates their interaction, in a single atom. The use of the naturally-existing multilevel hyperfine structure in a single ion reduces the experimental cost of simulating the GME experiment. The quantum control techniques presented here could be further developed to achieve complete experimental control of qudit systems in \({}^{171}\)Yb\({}^{+}\).
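As a quick check (our own illustration, not part of the paper's analysis), the pure-state concurrence of \(\ket{\Psi_{e}}\) can be evaluated directly from its amplitudes; it works out to \(|\sin\Phi_{g}|\), which vanishes exactly when \(\Phi_{g}=n\pi\), confirming that \(\Phi_{g}\neq n\pi\) is the entanglement condition:

```python
import numpy as np

def concurrence(phi_g, gamma=0.3):
    # Amplitudes of |Psi_e> in the basis (uu, ud, du, dd); gamma drops out.
    a_uu = np.exp(1j * (phi_g - gamma)) / 2
    a_dd = np.exp(1j * (phi_g + gamma)) / 2
    a_ud = a_du = -0.5
    # Pure-state concurrence: C = 2|a_uu*a_dd - a_ud*a_du| = |sin(phi_g)|
    return 2 * abs(a_uu * a_dd - a_ud * a_du)

print(concurrence(np.pi / 2))   # maximal at phi_g = pi/2
print(concurrence(np.pi))       # vanishes at phi_g = n*pi
```

The constant \(\gamma\) only redistributes phases between the two branches and, as expected, does not affect the degree of entanglement.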
This work is also among the first to abstract a naturally-existing two-spin system from a single \({}^{171}\)Yb\({}^{+}\) ion [31] and perform two-qubit simulation experiments upon it, which may shed new light on quantum simulation using trapped ions [33]. ## VI Appendix ### Brief description of GME. Consider two objects \(a\) and \(b\), each with mass \(m\) and possessing spin-1/2, as shown in Fig.1(a). The radii of the masses are neglected, \(d_{uu}=d_{dd}=d_{0}\), \(d_{ud}=d_{du}>d_{0}\), and the spin-spin interaction between \(a\) and \(b\) is so weak that it can be neglected. Initially one prepares the spin state of each mass to be \(\frac{1}{\sqrt{2}}(\ket{\uparrow}+\ket{\downarrow})\) and holds the positions of \(a\) and \(b\) in the middle of \(u\) and \(d\). Denote the state of the gravitational field corresponding to this configuration by \(\ket{g_{0}}\); the total system then becomes \(\frac{1}{2}(\ket{\uparrow}+\ket{\downarrow})(\ket{\uparrow}+\ket{\downarrow}) \ket{g_{0}}\). Next, one releases \(a\) and \(b\). If there were only one mass, it would be in a spatial superposition of \(u\) and \(d\) (neglecting the time it takes to develop this superposition). As explained in Sec.II, after the gravitational interaction for time \(\tau\) the state becomes (1). Each component of (1) is approximately an eigenstate of the total Hamiltonian and hence only acquires a phase during the interaction period [8]. The phase acquired through the internal (spin) energy \(\hbar\omega_{nj}\) (\(n,j=\uparrow\) or \(\downarrow\)) is ignored, as one could compensate for it by going to the rotating frame. As \(\Phi_{uu}=\Phi_{dd}\) and \(\Phi_{du}=\Phi_{ud}\), we could subtract an irrelevant global phase factor and define the new relative phases in front of each branch as \(\phi_{\uparrow\uparrow}=(\Phi_{uu}-\Phi_{ud})\tau/\hbar,\phi_{\uparrow\downarrow }=0,\phi_{\downarrow\uparrow}=0,\phi_{\downarrow\downarrow}=(\Phi_{uu}-\Phi_{ud })\tau/\hbar\). Thus one obtains the final state (2).
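For a rough feel of the numbers in the branch phase \(\phi_{\uparrow\uparrow}=(\Phi_{uu}-\Phi_{ud})\tau/\hbar\), one can insert Newtonian interaction energies \(\Phi=-Gm^{2}/d\). The mass, separations, and interaction time below are illustrative order-of-magnitude values of the kind discussed in GME proposals, not parameters from this paper:

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s

def branch_phase(m, d_close, d_far, tau):
    # phi = (Phi_uu - Phi_ud) * tau / hbar with Phi = -G m^2 / d
    phi_uu = -G * m**2 / d_close
    phi_ud = -G * m**2 / d_far
    return (phi_uu - phi_ud) * tau / hbar

# Illustrative inputs: m = 1e-14 kg, d_uu = 200 um, d_ud = 450 um, tau = 2.5 s
print(branch_phase(m=1e-14, d_close=200e-6, d_far=450e-6, tau=2.5))
```

With inputs of this order the phase comes out of order one radian over a few seconds, which is why such parameter regimes are considered for a detectable GME signal.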
In the full language of general relativity, there are higher-order corrections to the phases. For example, consider the leading correction term to \(\phi^{\prime}_{\uparrow\uparrow}\), \(\tau E_{\rm m}/\hbar=\tau[\Phi_{uu}\hbar(\omega_{\uparrow}+\omega_{\uparrow})/(mc^{2})]/\hbar\), where \(E_{\rm m}\) can be intuitively thought of as originating from the internal energy contribution to the weight of the system, which thus also couples to gravity [53]. These terms are neglected in the original GME proposals, as \(\hbar(\omega_{n}+\omega_{j})/(mc^{2})\ll 1\) for typical experimental parameters. Interestingly, it is argued that measuring these terms would allow one to take a step forward from concluding the superposition of gravitational fields to concluding the superposition of spacetime [8; 13], as these terms correspond to the gravitational time dilation (red shift) and could be measured by quantum clocks [54; 55]. ### Details of the single-atom GME simulator We first explain entanglement detection in (5). Unlike the gravitational field case, one is not able to disentangle the magnetic field state. Nevertheless, as derived in Sec.VI.4, the QFT Hamiltonian \(H_{F}\) is well approximated by \(H_{dd}\); this means the electromagnetic field state is almost unchanged, as \(H_{dd}\) only affects the spins, not the electromagnetic field. Starting from \(H_{dd}\), one obtains \(H_{h}\), and hence \(H_{zz}\) in the simulator. This results in \(|M_{\uparrow\uparrow}\rangle\approx|M_{\uparrow\downarrow}\rangle\approx|M_{ \downarrow\uparrow}\rangle\approx|M_{\downarrow\downarrow}\rangle\approx|M_{0}\rangle\). So, tracing out the field state in (11), one obtains a density matrix that is negligibly different from the density matrix after disentangling the field. Note, however, that the tiny difference between the field states in different branches is required to generate the different phases, and hence the final entanglement, as explained in Sec.VI.4.
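To see just how small the neglected red-shift factor \(\hbar(\omega_{n}+\omega_{j})/(mc^{2})\) from the GME discussion above is, a back-of-the-envelope estimate suffices. The transition frequency and mass below are illustrative stand-ins (a hyperfine-scale splitting and a mesoscopic mass), not values taken from the paper:

```python
import numpy as np

hbar, c = 1.055e-34, 3.0e8         # SI units
omega = 2 * np.pi * 10e9           # assumed internal transition frequency, rad/s
m = 1e-14                          # assumed mesoscopic mass, kg

# Correction factor hbar*(omega_n + omega_j)/(m c^2) for the up-up branch
ratio = hbar * (omega + omega) / (m * c**2)
print(ratio)
```

The result sits tens of orders of magnitude below unity, which is why the original GME proposals drop these terms despite their conceptual interest for probing spacetime superposition.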
To further obtain the degree of entanglement, for a two-qubit state \(\rho\) (pure or mixed), one could calculate its entanglement of formation \(W(\rho)\) [47]. A larger \(W\) indicates a higher degree of entanglement: \(W=0\) indicates a separable state, while \(W=1\) indicates a maximally entangled state. We then explain the rotating operator (8). Observing that (8) is an entangling operation might raise a concern: is the entanglement of the spins due to the quantum properties of the magnetic field, or merely an artifact of the rotating-frame transformation? This is not a problem: as already mentioned, the logic of this work is to take the quantum nature of electromagnetic fields as a premise and conduct a simulation experiment that might bring insights to the GME experiment. Transforming to the rotating frame merely makes the simulation closer to the original GME experiment, i.e., their final entangled states have the same form. We could have stayed in the lab frame and utilized the nonseparable evolution under \(H_{f}\) to achieve an entangled spin state, but the form of the entangled state would not be the same as (5). The introduction of \(f_{O}\) merely "tailored" the naturally-existing entangling interaction between the spins to the form of \(H_{zz}\) during the interaction period. ### Experimental setup The magnetic field \(B_{0}\) defining the quantization axis (\(z\) axis) is created by the electric current in a Helmholtz coil. The electric current is produced by a dc power supply and is stabilized at 1.20152\(\pm\)0.00002 A by a proportional-integral (PI) controller. This allows us to determine the resonance frequencies of all three transitions (by Ramsey curves) within 3 kHz during data collection. Hence we obtain the frequencies of the on-resonance microwaves to be applied (Table.1), and \(B_{0}\) (used to design the pulse sequence and calculate \(R\)) is calculated to be 5.615 Gs via the transition \(|3\rangle\leftrightarrow|1\rangle\).
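The entanglement of formation \(W(\rho)\) used for the detection analysis has a closed form for two qubits, Wootters' formula \(W=h\big((1+\sqrt{1-C^{2}})/2\big)\) with \(C\) the concurrence and \(h\) the binary entropy. The following sketch (our own illustration, not the paper's code) implements it:

```python
import numpy as np

def entanglement_of_formation(rho):
    """Wootters' closed-form entanglement of formation for a 4x4 two-qubit
    density matrix rho (pure or mixed)."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)                      # spin-flip operator
    rho_tilde = flip @ rho.conj() @ flip
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    c = min(1.0, max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))  # concurrence
    x = (1 + np.sqrt(1 - c * c)) / 2
    if x >= 1.0:                                # c == 0 up to rounding: W = 0
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))   # binary entropy

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)              # (|uu> + |dd>)/sqrt(2)
print(entanglement_of_formation(np.outer(bell, bell)))   # close to 1
print(entanglement_of_formation(np.eye(4) / 4))          # 0 for maximally mixed
```

This reproduces the stated limits: \(W=1\) for a maximally entangled state and \(W=0\) for a separable one.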
From simulation results, the final-state fidelity corresponding to, e.g., \(\rho(\phi_{\uparrow\uparrow}=\pi/2,\tau=2\ \mu\)s) is still above 0.97 when there is a 3 kHz mismatch in all three resonance frequencies (\(|3\rangle\leftrightarrow|4\rangle\), \(|3\rangle\leftrightarrow|1\rangle\), \(|3\rangle\leftrightarrow|2\rangle\)), plus a 2 mGs miscalibration (corresponding to the 3 kHz mismatch) in the magnetic field. This allows us to achieve high-fidelity experimental results. The frequencies of the applied on-resonance microwaves \(\omega=\omega_{0}+\omega^{\prime}\) (\(\omega_{0}=2\pi\times 12.61157173\) GHz), the amplitudes of the Rabi frequencies \(|\Omega|\), and the coherence times \(T_{2}^{*}\) of the relevant transitions within the four-level subspace (Fig.1(b)) are given in Table.1. As an example, the Rabi curve of \(|3\rangle\leftrightarrow|4\rangle\) is shown in Fig.4(a), and the Ramsey measurement of \(T_{2}^{*}\) of \(|2\rangle\leftrightarrow|4\rangle\) is shown in Fig.4(b). The \(T_{2}^{*}\) between different Zeeman sublevels could be easily upgraded to above 1 ms using triggering to ac-line techniques [56]. Details of the quantum-control technique for the four-level/two-qubit system are given below. We first introduce a mapping operator that simplifies the discussion. Define the "zz basis" where \(\left|\uparrow\right\rangle\left|\uparrow\right\rangle_{\rm zz}=[1,0,0,0]^{ \rm T}\), \(\left|\uparrow\right\rangle\left|\downarrow\right\rangle_{\rm zz}=[0,1,0,0]^{ \rm T}\), \(\left|\downarrow\right\rangle\left|\uparrow\right\rangle_{\rm zz}=[0,0,1,0]^{ \rm T}\), \(\left|\downarrow\right\rangle\left|\downarrow\right\rangle_{\rm zz}=[0,0,0,1]^{ \rm T}\), the subscript indicating the zz basis. Also define the "number basis" where \(|1\rangle_{\rm n}=[1,0,0,0]^{\rm T}\), \(|2\rangle_{\rm n}=[0,1,0,0]^{\rm T}\), \(|3\rangle_{\rm n}=[0,0,1,0]^{\rm T}\), \(|4\rangle_{\rm n}=[0,0,0,1]^{\rm T}\), the subscript "n" indicating the number basis.
Define a mapping operator \(R\) (depends on \(B_{0}\)) satisfying \[R|k\rangle_{\rm n}=|k\rangle_{\rm zz}\quad(k=1,2,3,4), \tag{12}\] i.e., it maps the matrix form of an operator in number basis to that in zz basis, and vice versa. The explicit form of \(R\) reads \[R=\begin{pmatrix}1&0&0&0\\ 0&\cos(\theta/2)&\sin(\theta/2)&0\\ 0&-\sin(\theta/2)&\cos(\theta/2)&0\\ 0&0&0&1\end{pmatrix}, \tag{13}\] where \(\theta=2\text{tan}^{-1}(\lambda)\) and \[\lambda=\frac{-B_{0}\gamma_{a}+B_{0}\gamma_{b}-\sqrt{A^{2}+B_{0}^{2}\gamma_{a }^{2}+B_{0}^{2}\gamma_{b}^{2}-2B_{0}^{2}\gamma_{a}\gamma_{b}}}{A}. \tag{14}\] When \(A\gg B_{0}\gamma_{a,b}\), as in our case, \(\theta\approx-\pi/2\). Nevertheless we still use the explicit form of (13). During step 1) in the experiment, the Hamiltonian in \(f_{O}\) after rotating wave approximation (RWA) reads \[H_{c}=c_{13}(t)|1\rangle\langle 3|+c_{23}(t)|2\rangle\langle 3|+c_{34}(t)|3 \rangle\langle 4|+\text{H.c.}, \tag{15}\] where \(c_{13,23,34}\) are complex numbers determined by the strength and phase of the on-resonance microwave control fields. We tune the microwave field parameters to realize \(U_{p}\) in step 1 of the experiment. \(\theta_{13}=2\sin^{-1}(1/\sqrt{3})\), and \[\theta_{23}= 2\cos^{-1}\{\frac{-\sqrt{2}/2[1-\tan^{-1}(\theta/2)]}{\cos( \theta/2)\tan^{-1}(\theta/2)+\sin(\theta/2)}\}. \tag{16}\] Step 2) is realized by free evolution plus corresponding phase correction in the following microwave readout pulses, as explained in Sec.III. With \(H_{f}\) and the available control fields, the four-level system is completely controllable, as could be proved following Ref.[57]. This means arbitrary SU(4) quantum operations could be achieved in this subspace. In this work, we only pick out several special pieces from the complete control-toolbox, namely, a state-to-state operation achieving (10), \(\pi\) and \(\pi/2\) rotations among the available transitions, and the entangling operation (9). 
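As a sanity check on the mapping operator, \(R\) of (13) is a rotation in the \(\{|2\rangle,|3\rangle\}\) block and hence orthogonal, and \(\theta\to-\pi/2\) in the \(A\gg B_{0}\gamma_{a,b}\) regime, as stated above. The parameter values below are arbitrary illustrations of that regime, not the experimental ones:

```python
import numpy as np

def mapping_R(A, B0, ga, gb):
    """Build R of Eq. (13) with theta = 2*arctan(lambda) from Eq. (14)."""
    lam = (-B0 * ga + B0 * gb
           - np.sqrt(A**2 + B0**2 * ga**2 + B0**2 * gb**2
                     - 2 * B0**2 * ga * gb)) / A
    th = 2 * np.arctan(lam)
    c, s = np.cos(th / 2), np.sin(th / 2)
    R = np.array([[1, 0, 0, 0],
                  [0, c, s, 0],
                  [0, -s, c, 0],
                  [0, 0, 0, 1]])
    return R, th

R, th = mapping_R(A=1.0, B0=1e-3, ga=0.7, gb=-1.4)   # toy A >> B0*gamma values
print(np.allclose(R @ R.T, np.eye(4)))               # R is orthogonal
print(th)                                            # close to -pi/2
```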
It is promising to demonstrate complete controllability in this system by realizing a universal gate set, i.e., arbitrary single-spin rotations and CNOT gates via, e.g., numerical optimal control [58]. Or in the language of qudit, realizing arbitrary single-qudit rotation gates [25; 27]. The fidelities of the controls and the coherence time could be further upgraded by magnetic shielding [56]. ### Magnetic dipole-dipole interaction mediated by electromagnetic field Following Ref. [45; 46], when the two magnetic dipoles (spin-1/2) \(\hat{m}_{i}=\sum_{x,y}\tilde{m}_{i}^{x}\hat{\tau}_{i}^{xy}\) (\(x,y=\uparrow\) or \(\downarrow\), \(i=1,2\)), with the dipole moment amplitude \(\tilde{m}_{i}^{xy}:=\langle x|\hat{m}_{i}|y\rangle_{i}\), and \(\hat{\tau}_{i}^{xy}:=|x\rangle_{i}\langle y|\), are placed in the EM field, they change the original field dynamics. This influence generates the interaction between the two dipoles. In the Heisenberg picture, the quantized magnetic field in some volume \(V\) is \[\hat{\mathbf{B}}(\mathbf{x},t)=\sum_{\mathbf{k}\sigma}i\hat{e}_{\mathbf{k} \tilde{\sigma}}Z_{\mathbf{k}}[\hat{a}_{\mathbf{k}\sigma}(t)e^{i\mathbf{k} \cdot\mathbf{x}}-\text{H.c.}], \tag{17}\] where \(Z_{k}=\sqrt{\mu_{0}\omega_{\mathbf{k}}/(2V)}\), \(\hat{e}_{\mathbf{k}\tilde{\sigma}}:=\hat{e}_{\mathbf{k}}\times\hat{e}_{ \mathbf{k}\sigma}\) are polarization directions. It interacts with the dipole through the interaction Hamiltonian \(-\hat{m}_{i}\cdot\hat{\mathbf{B}}(\mathbf{x},t)\). The part of the field resulting from the presence of dipole 1 reads \[\begin{split}\hat{\mathbf{B}}_{D1}=&\sum_{\mathbf{k} \sigma,xy}\hat{e}_{\mathbf{k}\tilde{\sigma}}Z_{\mathbf{k}}(g_{1,\mathbf{k} \sigma}^{xy})^{*}\\ &\times e^{i\mathbf{k}\cdot\mathbf{x}}\int_{0}^{t}dse^{-i\omega_ {\mathbf{k}}s}\hat{\tau}_{1}^{yx}(t-s)+\text{H.c.},\end{split} \tag{18}\] where \(g_{i,\mathbf{k}\sigma}^{xy}=-i(\tilde{m}_{i}^{xy}\cdot\hat{e}_{\mathbf{k} \tilde{\sigma}})Z_{k}e^{i\mathbf{k}\cdot\mathbf{x}_{i}}\). 
Note \(\hat{\tau}_{1}^{yx}(t-s)\) are operators in the Heisenberg picture; they evolve in time (through the Heisenberg equation) and in general act non-trivially on both spin and field states. The interaction of this field with dipole 2 results in \[\begin{split} H_{1\to 2}&=-\hat{m}_{2}(t)\cdot\hat{\mathbf{B}}_{ D1}(\mathbf{x}_{2},t)\\ &=\sum_{uv,xy}\int_{0}^{t}dsD_{1\to 2}^{yx,uv}(s)\hat{\tau}_{1}^{yx}(t-s) \cdot\hat{\tau}_{2}^{uv}(t)+\text{H.c.},\end{split} \tag{19}\] where \[\begin{split} D_{1\to 2}^{yx,uv}(s)&:=\sum_{\mathbf{k} \sigma}-i(g_{1,\mathbf{k}\sigma}^{xy})^{*}g_{2,\mathbf{k}\sigma}^{uv}e^{-i \omega_{\mathbf{k}}s}\\ &:=-i\int_{0}^{\infty}\frac{d\omega}{2\pi}J_{1\to 2}^{yx,uv}( \omega)e^{-i\omega s},\end{split} \tag{20}\] and \[J_{1\to 2}^{yx,uv}(\omega):=2\pi\sum_{\mathbf{k}\sigma}(g_{1,\mathbf{k} \sigma}^{xy})^{*}g_{2,\mathbf{k}\sigma}^{uv}\delta(\omega-\omega_{\mathbf{k}}). \tag{21}\] For the free-space EM field, \[\begin{split} J_{1\to 2}^{yx,uv}(\omega)=&\frac{\mu_{0}}{ 2\pi r^{3}}\{\tilde{m}_{1}^{yx}\cdot\tilde{m}_{2}^{uv}[\eta^{2}\sin\eta+\eta \cos\eta-\sin\eta]\\ &-3(\tilde{m}_{1}^{yx}\cdot\hat{e}_{\mathbf{r}})(\tilde{m}_{2}^{ uv}\cdot\hat{e}_{\mathbf{r}})\\ &\times[\frac{1}{3}\eta^{2}\sin\eta+\eta\cos\eta-\sin\eta]\}, \end{split} \tag{22}\] where \(\eta:=kr=\omega r/c\), and \(r=|\mathbf{x}_{1}-\mathbf{x}_{2}|\) is the distance between the two dipoles. The interaction (19) has a retarded form, and \(D_{1\to 2}^{yx,uv}(s)\) characterizes the propagation of the interaction. One could similarly obtain \(H_{2\to 1}\), hence the QFT-interaction Hamiltonian \(H_{\text{F}}=1/2(H_{1\to 2}+H_{2\to 1})\). From the above derivation, we see that \(H_{\text{F}}\) in general entangles the spins with the field. The field is perturbed and deviates from the initial state during the interaction, as can be seen from (18). Because \(D_{1\to 2}^{yx,uv}(s)\) approaches zero rapidly with increasing \(s\) [46], the interaction is approximately local in time.
As \(D_{1\to 2}^{yx,uv}(s)\) is of second order in \(g_{i,\mathbf{k}\sigma}^{xy}\), we could make the approximation \[\hat{\tau}_{1}^{yx}(t-s)\approx\hat{\tau}_{1}^{yx}(t)\cdot e^{i\Omega_{1}^{yx }s}, \tag{23}\] where \(\Omega_{1}^{yx}:=E_{1}^{(y)}-E_{1}^{(x)}\) is the energy difference between the two levels \(|y,x\rangle_{1}\), so the interaction becomes time-local. This approximation means we could neglect the higher-order interaction between the dipole and the field. As \(r/c\) is small, the time to build up the dipole-dipole interaction is small compared to the interaction period in the experiment. Thus we could extend the time integral to infinity (Markovian approximation), and the interaction reduces to the time-independent form \[H_{1\to 2}=\sum_{xy,uv}[K_{1\to 2}^{yx,uv}(\Omega_{1}^{yx})-\frac{i}{2}J_{1\to 2}^{ yx,uv}(\Omega_{1}^{yx})]\hat{\tau}_{1}^{yx}\hat{\tau}_{2}^{uv}, \tag{24}\] where \[\begin{split} K_{1\to 2}^{yx,uv}(\Omega)=\frac{\mu_{0}}{4\pi r^{3}}\{ \tilde{m}_{1}^{yx}\cdot\tilde{m}_{2}^{uv}\\ [-\eta^{\prime 2}\cos\eta^{\prime}+\eta^{\prime}\sin\eta^{\prime}+ \cos\eta^{\prime}]-3(\tilde{m}_{1}^{yx}\cdot\hat{e}_{\mathbf{r}})(\tilde{m}_{2 }^{uv}\cdot\hat{e}_{\mathbf{r}})\\ \times[-\frac{1}{3}\eta^{\prime 2}\cos\eta^{\prime}+\eta^{\prime}\sin \eta^{\prime}+\cos\eta^{\prime}]\},\end{split} \tag{25}\] and \(\eta^{\prime}=\Omega r/c\). \(H_{2\to 1}\) can be similarly simplified. The total interaction is then \[H_{F}^{\prime}=\sum_{xy,uv}(G_{yx,uv}^{(P)}+G_{yx,uv}^{(D)})\hat{\tau}_{1}^{yx} \hat{\tau}_{2}^{uv}, \tag{26}\] where \[G_{yx,uv}^{(P)}=\frac{1}{2}[K_{1\to 2}^{yx,uv}(\Omega_{1}^{yx})+K_{2\to 1}^{ uv,yx}(\Omega_{2}^{uv})], \tag{27}\] and \[G_{yx,uv}^{(D)}=\frac{1}{4i}[J_{1\to 2}^{yx,uv}(\Omega_{1}^{yx})+J_{2\to 1}^{ uv,yx}(\Omega_{2}^{uv})]. \tag{28}\] From the simplified (26), we see that the field state is almost unperturbed.
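A quick numerical check (our own, not from the paper) of the small-\(\eta\) behavior used next: the square brackets in the spectral density (22) vanish like \(\eta^{3}\) or faster, while the brackets in the dispersive coefficient (25) tend to 1. This is exactly why only the static dipole-dipole term survives as \(r/c\to 0\):

```python
import numpy as np

def j_brackets(eta):
    # The two square brackets of the spectral density (22)
    return (eta**2 * np.sin(eta) + eta * np.cos(eta) - np.sin(eta),
            eta**2 * np.sin(eta) / 3 + eta * np.cos(eta) - np.sin(eta))

def k_brackets(eta):
    # The two square brackets of the dispersive coefficient (25)
    return (-eta**2 * np.cos(eta) + eta * np.sin(eta) + np.cos(eta),
            -eta**2 * np.cos(eta) / 3 + eta * np.sin(eta) + np.cos(eta))

for eta in (1e-1, 1e-2, 1e-3):
    print(eta, j_brackets(eta), k_brackets(eta))
# The J-brackets shrink rapidly while the K-brackets approach 1, so the
# dissipative part G^(D) drops out and G^(P) reduces to the static K(0).
```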
In our experiment, \(\Omega_{1,2}^{yx}r/c\to 0\), so all \(G_{xy,uv}^{(D)}\) approach zero, and all \(G_{yx,uv}^{(P)}\) approach \(K_{1\to 2}^{yx,uv}(0)\). Thus \[\hat{H}_{1\leftrightarrow 2}=\frac{\mu_{0}}{4\pi r^{3}}[\hat{m}_{1}\cdot\hat{m}_{2} -3(\hat{m}_{1}\cdot\hat{e}_{\mathbf{r}})(\hat{m}_{2}\cdot\hat{e}_{\mathbf{r}})], \tag{29}\] which can be written in the form \(\lambda[3(\mathbf{I}_{1}\cdot\mathbf{r})(\mathbf{I}_{2}\cdot\mathbf{r})- \mathbf{I}_{1}\cdot\mathbf{I}_{2}]\); this is just the approximation \(H_{dd}\) used in the main text. From \(H_{dd}\) one further obtains \(H_{zz}\). The initial state \(1/2(|\uparrow\uparrow\rangle+|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle+|\downarrow\downarrow\rangle)\) evolves to \(1/2(e^{-i\phi_{\uparrow\uparrow}}\left|\uparrow\uparrow\right\rangle+|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle+e^{-i\phi_{\downarrow\downarrow}}\left|\downarrow\downarrow\right\rangle)\) under \(H_{zz}\). This could be described as resulting from the phase accumulation in different branches by a quantum-controlled classical field, without giving a Hilbert space to the field. The accumulated phases in each branch could be calculated by the classical magnetic dipole interaction. Thus we see that, due to \(r/c\) being small, the QFT description and the quantum-controlled classical field one are indistinguishable. ## Acknowledgement This work is supported by the Key-Area Research and Development Program of Guang Dong Province under Grant No.2019B030330001, the National Natural Science Foundation of China under Grant No.11774436, No.11974434 and No.12074439, the Natural Science Foundation of Guangdong Province under Grant 2020A1515011159, the Science and Technology Program of Guangzhou, China 202102080380, and the Central-leading-local Scientific and Technological Development Foundation 2021Szvup172. Ji Bian receives support from the China Postdoctoral Science Foundation under Grant No.2021M703768.
Le Luo acknowledges support from the Guangdong Province Youth Talent Program under Grant No.2017GC010656. The authors thank Dr. Rongxin Miao and Dr. Longlong Feng for helpful discussions.
2305.18659
Well-posedness for a transmission problem connecting first and second-order operators
We establish the existence and uniqueness of viscosity solutions within a domain $\Omega\subseteq\mathbb R^n$ for a class of equations governed by elliptic and eikonal type equations in disjoint regions. Our primary motivation stems from the Hamilton-Jacobi equation that arises in the context of a stochastic optimal control problem.
Héctor A. Chang-Lara
2023-05-29T23:47:08Z
http://arxiv.org/abs/2305.18659v1
# Well-posedness for a transmission problem connecting first and second-order operators ###### Abstract. We establish the existence and uniqueness of viscosity solutions within a domain \(\Omega\subseteq\mathbb{R}^{n}\) for a class of equations governed by elliptic and eikonal type equations in disjoint regions. Our primary motivation stems from the Hamilton-Jacobi equation that arises in the context of a stochastic optimal control problem. Key words and phrases:Discontinuous dynamics, transmission problems, viscosity solutions, comparison principle 2020 Mathematics Subject Classification: 35D40, 35B51, 35F21, 49L12, 49L25 ## 1. Introduction Let \(\Omega\subseteq\mathbb{R}^{n}\) be an open set, and let \(\Omega_{B}\subseteq\Omega\) also be open and non-empty, with \(\Omega_{E}:=\Omega\setminus\overline{\Omega_{B}}\) being non-empty as well. We consider a stochastic optimal control problem in which the objective is to minimize the expected time taken by a particle to travel from an initial position \(x\in\Omega\) to its first exit from \(\Omega\). The controller can determine the direction of the particle at any given moment when the particle is in the region \(\overline{\Omega}_{E}\cap\Omega\), maintaining a constant speed, assumed to be one without loss of generality. In the complementary region \(\Omega_{B}\), the particle follows a Brownian motion. These dynamics allow the particle to switch between the regions multiple times before exiting. See Figure 1. (Figure 1: A possible realization of a path from \(x\) to \(\partial\Omega\). The dynamics on \(\Omega_{E}\) are chosen by the controller, and the particle follows a Brownian motion in \(\Omega_{B}\).) The analysis of the optimal strategy is closely related to the Hamilton-Jacobi-Bellman equation satisfied by the value function \(u:\overline{\Omega}\to\mathbb{R}\), defined as the least expected exit time when the particle starts at \(x\).
In the regions \(\Omega_{E}\), \(\Omega_{B}\), and \(\partial\Omega\), the value function satisfies an eikonal equation, a Poisson equation, and a boundary condition, respectively \[\begin{cases}|Du|=1\text{ in }\Omega_{E},\\ \frac{1}{2}(-\Delta)u=1\text{ in }\Omega_{B},\\ u=0\text{ on }\partial\Omega.\end{cases}\] In order to obtain a unique solution, we anticipate an additional transmission condition over the interface \(\Gamma:=\partial\Omega_{E}\cap\Omega=\partial\Omega_{B}\cap\Omega\). The primary goal of this work is to reveal this transmission condition from a partial differential equations point of view. Moreover, we establish the existence and uniqueness of viscosity solutions for a broader family of equations, using the model described above as the principal guiding example. A discrete implementation of this problem suggests that over the interface \(\Gamma\) either one of the equations must hold. Indeed, we have observed that numerical solutions converge to the same limit whether or not the discretization includes points on the interface. When there is an interface point, a natural condition is to consider the two possible dynamics, eikonal or random walk, and choose the one leading to the least expected time. We plan to present a deeper analysis of our findings in a forthcoming work. The relaxed notion of the problem, where either of two given equations can hold over the boundary, has been used in the fields of optimal control theory and viscosity solutions for quite some time [3, Section V.4] and [16, Section 7]. It is the natural requirement for achieving stability of viscosity solutions.
Specifically, for the sub-solution equations, we must consider \[\begin{cases}|Du|-1\leq 0\text{ in }\Omega_{E},\\ \frac{1}{2}(-\Delta)u-1\leq 0\text{ in }\Omega_{B},\\ \min(|Du|-1,\frac{1}{2}(-\Delta)u-1)\leq 0\text{ on }\Gamma.\end{cases}\] Meanwhile, for the super-solution equations, we reverse the direction of all inequalities and replace the minimum with a maximum in the expression over the interface \[\begin{cases}|Du|-1\geq 0\text{ in }\Omega_{E},\\ \frac{1}{2}(-\Delta)u-1\geq 0\text{ in }\Omega_{B},\\ \max(|Du|-1,\frac{1}{2}(-\Delta)u-1)\geq 0\text{ on }\Gamma.\end{cases}\] A solution is then a function that simultaneously satisfies both sub-solution and super-solution conditions. Colloquially, we say that \[\begin{cases}|Du|-1=0\text{ in }\Omega_{E},\\ \frac{1}{2}(-\Delta)u-1=0\text{ in }\Omega_{B},\\ \text{Either }|Du|-1\text{ or }\frac{1}{2}(-\Delta)u-1=0\text{ on }\Gamma.\end{cases} \tag{1.1}\] Although it might appear that the condition over the interface is too weak to guarantee the uniqueness of solutions, our aim is to demonstrate that, in the context of viscosity solutions and flat interfaces, this is surprisingly not the case. We will do this by deducing a stronger equation for the interface and establishing a comparison principle. There are several challenges associated with the comparison principle for this problem. Firstly, note that the operator governing this equation over \(\Omega\) as a whole is discontinuous across \(\Gamma\). This lack of translation invariance renders the comparison principle a non-trivial question. On the other hand, we may hope to apply the comparison principle for general boundary conditions, as established by Barles in [4]. However, the eikonal operator on \(\Gamma\) is not monotone in the exterior normal direction to \(\partial\Omega_{B}\), which prevents the direct application of such theory. 
### Main results In an effort to enhance the applicability of our results to related problems, we have extended our hypotheses beyond the eikonal/Poisson equations illustrated in this introduction. The corresponding hypotheses will be announced in each section. For the moment, and to keep things simple, let us state our contributions in the context of the example we have already discussed. We demonstrate, as a consequence of Theorem 3.1, that for any \(C^{2}\) regular interface \(\Gamma\), the problem (1.1) is equivalent to the following stronger equation \[\begin{cases}|Du|-1=0\text{ in }\Omega_{E}\cup\Gamma,\\ \frac{1}{2}(-\Delta)u-1=0\text{ in }\Omega_{B}.\end{cases} \tag{1.2}\] Colloquially speaking, we could say that the eikonal mechanism dominates the dynamics at the interface. For Theorem 3.1 we mainly need coercivity of the second-order operator, resulting from uniform ellipticity. For flat interfaces we establish a comparison principle for the equation (1.2) in Corollary 4.3. Our general comparison principle is stated in Theorem 4.1 for continuous sub- and super-solutions whose corresponding equations are separated by some gap (see the hypotheses (Sub) and (Sup) at the beginning of Section 4). This gap can be taken to be just zero whenever the operators satisfy some further coercivity-type assumptions, such as in the case of convex operators (the case in the introduction), parabolic problems, and equations arising from geometric discounted costs; see Corollary 4.3 and Remark 4.2. Theorem 4.1 also relies on further hypotheses on the operators (see the hypotheses (\(H_{-}\)) and (\(H_{+}\)), also at the beginning of Section 4). In the preliminary section we have included, besides the main definitions, a few results which are direct applications of the classical theory. For instance, the existence of viscosity solutions by Perron's method, stated in Theorem 2.7, relies on the global regularity for eikonal-type and uniformly elliptic equations.
### Related work One of the main examples of problems that combine disjoint regions with diffusion and eikonal-type regimes emerges in the theory of singular stochastic control, as seen in Chapter VIII of the book by Fleming and Soner [21]. The representative equation for the value function takes the following form, known as a _gradient constrained problem_ \[\max(H(Du),Lu)=0\text{ in }\Omega.\] In the equation above, \(L\) is a second-order elliptic operator, and \(H\) is typically a convex function. A stochastic control problem that motivates this equation for \(Lu=\frac{1}{2}(-\Delta)u-1\) and \(H(Du)=|Du|-1\) can be described as follows: We aim to minimize the expected exit time of a particle starting at \(x\in\Omega\), such that at each instant, we can decide to move either with Brownian motion or with speed one in a chosen direction. In a few words, this is an extension of the problem in the introduction where the controller can also conveniently choose the region \(\Omega_{B}\). As usual, the value function \(u\) represents the least expected exit time when the particle starts from \(x\). From a dynamic programming principle, we derive the Hamilton-Jacobi equation of singular control provided above. This model has significant applications in Merton's portfolio problem and spacecraft control, with multiple references available in [21, Section VIII.7]. Other motivations for gradient constrained problems, arising from variational inequalities, can be found in applications to the elastoplasticity of materials [44]. The survey [20] explores the connections with recent developments for the obstacle problem. The literature concerning the analysis of solutions for gradient constrained problems with a convex \(H\) is extensive. Notable works include the \(C^{1,1}\) optimal regularity of solutions by Evans [18, 19], Wiegner [47], Ishii and Koike [30], and Soner and Shreve [42, 41]. Brezis and Sibony [10] show the equivalence with an obstacle problem.
Recent works include those by Andersson, Shahgholian, and Weiss [1], Hynd [23, 24, 25], Hynd and Mawi [26], and Safdari [37, 38, 39]. In the last few years, attention has also been given to cases where \(H\) is non-convex, motivated by applications in optimal dividend strategies for multiple insurances [2]. See Safdari [40] and our own collaboration with Pimentel [15] for the analysis of solutions. In the previous scenario, the solution is the one that determines the regions where either the first-order or the second-order operator is active. Furthermore, the solution must globally satisfy \(Lu\leq 0\) and \(H(Du)\leq 0\), which imposes significant rigidity on the function. The problem considered in this paper fixes the regions and does not assume global relations like in the gradient constrained problem; as a result, solutions are expected to be more flexible across this interface. In our case, we no longer expect better regularity than Lipschitz continuity; this is illustrated by the examples in Sections 1.3.2 and 2.2. Another research direction related to our model involves transmission problems. These typically deal with differential equations over given disjoint domains connected by some prescribed condition over their common boundaries. Borsuk's book [9] provides a detailed exposition of second-order problems. Some recent regularity results have been established by Caffarelli, Soria-Carro, and Stinga in [13], and by Soria-Carro and Stinga in [43]. Problems connecting operators with different (fractional) orders have been studied, for instance, by Kriventsov in [32], D'Elia, Perego, Bochev, and Littlewood in [17], and Capana and Rossi in [14]. An intermediate step in our proof of Theorem 3.1 involves uncovering a transmission-type equation for our problem (Lemma 3.2, Corollary 3.3, and Lemma 3.4). The works referenced in the previous paragraph feature some ellipticity condition on both sides of the interface.
Problems governed by hyperbolic or eikonal-type operators on both sides are also very relevant in applications and have been studied recently. These works are motivated by models in deterministic optimal control. The contributions we have in mind include those by Barles, Briani, and Chasseigne [5, 6], Lions and Souganidis [34, 35], Imbert and Monneau [27], and more recently Barles, Briani, Chasseigne, and Imbert [7]. Our weak formulation on the interface is analogous to the so-called _natural_ junction condition found in [7, Equations (1.3) and (1.4)]. In a recent publication by Imbert and Nguyen [28], the authors consider junction problems with second order operators on either side of the interface or junction. The model in this case assumes that the equations degenerate to first order problem at the junction. As in the models discussed in the previous paragraph, there is an additional junction condition from where it is shown that the problem is well posed. Similar to one of our results, an intermediate step consists the equivalence between a relaxed and a strong notion of viscosity solutions (know as flux limited solution). In contrast to the first order problems, the presence of uniformly elliptic effects up to the interface seems to make our model rigid enough to yield uniqueness of solutions by itself, without additional conditions on the interface which in this case may lead to an over-determined ill-posed problem. ### Main ideas In this brief section we aim to emphasize the fundamental points that underlie the main theorems presented in this paper. We acknowledge that a significant portion of the article is devoted to discussing technical modifications on well-established classical notions in the theory of viscosity solutions, these have been included for the sake of completeness. However, this may obscure the novel ideas and challenges that we have encountered in our work and will appear toward the Section 3 and the Section 4. 
To address this concern, we would like to provide a brief discussion in this introduction. The upcoming presentation is informal and assumes a certain level of familiarity with the concept of viscosity solutions, which will be revisited in Section 2. #### 1.3.1. Strong equation To show that the weak equation (1.1) implies the strong equation (1.2), we demonstrate as an intermediate step that (1.1) can also be characterized with test functions that may have discontinuous gradients at the interface. In other words, they belong to the following space of functions (here \(B_{r}(x_{0})\subseteq\Omega\)) \[C(B_{r}(x_{0}))\cap C^{2}(B_{r}(x_{0})\cap(\Omega_{E}\cup\Gamma))\cap C^{2}(B _{r}(x_{0})\cap(\Omega_{B}\cup\Gamma)).\] Whenever the contact occurs at the interface, we find that either the eikonal equation holds on the Brownian side, or there is a transmission condition between the normal derivatives. To be more specific, we show that (1.1) implies the following problem in an appropriate viscosity formulation \[\begin{cases}|Du|-1=0\text{ in }\Omega_{E},\\ \frac{1}{2}(-\Delta)u-1=0\text{ in }\Omega_{B},\\ \text{Either }|Du_{B}|-1\text{ or }\partial_{\nu}u_{E}-\partial_{\nu}u_{B}=0 \text{ on }\Gamma.\end{cases}\] In the equation above, we have denoted by \(\nu\) the interior normal of \(\Omega_{B}\) over \(\Gamma\) (or the exterior normal of \(\Omega_{E}\) over \(\Gamma\)). Moreover, \(u_{B}\) and \(u_{E}\) represent the restrictions of \(u\) to \(\Omega_{B}\) and \(\Omega_{E}\), respectively. Geometrically, \(\partial_{\nu}u_{E}-\partial_{\nu}u_{B}\) is positive if the graph of \(u\) forms a concave angle along \(\Gamma\), and negative if the angle is instead convex. #### 1.3.2. Existence and uniqueness of solutions in one dimension Let \(\Omega=(-1,1)\), \(\Omega_{E}=(-1,0)\), \(\Omega_{B}=(0,1)\), and \(\Gamma=\{0\}\). 
Our goal is to see that there is at most one viscosity solution \(u\in C(\overline{\Omega})\) for \[\begin{cases}|u^{\prime}|-1=0\text{ in }\Omega_{E},\\ -u^{\prime\prime}=0\text{ in }\Omega_{B},\\ \text{Either }|u^{\prime}|-1\text{ or }-u^{\prime\prime}=0\text{ on }\Gamma,\\ u(-1)=0,\\ u(1)=\alpha.\end{cases} \tag{1.3}\] The two equations and the condition at \(x=-1\) indicate that for some parameter \(\beta\in[-1,0]\) \[u(x)=\begin{cases}1+\beta-|x-\beta|\text{ in }\Omega_{E},\\ \alpha x+(1+2\beta)(1-x)\text{ in }\Omega_{B}.\end{cases}\] See Figure 2. We would like to see that there is at most one \(\beta\) for which \(u\) is a solution. **Case 1:**\(\alpha\geq 0\). If \(\beta<0\) then \(u\) is not a super-solution because around zero \[u(x)=u(0)+\begin{cases}-x\text{ if }x<0,\\ \gamma x\text{ if }x\geq 0\end{cases},\qquad\gamma=\alpha-(1+2\beta)>-1.\] Then it can be touched at \(x_{0}=0\) from below by \[\varphi(x):=u(0)+\frac{\min(\gamma,1)-1}{2}x+x^{2}\] which does not satisfy \(|\varphi^{\prime}(x_{0})|-1\geq 0\) or \(-\varphi^{\prime\prime}(x_{0})\geq 0\). The only option left is \(\beta=0\) in which case \[u(x)=\begin{cases}1+x\text{ if }x<0,\\ (\alpha-1)x+1\text{ if }x\geq 0,\end{cases}\] is indeed a viscosity solution. This assertion can be verified in two steps, depending on whether \(\alpha\) is larger than \(2\) or lies between \(0\) and \(2\): If \(\alpha>2\), then \(u\) can only be touched from below at \(x_{0}=0\). In this case, the derivative of the test function is at least \(1\), satisfying the super-solution condition. If \(\alpha\in[0,2)\), then \(u\) can only be touched from above at \(x_{0}=0\) Figure 2. Graph of a one dimensional candidate for the solution of (1.3). In this case, the derivative of the test function lies between \(-1\) and \(1\), satisfying the sub-solution condition. If \(\alpha=2\), then \(u=1+x\), and it clearly satisfies both the sub and super-solution criteria at every point. **Case 2: \(\alpha\in[-2,0)\)**. 
Now we can check in a similar fashion as before that \(\beta=\alpha/2\) is the only possibility that gives a viscosity solution. This choice of the parameter \(\beta\) is the one that makes \(u\) a translation of the function \(-|x|\) \[u(x)=1+\beta-|x-\beta|=1+\alpha/2-|x-\alpha/2|.\] **Case 3: \(\alpha<-2\)**. In this case there are no solutions for the given boundary values. #### 1.3.3. Comparison principle in one dimension Let \(\Omega=(-1,1)\), \(\Omega_{E}=(-1,0)\), \(\Omega_{B}=(0,1)\), and \(\Gamma=\{0\}\) as in the previous example. Assume that \(v\) is a super-solution of \[\begin{cases}|v^{\prime}|-1\geq 0\text{ in }\Omega_{E},\\ -v^{\prime\prime}\geq 0\text{ in }\Omega_{B},\\ \max(|v^{\prime}|-1,-v^{\prime\prime})\geq 0\text{ on }\Gamma,\end{cases}\] meanwhile \(u\) a sub-solution of the following equation, which leaves a gap \(\eta>0\) between both equations \[\begin{cases}|u^{\prime}|-1+\eta\leq 0\text{ in }\Omega_{E},\\ -u^{\prime\prime}\leq 0\text{ in }\Omega_{B},\\ \min(|u^{\prime}|-1+\eta,-u^{\prime\prime})\leq 0\text{ on }\Gamma.\end{cases}\] Assuming that both functions are continuous and moreover smooth outside of zero, we will see that a contradiction arises by assuming that the graph of \(v\) touches the graph of \(u\) from above at the origin. To the left of zero we find that \(u^{\prime}_{E}(0)\in[-1+\eta,1-\eta]\). Since \(v_{E}\) is a super-solution touching \(u_{E}\) at zero, we must have that \(v^{\prime}_{E}(0)\leq-1\). Let us now examine the behavior of the functions to the right of zero. If \(u^{\prime}_{B}(0)<-1+\eta\), the graph of \(u\) forms a concave angle at zero, with an upper supporting line of slope less than \(-1+\eta\). In this case, we can construct a concave paraboloid \(\varphi\) that touches \(u\) from above at the origin, and further impose \(\varphi^{\prime}(0)<-1+\eta\). However, this contradicts the sub-solution condition at the interface, indicating that \(u^{\prime}_{B}(0)\geq-1+\eta\) must hold instead. 
Since \(v_{B}\) touches \(u_{B}\) at zero, we conclude that also \(v^{\prime}_{B}(0)\geq-1+\eta\). Notice that this step indicates that the eikonal equation gets somehow transmitted towards the Brownian side at the interface. Putting the two conclusions on \(v\) together, we notice that the graph of \(v\) forms a convex angle at zero, with a lower supporting plane of slope \(-1+\eta/2\). This contradicts \(v\) being a super-solution, as it can be touched by a convex paraboloid \(\varphi\) with \(\varphi^{\prime}(0)=-1+\eta/2\in(-1,1)\). The case with no gap, \(\eta=0\), can be recovered by an approximation argument. See the comments after Remark 4.2 and Corollary 4.3. #### 1.3.4. Challenges and strategies The previous and rather simple reasoning in one dimension works because we are assuming some regularity on the solutions. In higher dimensions, the solutions are more flexible because of the variations in the directions parallel to the interface, therefore the comparison principle for viscosity solutions becomes delicate. Our strategy involves the use of inf/sup regularizations and boundary estimates. We perform inf/sup convolutions along the directions parallel to the flat interface in order to assume that the sub-solution \(u\) is semi-convex along the interface, and the super-solution \(v\) is semi-concave also along the interface. If we assume by contradiction that \(v\) touches \(u\) from above at some point \(x_{0}\in\Gamma\), the semi-convexity assumption implies that the functions \(C^{1,1}\) at \(x_{0}\) along the interface. In other words, they are squeezed between two paraboloids at \(x_{0}\) and along the interface. Lemma 4.6 and its Corollary 4.7 show that if the boundary data for the solution of a uniformly elliptic equation is trapped between two paraboloids, then the solution is differentiable at such point. The main observation is that the normal derivative is well defined. 
The argument for this proof is due to Caffarelli [31, Lemma 4.31] and [22, Theorem 9.31]. On the other hand, we can see Lemma 4.12 and Lemma 4.13 as analogous boundary regularity results for solutions of first order equations. While we can present a fairly general result, our approach does not appear to yield a comparison principle for non-flat interfaces and general non-translation invariant operators. ### Further questions One of the main reasons that justify the analysis of this problem relies on a verification theorem for some particular class of games. In [15] we proposed and analyzed a Hamilton Jacobi equation for the following one: a particle starts at some position \(x\in\Omega\) and it is driven by two players with opposite goals, the first one wants to maximize the expected exit time from \(\Omega\), meanwhile the second wants to minimize it. The first player chooses \(\Omega_{B}\subseteq\Omega\) and the second player fixes a unit vector field \(v:\Omega_{E}=\Omega\setminus\overline{\Omega_{B}}\to\partial B_{1}\) such that the dynamics are given by the following SDE (\(B_{t}\) is a Brownian motion in \(\mathbb{R}^{n}\)) \[d\gamma=\mathbb{1}_{\Omega_{E}}(\gamma)v(\gamma)dt+\mathbb{1}_{\Omega_{B}}( \gamma)dB_{t},\qquad\gamma(0)=x.\] In a casual manner, we can describe the dynamic of \(\gamma\) as being influenced by two players as follows: the first one determines who drives, however when this player takes the wheel it does so in some random fashion and without any preference for any given direction (a drunkard's walk). Meanwhile the second (and sober) player, whenever it gets the opportunity, aims to escape \(\Omega\) by driving at maximum speed in some given set of directions. 
The value function for this game is the least expected exit time and should satisfy the gradient constrained problem \[\begin{cases}\min(|Du|-1,\frac{1}{2}(-\Delta)u-1)=0\text{ in }\Omega,\\ u=0\text{ on }\partial\Omega.\end{cases}\] The corresponding verification theorem remains an open problem. One possible way to address this question would be to fix the set \(\Omega_{B}\) and solve a corresponding optimal control problem (for the second player) and then consider the maximum among all the values. If we let \(u_{\Omega_{B}}\) to be _the solution_ of (1.1) with zero boundary data, we should then expect that the solution \(u\) of the problem above is actually given by the envelope \[u=\max_{\Omega_{B}\subseteq\Omega}u_{\Omega_{B}}.\] One of the reasons why we are not able to answer this is because **we do not even know if \(u_{\Omega_{B}}\) is well defined for a general \(\Omega_{B}\subseteq\Omega\)** (or at least a dense set in some suitable topology). On the other hand, if both players seek to minimize the expected exit time under the same rules, then we are actually considering an optimal control problem where the value function should satisfy the following gradient constrained problem \[\begin{cases}\max(|Du|-1,\frac{1}{2}(-\Delta)u-1)=0\in\Omega,\\ u=0\text{ on }\partial\Omega.\end{cases}\] This is the simplest model for the problem usually known as gradient constrained that was addressed in the previous section [18, 19, 47, 30, 42, 41, 10, 1, 23, 24, 25, 26, 37, 38, 39]. Once again, it would not be surprising that in this case \(u=\min_{\Omega_{B}\subseteq\Omega}u_{\Omega_{B}}\). As far as we know, this question has not been addressed before in the literature either. In either case, the well-posedness for the problems that determine \(u_{\Omega_{B}}\) not only fulfills a theoretical inquiry. They can also be used to estimate how far from optimal is a given strategy, as they are bounds for the corresponding value function of interest. 
### Organization of the paper In Section 2, we provide precise definitions of viscosity solutions for equations with discontinuous dynamics. We revisit some classical results, such as stability and Perron's method. The existence of solutions, stated in Theorem 2.7, is a rather straightforward consequence of known results in the classical theory and does not require the comparison principle. Section 3 is devoted to show that the equations (1.1) and (1.2) are equivalent. This result can be interpreted as the canonical transmission condition at the interface. Theorem 3.1 is stated to a more general class of problems. The main result of this work, the comparison principle, is proven in Section 4 for flat interfaces. As already announced, we require of inf/sup regularizations and some boundary regularity estimates proven in Section 4.2. ### Acknowledgments I would like to thank Ryan Hynd, Arturo Arellano, and Edgard Pimentel for their helpful feedback on this project. The author was supported by CONACyT-MEXICO grant A1-S-48577. ## 2. Preliminaries ### Notation For a real number or a real-value function \(\alpha\), we denote its positive and negative part as \(\alpha_{\pm}:=\max(\pm\alpha,0)\). Given a point \(x\in\mathbb{R}^{n}\) we use \(x_{1},x_{2},\ldots,x_{n}\in\mathbb{R}\) to denote its coordinates. The notation \(x^{\prime}\in\mathbb{R}^{n-1}\) may be used to denote the first \((n-1)\) coordinates of \(x\), i.e. \(x^{\prime}=(x_{1},x_{2},\ldots,x_{n-1})\) and \(x=(x^{\prime},x_{n})\). Occasionally, we may also use \(x^{\prime}\in\mathbb{R}^{n}\) as a point which is actually in \(\{x_{n}=0\}\). Sometimes \(x_{0}\) may denote a fixed point in \(\mathbb{R}^{n}\), or perhaps \(\{x_{k}\}\) may be a sequence of points in \(\mathbb{R}^{n}\). In such cases, the coordinates will be denoted by \((x_{k})_{1},(x_{k})_{2},\ldots,(x_{k})_{n}\) and \(x_{k}^{\prime}=((x_{k})_{1},(x_{k})_{2},\ldots,(x_{k})_{n-1})\). 
The open ball in \(\mathbb{R}^{n}\) of radius \(r>0\) and center \(x_{0}\in\mathbb{R}^{n}\) is denoted by \(B_{r}(x_{0})\) and we may omit the center when \(x_{0}=0\). Whenever we talk about a ball in dimension \((n-1)\), we denote it by \(B_{r}^{n-1}(x_{0}^{\prime})\) or \(B_{r}^{n-1}\). We use \(\mathbb{R}^{n\times n}_{\rm sym}\) to denote the space of \(n\times n\) symmetric matrices. For \(M_{1},M_{2}\in\mathbb{R}^{n\times n}_{\rm sym}\) we say that \(M_{1}\leq M_{2}\) iff \(\xi\cdot M_{1}\xi\leq\xi\cdot M_{2}\xi\) for any \(\xi\in\mathbb{R}^{n}\setminus\{0\}\). The remaining inequalities (\(\geq\), \(<\), and \(>\)) are understood in a similar way. We use the notation \(L^{\infty}\), \(C\), \(C^{k}\), and \(C^{k,\alpha}\) to respectively denote the spaces of bounded functions, continuous functions, \(k^{th}\)-order continuously differentiable functions, and \(k^{th}\)-order continuously differentiable functions with derivatives of order \(k\) being \(\alpha\)-Holder continuous (\(\alpha\in(0,1]\)). Given a set \(\Omega\subseteq\mathbb{R}^{n}\), a relatively open subset \(\Gamma\subseteq\partial\Omega\) is said to be uniformly Lipschitz regular iff for every \(x_{0}\in\Gamma\) there exists some change of variables \(\Phi:B_{1}\to U\) where \(U=\Phi(B_{1})\) is a neighborhood of \(x_{0}=\Phi(0)\), and \(\Phi\) is a bi-Lipchitz map, with a Lipschitz norm independent of \(x_{0}\), such that \[\Phi(B_{1}\cap\{x_{n}<0\}) =U\cap(\mathbb{R}^{n}\setminus\overline{\Omega}),\] \[\Phi(B_{1}\cap\{x_{n}>0\}) =U\cap\Omega,\] \[\Phi(B_{1}\cap\{x_{n}=0\}) =U\cap\Gamma.\] We will also consider the scenario where the maps are assumed to be \(C^{2}\)-regular diffeomorphism, in this case we say that \(\Gamma\) is \(C^{2}\)-regular. A _second-order operator_ over \(\Omega\subseteq\mathbb{R}^{n}\) is defined in terms of a function \(H=H(M,p,z,x)\in C(\mathbb{R}^{n\times n}_{\rm sym}\times\mathbb{R}^{n}\times \mathbb{R}\times\Omega)\). 
For \(u\) second-order differentiable at \(x_{0}\in\Omega\) we compute \[Hu(x_{0}):=H(D^{2}u(x_{0}),Du(x_{0}),u(x_{0}),x_{0}).\] We may occasionally also refer to the function \(H\) as the operator. In the present article, we will say that an operator \(H\) is _degenerate elliptic_ if for any pair \((M_{1},p,z_{1},x),(M_{2},p,z_{2},x)\in\mathbb{R}^{n\times n}_{\rm sym}\times \mathbb{R}^{n}\times\mathbb{R}\times\Omega\) \[M_{1}\geq M_{2},\quad z_{1}\leq z_{2}\qquad\Rightarrow\qquad H(M_{1},p,z_{1},x )\leq H(M_{2},p,z_{2},x).\] Our convention makes \(H(M,z)=-\operatorname{tr}(AM)+\lambda z\) degenerate elliptic for any \(A\in\mathbb{R}^{n\times n}_{sym}\) with \(A\geq 0\), and \(\lambda\geq 0\). The monotonicity with respect to \(z\) is usually called properness in the literature, meanwhile the monotonicity with respect to the Hessian is the one referred to as degenerate ellipticity. If \(H=H(M,p,z)\) is independent of \(x\in\Omega\) we say that the operator is _translation invariant_. We say the operator is _convex_ if for every \(x\in\Omega\) the function \(H(\cdot,\cdot,\cdot,x)\) is convex. It is instead _quasi-convex_ if for every \(x\in\Omega\) and \(\lambda\in\mathbb{R}\), the sub-level sets \(\{H(\cdot,\cdot,\cdot,x)\leq\lambda\}\) are convex. If \(H=H(p,z,x)\) is independent of the matrix variable we say that it is a _first-order operator_ and \[Hu(x_{0}):=H(Du(x_{0}),u(x_{0}),x_{0})\] can be evaluated for \(u\) first-order differentiable at \(x_{0}\). If a first-order operator satisfies that for every \(z,\lambda\in\mathbb{R}\), the set \(\bigcup_{x\in\Omega}\{H(\cdot,z,x)\leq\lambda\}\) is bounded, then we say that it has _bounded sub-level sets_. Clearly the bound on the sets depends on \(z\); however, as we will be considering bounded solutions we can also make the sub-level sets uniformly bounded in \(z\). 
To be precise, let us consider that \(|z|\leq M\), then \(\bigcup_{(z,x)\in[-M,M]\times\Omega}\{H(\cdot,z,x)\leq\lambda\}\) is bounded by the continuity of \(H\). This property is usually a consequence of some sort of coercivity or super-linearity assumption on \(H\). A second-order operator is _uniformly elliptic_ if it is controlled by a family of elliptic linear operators with uniformly bounded coefficients (from above and away from zero). In order to do this it is convenient to introduce as well the _extremal Pucci operators_. These are defined with respect of some interval \([\lambda,\Lambda]\subseteq(0,\infty)\) such that \(\mathcal{M}^{\pm}_{\lambda,\Lambda}:\mathbb{R}^{n\times n}_{\mathrm{sym}} \to\mathbb{R}\) is given by \[\mathcal{M}^{+}_{\lambda,\Lambda}(M) :=\sup\{-\operatorname{tr}(AM)\ |\ A\in\mathbb{R}^{n\times n}_{ \mathrm{sym}},\,\lambda I\leq A\leq\Lambda I\}=-\sum_{e\in\mathrm{eig}(M)}( \lambda e_{+}-\Lambda e_{-}),\] \[\mathcal{M}^{-}_{\lambda,\Lambda}(M) :=\inf\{-\operatorname{tr}(AM)\ |\ A\in\mathbb{R}^{n\times n}_{ \mathrm{sym}},\,\lambda I\leq A\leq\Lambda I\}=-\sum_{e\in\mathrm{eig}(M)}( \Lambda e_{+}-\lambda e_{-}).\] We say that \(H\) gives a _uniformly elliptic operator_ with respect to the interval \([\lambda,\Lambda]\subseteq(0,\infty)\) iff for every \((M_{1},p_{1},z_{1},x),(M_{2},p_{2},z_{2},x)\in\mathbb{R}^{n\times n}_{\mathrm{ sym}}\times\mathbb{R}^{n}\times\mathbb{R}\times\Omega\), \(M=M_{2}-M_{1}\), \(p=p_{2}-p_{1}\), and \(z=z_{2}-z_{1}\), we have that \[\mathcal{M}^{-}_{\lambda,\Lambda}(M)-\Lambda|p|-\Lambda z_{-}\leq H(M_{2},p_{2 },z_{2},x)-H(M_{1},p_{1},z_{1},x)\leq\mathcal{M}^{+}_{\lambda,\Lambda}(M)+ \Lambda|p|+\Lambda z_{+}.\] ### Viscosity solutions Consider for \(j\in\{1,\ldots,k\}\), degenerate elliptic operators \(H_{j}\) defined over some subset \(\Omega_{j}\subseteq\Omega\). 
In the following definition we give a notion of sub and super-solutions of the problem \[\begin{cases}H_{1}u=0\text{ in }\Omega_{1},\\ \vdots\\ H_{k}u=0\text{ in }\Omega_{k}.\end{cases} \tag{2.4}\] We say that a function \(\varphi:\Omega_{\varphi}\to\mathbb{R}\)_touches_ another function \(u:\Omega_{u}\to\mathbb{R}\) from above at some \(x_{0}\in\Omega_{\varphi}\cap\Omega_{u}\) iff \[u\leq\varphi\text{ in }\Omega_{\varphi}\cap\Omega_{u},\text{ with equality at }x_{0}.\] If additionally, \[u<\varphi\text{ in }\Omega_{\varphi}\cap\Omega_{u}\setminus\{x_{0}\},\] we say that the contact is _strict_. Contact from below is defined similarly. **Definition 2.1**.: _A function \(u\in C(\Omega)\) is a viscosity sub-solution of (2.4) iff for every \(\varphi\in C^{2}(B_{r}(x_{0}))\) that touches \(u\) from above at \(x_{0}\in\Omega\), it holds that_ \[H_{j}\varphi(x_{0})\leq 0\text{ if }x_{0}\in\Omega_{j}.\] _A function \(v\in C(\Omega)\) is a viscosity super-solution of (2.4) iff for every \(\varphi\in C^{2}(B_{r}(x_{0}))\) that touches \(v\) from below at \(x_{0}\in\Omega\), it holds that_ \[H_{j}\varphi(x_{0})\geq 0\text{ if }x_{0}\in\Omega_{j}.\] _Finally, \(u\in C(\Omega)\) is a solution of (2.4) iff it is both a sub and super-solution of the respective problem._ For sub-solutions, we could also say that the function satisfies the following inequalities in the viscosity sense \[\begin{cases}H_{1}u\leq 0\text{ in }\Omega_{1},\\ \vdots\\ H_{k}u\leq 0\text{ in }\Omega_{k}.\end{cases}\] A similar notation is also used for super-solutions. Finally, if we ask for strict contact in the definitions above we obtain the exact same notion of solutions, a useful trick in a few proofs. In the next definition we consider \(\Omega_{-},\Gamma,\Omega_{+}\subseteq\Omega\subseteq\mathbb{R}^{n}\), such that \(\Gamma=\Omega\setminus(\Omega_{-}\cup\Omega_{+})\). The two main operators \(H_{-}\) and \(H_{+}\) are defined over \(\Omega_{-}\cup\Gamma\) and \(\Omega_{+}\cup\Gamma\) respectively. 
Our goal is to define the viscosity solutions for the following problem \[\begin{cases}H_{-}u=0\text{ in }\Omega_{-},\\ H_{+}u=0\text{ in }\Omega_{+},\\ \text{Either }H_{-}u\text{ or }H_{+}u=0\text{ on }\Gamma.\end{cases} \tag{2.5}\] In this case the operators used for sub and super solutions are different over the set \(\Gamma\). **Definition 2.2**.: _A function \(u\in C(\Omega)\) is a viscosity sub-solution of (2.5) iff it is a viscosity sub-solution of_ \[\begin{cases}H_{-}u\leq 0\text{ in }\Omega_{-},\\ H_{+}u\leq 0\text{ in }\Omega_{+},\\ \min(H_{-}u,H_{+}u)\leq 0\text{ on }\Gamma.\end{cases}\] _A function \(u\in C(\Omega)\) is a viscosity super-solution of (2.5) iff it is a viscosity super-solution of_ \[\begin{cases}H_{-}u\geq 0\text{ in }\Omega_{-},\\ H_{+}u\geq 0\text{ in }\Omega_{+},\\ \max(H_{-}u,H_{+}u)\geq 0\text{ on }\Gamma.\end{cases}\] _Finally, \(u\in C(\Omega)\) is a solution of (2.5) iff it is both a sub and super-solution of the respective problem._ We leave as an exercise to check that the examples discussed in Section 1.3.2 are viscosity solutions. Let us give a further example of a viscosity solution in an annular domain \(\Omega=B_{R}\setminus B_{r}\subseteq\mathbb{R}^{n}\) with \(n\geq 2\) and \(0<r<R\), taken from [15]. The operators under consideration will be as in the introduction \(H_{-}=|p|-1\) and \(H_{+}=\frac{1}{2}(-\Delta)-1\). Let \(\rho\in(r,R)\), \(\Omega_{-}=B_{R}\setminus B_{\rho}\), \(\Omega_{+}=B_{\rho}\setminus B_{r}\) and \[u(x):=\begin{cases}R-|x|\text{ in }\overline{\Omega_{-}},\\ A+B\Phi(|x|)-|x|^{2}/n\text{ in }\overline{\Omega}\setminus\overline{\Omega_{-}}. 
\end{cases}\] The function \(\Phi\) is a multiple of the fundamental solution for the Laplacian \[\Phi(s)=\begin{cases}-\ln s\text{ if }n=2,\\ s^{2-n}\text{ if }n\geq 2.\end{cases}\] The constants \(\rho\), \(A\), and \(B\) are chosen such that \(u\) is continuous in \(\Omega\), and attains the boundary value \(u=0\) on \(\partial\Omega\), and \(|Du|\leq 1\) on both sides of \(\Gamma\). The specific equations one should consider are the following and have infinite solutions \[A+B\Phi(r)-r^{2}/n=0,\] \[A+B\Phi(\rho)-\rho^{2}/n=R-\rho,\] \[B\Phi^{\prime}(\rho)-2\rho/n\leq 1.\] See Figure 3. Under these requirements it is rather easy to check that \(u\) is a viscosity solution of (1.1) which is Lipschitz continuous but not \(C^{1}\). One may wonder if solutions will be semi-concave, as for the eikonal equation. However, the one-dimensional examples in Section 1.3.2 show that this is not necessarily the case. #### 2.2.1. Stability and Perron's method The flexibility of the problem (2.5) over \(\Gamma\) has the benefit of providing the following stability property. **Lemma 2.1**.: _Let \(\Omega_{-},\Omega_{+}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be all open sets, and let \(\Gamma:=\Omega\setminus(\Omega_{-}\cup\Omega_{+})\). Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\) respectively, and let \(\{u_{k}\}\subseteq C(\Omega)\) be a sequence of viscosity sub-solutions for (2.5) that converges uniformly to some \(u\in C(\Omega)\). Then \(u\) is also a viscosity sub-solution for (2.5)._ Proof.: Let \(\varphi\in C^{2}(B_{r}(x_{0}))\) be a test function that strictly touches \(u\) from above at \(x_{0}\in\Omega\). Up to a vertical translation and a sub-sequence, \(\varphi+m_{k}\) touches \(u_{k}\) from above at \(x_{k}\to x_{0}\), and \(m_{k}\to 0\), due to the uniform convergence. If \(x_{0}\in\Omega_{+}\) then we can assume without loss of generality that \(\{x_{k}\}\subseteq\Omega_{+}\) as well. 
This means that \(H_{+}(\varphi+m_{k})(x_{k})\leq 0\) and the desired inequality for \(H_{+}\varphi(x_{0})\) follows by continuity. The same reasoning applies if \(x_{0}\in\Omega_{-}\) instead. Figure 3. (a) Graph of a viscosity solution of (1.1) in an annular domain. (b) Lateral view. If \(x_{0}\in\Gamma\), then either \(H_{-}(\varphi+m_{k})(x_{k})\leq 0\) or \(H_{+}(\varphi+m_{k})(x_{k})\leq 0\) must be true an infinite number of times. In either case we conclude once again by continuity that \(\min(H_{-}\varphi(x_{0}),H_{+}\varphi(x_{0}))\leq 0\). The following consequence of the stability given by the previous lemma is the first step towards the construction of the viscosity solution of the boundary value problem by Perron's method. **Corollary 2.2**.: _Let \(\Omega_{\pm}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be all open sets, and let \(\Gamma:=\Omega\setminus(\Omega_{-}\cup\Omega_{+})\). Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\) respectively. Let \(\mathcal{S}\) be a set of viscosity sub-solutions for the problem (2.5) which is equicontinuous and bounded. Then the upper envelope \(u_{\mathcal{S}}(x):=\sup_{u\in\mathcal{S}}u(x)\) is also a sub-solution of (2.5)._ In the case where \(\mathcal{S}\) is finite, the equicontinuity and boundedness hypotheses are superfluous and the result is known as the lattice property: The maximum of two sub-solutions is a sub-solution. The following existence result does not require the comparison principle to hold. However, it assumes as in the previous lemma that the upper envelope of the family of sub-solutions of the Dirichlet problem turns out to be continuous up to the boundary. **Lemma 2.3** (Perron's Solution).: _Let \(\Omega_{\pm}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be all open sets such that \(\partial\Omega\neq\emptyset\), and let \(\Gamma:=\Omega\setminus(\Omega_{-}\cup\Omega_{+})\). 
Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\) respectively. Given \(g\in C(\partial\Omega)\), define the set \(\mathcal{S}_{g}\) such that_ \[\mathcal{S}_{g}:=\{u\in C(\overline{\Omega})\ |\ u\text{ is a viscosity sub-solution of \eqref{eq:cond-s} and }u=g\text{ in }\partial\Omega\}.\] _If there exists an equicontinuous and bounded subset \(\mathcal{S}\subseteq\mathcal{S}_{g}\) such that_ \[u_{g}(x):=\sup_{u\in\mathcal{S}_{g}}u(x)=\sup_{u\in\mathcal{S}}u(x),\] _then \(u_{g}\) is a viscosity solution of (2.5) on \(\Omega\) taking the boundary value \(g\) continuously on \(\partial\Omega\)._ Notice that if \(\Omega\) is bounded, as we will set in the next section, the boundedness condition in \(\mathcal{S}\) follows from the equicontinuity assumption. Proof.: By Corollary 2.2 we know that \(u_{g}\) is indeed a sub-solution. In this proof we will just check that \(u_{g}\) is also a super-solution. Assume by contradiction that \(\varphi\in C^{2}(\overline{B_{r}(x_{0})})\) strictly touches \(u_{g}\) from below at some \(x_{0}\in\Gamma\) over \(B_{r}(x_{0})\subseteq\Omega\), but nonetheless \(\max(H_{-}\varphi,H_{+}\varphi)<0\) over \(B_{r}(x_{0})\). Although we focus on the case where the contact occurs over the interface, the following argument can also be adapted to the case where \(x_{0}\) belongs to either \(\Omega_{-}\) or \(\Omega_{+}\). Letting \(\delta\in(0,\max_{\partial B_{r}(x_{0})}(u_{g}-\varphi))\), so that we still have \(\max(H_{-}(\varphi+\delta),H_{+}(\varphi+\delta))<0\) over \(B_{r}(x_{0})\), we construct \[v:=\begin{cases}\max(u_{g},\varphi+\delta)\text{ in }B_{r}(x_{0}),\\ u_{g}\text{ in }\overline{\Omega}\setminus B_{r}(x_{0}).\end{cases}\] This is a function in \(\mathcal{S}_{g}\) that contradicts the maximality of \(u_{g}\). **Remark 2.4**.: _One can easily extend the notions of viscosity solutions to semi-continuous functions. 
In this case there are a similar stability properties under half-relaxed limits or gamma-convergence. These will not be used in our work as the regularity of the operators will allow us to assume that the solutions are always continuous._ #### 2.2.2. Compactness for sub-solutions Just as sub-solutions of the eikonal equation are Lipschitz continuous, we can show that viscosity sub-solutions of first-order, degenerate elliptic operators have a modulus of continuity that depends on the zero sub-level set of the operator, which we assume to be bounded. **Lemma 2.5**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\) be an open bounded set with uniformly Lipschitz regular boundary. Let \(H\in C(\mathbb{R}^{n}\times\mathbb{R}\times\Omega)\) be first-order, degenerate elliptic operator with bounded sub-level sets. Then any bounded sub-solution \(u\in C(\Omega)\) of \(Hu\leq 0\) in \(B_{r}(x_{0})\) is Lipschitz continuous._ Proof.: Follows because \(u\) is also a sub-solution of the eikonal equation \(|Du|-R\leq 0\) in \(\Omega\) for some radius \(R>0\) sufficiently large such that \(\bigcup_{(z,x)\in[-M,M]\times\Omega}\{H(\cdot,z,x)\leq 0\}\subseteq B_{R}\) with \(M:=\|u\|_{L^{\infty}(\Omega)}\). This observation allows to show first that \([u]_{C^{0,1}(\Omega)}\leq R\) if \(\Omega\) is convex. In general, the Lipschitz assumption on \(\partial\Omega\) suffices to extend the Lipschitz regularity to the whole domain by a covering argument. A detailed discussion can be found in [3, Section IV.3.1]. In the following lemma, we consider sub-solutions of (2.5) with continuous boundary data on \(\Omega\). We then perform a replacement over \(\Omega_{+}\) using the solution of the problem \(H_{+}u=0\) in \(\Omega_{+}\). A sufficient condition for this construction to be well-defined is that \(H_{+}\) is uniformly elliptic and \(\Omega_{+}\) is a bounded set with uniformly Lipschitz regular boundary (also an exterior cone condition would be enough). 
This ensures the existence of barriers controlling the behavior at the boundary, as shown in [36]. The result is a continuous function on \(\overline{\Omega}\) with a modulus of continuity depending on the problem's data. **Lemma 2.6**.: _Let \(\Omega_{+}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be open bounded sets, \(\Omega_{-}=\Omega\setminus\overline{\Omega_{+}}\), and \(\Gamma=\Omega\setminus(\Omega_{+}\cup\Omega_{-})=\partial\Omega_{\pm}\cap\Omega\). Assume that \(\Omega_{\pm}\) have uniformly Lipschitz regular boundaries. Consider \(H_{\pm}\) degenerate elliptic operators defined over \(\Omega_{\pm}\cup\Gamma\) respectively, such that \(H_{-}\) is a first-order operator with bounded sub-level sets, while \(H_{+}\) is a uniformly elliptic second-order operator. Let \(g\in C(\partial\Omega)\) be the boundary data of the problem._ _Given \(u\in C(\overline{\Omega})\), a bounded sub-solution of (2.5) such that \(u=g\) over \(\partial\Omega\), let \(\bar{u}\in C(\overline{\Omega})\) be the solution of the Dirichlet boundary value problem_ \[\begin{cases}H_{+}\bar{u}=0\text{ in }\Omega_{+},\\ \bar{u}=u\text{ on }\Omega\setminus\Omega_{+}.\end{cases}\] _Then \(\bar{u}\) is well defined and its modulus of continuity depends only on \(\Omega\), \(\Omega_{\pm}\), \(H_{\pm}\), and \(g\). Moreover, \(\bar{u}\) is a sub-solution of (2.5)._ We sketch the main arguments in the proof, which combines some of the fundamental constructions and estimates for uniformly elliptic equations developed by several authors. Proof.: By Lemma 2.5 we get that \(u\) is Lipschitz on \(\Omega\setminus\Omega_{+}\). From this and the existence of barriers provided in [36] we obtain the existence of a viscosity solution by Perron's method [16, Section 4] and [29]. In other words, \(\bar{u}\) is well defined.
The uniform modulus of continuity for the solution is a consequence of the interior Hölder estimates for uniformly elliptic equations [11, 46], and the existence of barriers with uniform modulus, once again found in the explicit construction given in [36]. The procedure for combining the modulus of continuity at the boundary with the interior regularity estimate is standard. See, for example, the end of Chapter 4 in [12]. To see that \(\bar{u}\) is a sub-solution, we just have to consider the situation in which a test function touches \(\bar{u}\) from above over the interface, the other two cases being immediate by construction. In this scenario, the test function also touches the original sub-solution \(u\), and the desired inequality follows at once. By combining the previous result with Perron's method stated in Lemma 2.3 we recover the existence of solutions to the Dirichlet problem. **Theorem 2.7**.: _Let \(\Omega_{+}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be open bounded sets, \(\Omega_{-}=\Omega\setminus\overline{\Omega_{+}}\), and \(\Gamma=\Omega\setminus(\Omega_{+}\cup\Omega_{-})=\partial\Omega_{\pm}\cap\Omega\). Assume that \(\Omega_{\pm}\) have uniformly Lipschitz regular boundaries. Consider \(H_{\pm}\) as degenerate elliptic operators defined over \(\Omega_{\pm}\cup\Gamma\) respectively, such that \(H_{-}\) is a first-order operator with bounded sub-level sets, while \(H_{+}\) is a uniformly elliptic second-order operator._ _Given a sub-solution \(g\in C(\overline{\Omega})\) of (2.5), there exists a viscosity solution of (2.5) in \(\Omega\) taking the boundary value \(g\) on \(\partial\Omega\)._ For instance, the function \(g=0\) serves as a sub-solution for the original problem discussed in the introduction.
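This last claim can be verified in one line. The computation below is ours, not from the text; it uses the model operators \(H_{-}(p)=|p|-1\) and \(H_{+}(M)=\frac{1}{2}(-\operatorname{tr}M)-1\), which are recalled in Remark 4.2 as the operators of the introductory problem.

```latex
% One-line check (our computation) that g = 0 is a sub-solution of (2.5)
% for the model operators H_-(p) = |p| - 1, H_+(M) = (1/2)(-tr M) - 1:
% since Dg = 0 and D^2 g = 0 classically,
\[
  H_{-}g = |0| - 1 = -1 \le 0 \ \text{ in } \Omega_{-},
  \qquad
  H_{+}g = \tfrac{1}{2}(-\operatorname{tr}0) - 1 = -1 \le 0 \ \text{ in } \Omega_{+},
\]
% and on the interface \min(H_{-}g, H_{+}g) = -1 \le 0 as well, so g = 0
% is a classical, and hence viscosity, sub-solution of (2.5).
```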
**Corollary 2.8**.: _Let \(\Omega_{+}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be open bounded sets, \(\Omega_{-}=\Omega\setminus\overline{\Omega_{+}}\), and \(\Gamma=\Omega\setminus(\Omega_{+}\cup\Omega_{-})=\partial\Omega_{\pm}\cap\Omega\). Assume that \(\Omega_{\pm}\) have uniformly Lipschitz regular boundaries. Then there exists a viscosity solution \(u\in C(\overline{\Omega})\) of (1.1) with \(u=0\) on \(\partial\Omega\)._ In Section 1.3.2 we discussed the one-dimensional problem over \(\Omega=(-1,1)\), with \(\Omega_{+}=(0,1)\). Recall that in this case, when we set \(g(-1)=0\) and \(g(1)<-2\), there are no solutions to the boundary value problem. In a few words, any pair of sub-solutions in \(\Omega_{\pm}\), with the given boundary values, and coinciding at the interface \(\Gamma=\{0\}\), will not satisfy the sub-solution property at zero. From this point on, the main goal of the article is to establish uniqueness of solutions for the Dirichlet boundary value problem. ## 3. Strong equation In this section we find a stronger characterization of the problem (2.5) with a \(C^{2}\)-regular interface \(\Gamma\), and \(H_{+}\) uniformly elliptic. Namely, the solutions of (2.5) are exactly the solutions of \[\begin{cases}H_{-}u=0\text{ in }\Omega_{-}\cup\Gamma,\\ H_{+}u=0\text{ in }\Omega_{+}.\end{cases} \tag{3.6}\] **Theorem 3.1**.: _Let \(\Omega_{+}\subseteq\Omega\subseteq\mathbb{R}^{n}\) be open sets, \(\Omega_{-}=\Omega\setminus\overline{\Omega_{+}}\), and \(\Gamma=\Omega\setminus(\Omega_{+}\cup\Omega_{-})=\partial\Omega_{\pm}\cap\Omega\) a \(C^{2}\)-regular interface. Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\), with \(H_{+}\) uniformly elliptic.
Then \(u\in C(\Omega)\) is a viscosity sub-solution (super-solution or solution) of the relaxed problem (2.5) if and only if it is a viscosity sub-solution (super-solution or solution) of the strong problem (3.6)._ After a local \(C^{2}\)-regular change of variables we can assume without loss of generality that \(\Omega=B_{1}\), \(\Omega_{\pm}=B_{1}\cap\{\pm x_{n}>0\}\) and \(\Gamma=B_{1}\cap\{x_{n}=0\}\). This is an assumption adopted in the rest of the section. As will be shown in our proof, we only need a coercivity hypothesis on \(H_{+}\) with respect to the Hessian variable. Namely, that for any \((M,p,z,x)\in\mathbb{R}_{\mathrm{sym}}^{n\times n}\times\mathbb{R}^{n}\times\mathbb{R}\times\Omega\) there exists some \(\alpha>0\) such that (here, for \(v,w\in\mathbb{R}^{n}\), we denote by \(v\otimes w\in\mathbb{R}^{n\times n}\) the matrix with entries \((v\otimes w)_{ij}=v_{i}w_{j}\)) \[H_{+}(M-\alpha(e_{n}\otimes e_{n}),p,z,x)>0.\] This follows from a uniform ellipticity hypothesis because of the inequality \[H_{+}(M-\alpha(e_{n}\otimes e_{n}),p,z,x)-H_{+}(M,p,z,x)\geq\mathcal{M}_{\lambda,\Lambda}^{-}(-\alpha(e_{n}\otimes e_{n}))=\alpha\lambda.\] In the following lemmas we denote by \(\varphi_{\pm}\) the restrictions of a given function \(\varphi:\Omega\subseteq\mathbb{R}^{n}\to\mathbb{R}\) to \(\Omega_{\pm}\cup\Gamma\) respectively. The hypothesis \(\varphi\in C(\Omega)\cap C^{2}(\Omega_{-}\cup\Gamma)\cap C^{2}(\Omega_{+}\cup\Gamma)\) means that \(\varphi\) is continuous across \(\Gamma\), and each of the restrictions \(\varphi_{\pm}\) is \(C^{2}\)-regular up to \(\Gamma\). **Lemma 3.2**.: _Let \(\Omega=B_{1}\), \(\Omega_{\pm}=B_{1}\cap\{\pm x_{n}>0\}\), and \(\Gamma=B_{1}\cap\{x_{n}=0\}\). Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\), with \(H_{+}\) uniformly elliptic.
Then, for any \(\varphi\in C(\Omega)\cap C^{2}(\Omega_{-}\cup\Gamma)\cap C^{2}(\Omega_{+}\cup\Gamma)\) such that_ \[\min(H_{-}\varphi_{+}(0),\partial_{n}\varphi_{-}(0)-\partial_{n}\varphi_{+}(0))>0,\] _there exists a test function \(\psi\in C^{2}(B_{\rho})\), with \(\rho\in(0,1)\), that touches \(\varphi\) from above at the origin and satisfies_ \[\min(H_{-}\psi(0),H_{+}\psi(0))>0.\] We would like to emphasize the geometric interpretation of the inequality \(\partial_{n}\varphi_{-}(0)-\partial_{n}\varphi_{+}(0)>0\). Specifically, this condition implies that the graph of \(\varphi\) forms a concave angle around the origin and along \(\Gamma\). Proof.: Given \(\varepsilon\in(0,1)\) and \(\alpha>0\), consider the functions \[P_{+}(x) :=\varphi(0)+D\varphi_{+}(0)\cdot x+\tfrac{1}{2}x\cdot D^{2}\varphi_{+}(0)x+\tfrac{\varepsilon}{2}|x|^{2},\] \[P_{-}(x) :=\varphi(0)+D\varphi_{-}(0)\cdot x+\tfrac{1}{2}x\cdot D^{2}\varphi_{-}(0)x+\tfrac{\varepsilon}{2}|x|^{2},\] \[\psi(x) :=P_{+}(x)+\varepsilon x_{n}-\alpha x_{n}^{2}.\] Since \(\varphi_{+}\) and \(\varphi_{-}\) are twice differentiable at the origin, there exists some \(\rho_{0}\in(0,1)\) sufficiently small such that \[\begin{cases}\varphi_{+}\leq P_{+}\text{ in }B_{\rho_{0}}\cap\{x_{n}\geq 0\},\\ \varphi_{-}\leq P_{-}\text{ in }B_{\rho_{0}}\cap\{x_{n}\leq 0\}.\end{cases}\] For \(\varepsilon\in(0,1)\) sufficiently small we get by continuity and the degenerate ellipticity that \[H_{-}\psi(0)>0\qquad\text{ and }\qquad\partial_{n}\varphi_{-}(0)-\partial_{n}\varphi_{+}(0)-2\varepsilon>0.\] Take now \(\alpha\) sufficiently large such that, thanks to the uniform ellipticity, we can enforce \(H_{+}\psi(0)>0\). Hence the proposed \(\psi\) already satisfies the desired inequality, \[\min(H_{-}\psi(0),H_{+}\psi(0))>0.\] To finish the proof we need to show that \(\psi\) touches \(\varphi\) from above at the origin over some sufficiently small ball \(B_{\rho}\).
This follows if for some \(\rho\in(0,\rho_{0})\) it holds that \[\begin{cases}P_{+}\leq\psi\text{ in }B_{\rho}\cap\{x_{n}>0\},\\ P_{-}\leq\psi\text{ in }B_{\rho}\cap\{x_{n}<0\}.\end{cases}\] The inequality over \(B_{\rho}\cap\{x_{n}>0\}\) is equivalent to checking that \(\varepsilon x_{n}\geq\alpha x_{n}^{2}\). This is clearly the case once we take \(\rho\leq\varepsilon/\alpha\). To prove the inequality over \(B_{\rho}\cap\{x_{n}<0\}\), we need to show that \(\varepsilon x_{n}\geq P_{-}-P_{+}+\alpha x_{n}^{2}\) holds over the same domain. Here, we should keep in mind that \(P_{-}-P_{+}=x_{n}R\), where \(R(x)\) is defined as follows \[R(x):=(\partial_{n}\varphi_{-}(0)-\partial_{n}\varphi_{+}(0))+\sum_{j=1}^{n-1}(\partial_{nj}\varphi_{-}(0)-\partial_{nj}\varphi_{+}(0))x_{j}+\frac{1}{2}(\partial_{nn}\varphi_{-}(0)-\partial_{nn}\varphi_{+}(0))x_{n}.\] We use that \(\partial_{n}\varphi_{-}(0)-\partial_{n}\varphi_{+}(0)>2\varepsilon\) and bound the quadratic terms by \(C\rho|x_{n}|\), so that \(P_{-}-P_{+}+\alpha x_{n}^{2}\leq 2\varepsilon x_{n}+C\rho|x_{n}|\). This quantity is bounded by \(\varepsilon x_{n}\) if we choose \(\rho\leq\varepsilon/C\). As a consequence of the previous lemma we see that the transmission condition only takes into consideration the uniform ellipticity of \(H_{+}\) and ignores the rest of its structure. The following corollary gives an intermediate formulation, still in the viscosity sense spirit, between the problems (2.5) and (3.6). We skip the proof as it follows directly from the definitions. **Corollary 3.3**.: _Let \(\Omega=B_{1}\), \(\Omega_{\pm}=B_{1}\cap\{\pm x_{n}>0\}\), and \(\Gamma=B_{1}\cap\{x_{n}=0\}\). Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\), with \(H_{+}\) uniformly elliptic. Let \(u\in C(\Omega)\) be a viscosity sub-solution of (2.5).
Then for every_ \[\varphi\in C(B_{r})\cap C^{2}(B_{r}\cap(\Omega_{-}\cup\Gamma))\cap C^{2}(B_{r}\cap(\Omega_{+}\cup\Gamma))\] _that touches \(u\) from above at \(0\in\Gamma\) it holds that_ \[\min(H_{-}\varphi_{+}(0),\partial_{n}\varphi_{-}(0)-\partial_{n}\varphi_{+}(0))\leq 0.\] The following lemma makes the final connection with the strong problem (3.6). **Lemma 3.4**.: _Let \(\Omega=B_{1}\), \(\Omega_{\pm}=B_{1}\cap\{\pm x_{n}>0\}\), and \(\Gamma=B_{1}\cap\{x_{n}=0\}\). Let \(H_{\pm}\) be degenerate elliptic operators over \(\Omega_{\pm}\cup\Gamma\), with \(H_{+}\) uniformly elliptic. Let \(u\in C(\Omega)\) be such that for every_ \[\varphi\in C(B_{r}(x_{0}))\cap C^{2}(B_{r}(x_{0})\cap(\Omega_{-}\cup\Gamma))\cap C^{2}(B_{r}(x_{0})\cap(\Omega_{+}\cup\Gamma))\] _that touches \(u\) from above at \(x_{0}\in\Omega\) it holds that_ \[\begin{cases}H_{-}\varphi(x_{0})\leq 0\text{ if }x_{0}\in\Omega_{-},\\ H_{+}\varphi(x_{0})\leq 0\text{ if }x_{0}\in\Omega_{+},\\ \min(H_{-}\varphi_{+}(x_{0}),\partial_{n}\varphi_{-}(x_{0})-\partial_{n}\varphi_{+}(x_{0}))\leq 0\text{ if }x_{0}\in\Gamma.\end{cases}\] _Then \(u\) is a viscosity sub-solution of (3.6)._ Proof.: Let \(\varphi\in C^{2}(B_{r}(x_{0}))\) touch \(u\) from above at \(x_{0}\in\Omega\). The verification of the sub-solution condition reduces to checking the case \(x_{0}\in\Gamma\). Assume by contradiction that \(H_{-}\varphi>0\) over \(B_{\rho_{1}}(x_{0})\cap(\Omega_{-}\cup\Gamma)\) for some small \(\rho_{1}\in(0,r)\). Let \[\psi(x):=\varphi(x)+\varepsilon\eta(|x^{\prime}-x_{0}|/\rho_{1})x_{n}(x_{n}+\rho_{1}/2)\] where \(\eta\in C^{\infty}_{0}((-1/2,1/2))\) is even and non-negative, with \(\eta=1\) in \((-1/4,1/4)\). The value of \(\varepsilon>0\) is chosen sufficiently small such that \(H_{-}\psi>0\) over \(B_{\rho_{1}}(x_{0})\cap(\Omega_{-}\cup\Gamma)\).
By construction \(\psi\geq\varphi\) in \(\partial(B_{\rho_{1}}(x_{0})\cap\Omega_{-})=(\partial B_{\rho_{1}}(x_{0})\cap\{x_{n}<0\})\cup(B_{\rho_{1}}(x_{0})\cap\{x_{n}=0\})\), and \(\psi(x_{0})=\varphi(x_{0})\). So, either \(\psi\) continues touching \(u\) from above at \(x_{0}\), or there is some \(c>0\) such that \(\psi+c\) touches \(u\) from above at some point \(x^{*}\in B_{\rho_{1}}(x_{0})\cap\Omega_{-}\). However, this last alternative would contradict the main hypothesis on \(u\), because \(\psi+c\) is a test function for which \(H_{-}(\psi+c)(x^{*})\geq H_{-}\psi(x^{*})>0\) given that \(x^{*}\in B_{\rho_{1}}(x_{0})\cap\Omega_{-}\). Hence we must have that \(\psi\) continues touching \(u\) from above at \(x_{0}\). We construct in this way the following test function over \(B_{\rho_{1}/4}(x_{0})\) \[\phi(x):=\begin{cases}\psi(x)\text{ if }x\in B_{\rho_{1}/4}(x_{0})\cap\{x_{n}\leq 0\},\\ \varphi(x)\text{ if }x\in B_{\rho_{1}/4}(x_{0})\cap\{x_{n}>0\}.\end{cases}\] Given that \[\partial_{n}\phi_{-}(x_{0})=\partial_{n}\varphi(x_{0})+\varepsilon\rho_{1}/2>\partial_{n}\varphi(x_{0})=\partial_{n}\phi_{+}(x_{0}),\] we must have from the main hypothesis on \(u\) that \(H_{-}\phi_{+}(x_{0})\leq 0\). However, this goes against our initial assumption given that \(H_{-}\phi_{+}(x_{0})=H_{-}\varphi(x_{0})>0\). The previous lemmas also have analogues for super-solutions with similar proofs. As a consequence we get the proof of Theorem 3.1 in the case of a flat interface. The general scenario with \(C^{2}\) interfaces follows by a standard change of variables argument around each point of the interface where the equation is tested. The \(C^{2}\) regularity of the change of variables suffices to get operators with the same hypotheses as in the flat case. ## 4. Comparison Principle In this section we consider the open and bounded sets \(\Omega\subseteq\mathbb{R}^{n}\), \(\Omega_{\pm}=\Omega\cap\{\pm x_{n}>0\}\), and the flat interface \(\Gamma=\Omega\cap\{x_{n}=0\}\).
These are the hypotheses on the operators with respect to some fixed uniform ellipticity parameters \([\lambda,\Lambda]\subseteq(0,\infty)\):

* (H\({}_{-}\)) Let \(H_{-}\in C^{0,1}_{loc}(\mathbb{R}^{n}\times\mathbb{R}\times(\Omega_{-}\cup\Gamma))\) be a first-order quasi-convex operator with bounded sub-level sets.
* (H\({}_{+}\)) Let \(H_{+}\in C(\mathbb{R}^{n\times n}_{\text{sym}}\times\mathbb{R}^{n}\times\mathbb{R}\times(\Omega_{+}\cup\Gamma))\) be degenerate elliptic of the form \[H_{+}(M,p,z,x)=F(M,p,z)+f(z,x),\] where \(F\in C(\mathbb{R}^{n\times n}_{\text{sym}}\times\mathbb{R}^{n}\times\mathbb{R})\) is translation invariant and uniformly elliptic with respect to the interval \([\lambda,\Lambda]\); and \(f\in C(\mathbb{R}\times(\Omega_{+}\cup\Gamma))\) is increasing in the \(z\) variable.

Our hypotheses on the operators include a large class of examples; in particular, we can treat the Poisson and eikonal equations which appear in the introduction. However, our method of proof cannot say much about diffusions that may depend on the position (e.g. \(H_{+}(M,x)=-\operatorname{tr}(A(x)M)\)), not even in the case of smooth coefficients. Notice as well that the hypotheses on each operator guarantee that each one separately enjoys a comparison principle [16, 12]. Finally, we consider the comparison principle for two equations separated by some gap \(\eta\geq 0\). For \(u,-v\in C(\overline{\Omega})\cap C^{0,1}(\Omega_{-}\cup\Gamma)\), we have the following assumptions:

* (Sub) The function \(u\) is a viscosity sub-solution of \[\begin{cases}H_{-}u+\eta\leq 0\text{ in }\Omega_{-}\cup\Gamma,\\ H_{+}u+\eta\leq 0\text{ in }\Omega_{+}.\end{cases}\]
* (Sup) The function \(v\) is a viscosity super-solution of \[\begin{cases}H_{-}v\geq 0\text{ in }\Omega_{-}\cup\Gamma,\\ H_{+}v\geq 0\text{ in }\Omega_{+}.\end{cases}\]

**Theorem 4.1**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\) be open and bounded, \(\Omega_{\pm}=\Omega\cap\{\pm x_{n}>0\}\), and \(\Gamma=\Omega\cap\{x_{n}=0\}\).
Let \(H_{\pm}\) be degenerate elliptic operators that satisfy (H\({}_{-}\)) and (H\({}_{+}\)) respectively. Let \(u,v\in C(\overline{\Omega})\cap C^{0,1}(\Omega_{-})\) satisfy (Sub) and (Sup) respectively with \(\eta>0\). Then_ \[u\leq v\text{ on }\partial\Omega\qquad\Rightarrow\qquad u\leq v\text{ in }\Omega.\] Under some additional conditions it is possible to remove the gap between the equations. For instance:

* (Mz) For some \(\lambda>0\), \[h\geq 0\qquad\Rightarrow\qquad H_{\pm}(p,z+h,x)\geq H_{\pm}(p,z,x)+\lambda h.\]
* (Mp) For some \(\lambda>0\), and \(e\in\partial B_{1}\), \[h\geq 0\qquad\Rightarrow\qquad H_{\pm}(p+he,z,x)\geq H_{\pm}(p,z,x)+\lambda h.\]
* (C) \(H_{\pm}\) are convex operators independent of \(z\), and there exist \(\beta\in C^{2}(\overline{\Omega})\) and \(\bar{\eta}>0\) such that \[\begin{cases}H_{-}\beta+\bar{\eta}\leq 0\text{ in }\Omega_{-}\cup\Gamma,\\ H_{+}\beta+\bar{\eta}\leq 0\text{ in }\Omega_{+}.\end{cases}\]

**Remark 4.2**.: _The first case is usually applied to operators of the form \(H_{-}(p,z,x)=\lambda z+H(p,x)\) appearing in optimal control problems with geometric discounting. The second case can be applied to show a comparison principle in time-evolving problems, \(e\) being the time direction. Finally, the last case applies to our model problem with the eikonal operator \(H_{-}(p)=|p|-1\) and \(H_{+}(M)=\frac{1}{2}(-\operatorname{tr}M)-1\) by taking \(\beta=0\) and \(\bar{\eta}=1\)._ **Corollary 4.3**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\) be open and bounded, \(\Omega_{\pm}=\Omega\cap\{\pm x_{n}>0\}\), and \(\Gamma=\Omega\cap\{x_{n}=0\}\). Let \(H_{\pm}\) be degenerate elliptic operators that satisfy (H\({}_{-}\)) and (H\({}_{+}\)) respectively and at least one of the hypotheses (Mz), (Mp), or (C). Let \(u,v\in C(\overline{\Omega})\cap C^{0,1}(\Omega_{-})\) satisfy (Sub) and (Sup) respectively with \(\eta=0\).
Then_ \[u\leq v\text{ on }\partial\Omega\qquad\Rightarrow\qquad u\leq v\text{ in }\Omega.\] In any of the three cases the idea is to approximate the sub-solution with a limiting sequence that satisfies the hypotheses of Theorem 4.1 with a positive gap. Let us illustrate the last one, hypothesis (C), pertinent to the optimal control problem discussed in the introduction. Given that \(H_{\pm}\) are independent of \(z\) we may translate the barrier and assume that \(\beta\leq 0\) on \(\partial\Omega\). Assume now that \(u,v\in C(\overline{\Omega})\cap C^{0,1}(\Omega_{-})\) satisfy (Sub) and (Sup) with \(\eta=0\), and \(u\leq v\) on \(\partial\Omega\). Then \(u_{t}=(1-t)u+t\beta\) and \(v_{t}=v\) satisfy (Sub) and (Sup) with \(\eta=t\bar{\eta}\) (\(t\in(0,1)\)), and still \(u_{t}\leq v\) on \(\partial\Omega\). Hence \(u_{t}\leq v_{t}\) in \(\Omega\) and the desired conclusion for \(u\) and \(v\) is recovered in the limit as \(t\to 0^{+}\). **Corollary 4.4**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\) be open and bounded, \(\Omega_{\pm}=\Omega\cap\{\pm x_{n}>0\}\), and \(\Gamma=\Omega\cap\{x_{n}=0\}\). Assume that \(\Omega_{\pm}\) have uniformly Lipschitz regular boundaries. Let \(H_{\pm}\) be degenerate elliptic operators that satisfy (H\({}_{-}\)) and (H\({}_{+}\)) respectively and at least one of the hypotheses (Mz), (Mp), or (C)._ _Given a sub-solution \(g\in C(\overline{\Omega})\) of (2.5), there exists a unique viscosity solution of (2.5) in \(\Omega\) taking the boundary value \(g\) on \(\partial\Omega\)._ The proof of Theorem 4.1 goes by contradiction; here is a summary of the steps involved: Assume that \(u\leq v\) over the boundary \(\partial\Omega\) but that, nevertheless, \(\{u>v\}\) is non-empty.
We use the hypotheses on the operators and the regularity of the solutions to replace the solutions by some inf/sup regularizations (along directions parallel to the interface) preserving all the other hypotheses and gaining semi-convexity along the interface. By translating \(v\) upwards (still denoted by \(v\)) we can then assume that \(v\) touches \(u\) from above at some point \(x_{0}\in\Gamma\) (the other cases being clearly impossible by the classical comparison principles). At this point we also replace \(u\) and \(v\) over \(\Omega_{+}\) with exact solutions of the uniformly elliptic equation. Finally, we observe that over \(\Gamma\) the solutions are trapped between two paraboloids; from this control and the equations on both sides we can extract further regularity that finally allows us to reproduce the argument in Section 1.3.3 for the one-dimensional case. The boundary estimates are discussed in Section 4.2. ### Inf/sup convolutions Here is a quick review of the inf/sup convolutions and their properties. The main difference with respect to the standard construction is that we only perform the regularization in directions parallel to \(\{x_{n}=0\}=\mathbb{R}^{n-1}\).
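Before the formal definition, a toy computation (ours, not from the text) shows the effect of this tangential regularization on a concave kink. Take \(n=2\) and \(u(x_{1},x_{2})=-|x_{1}|\); the sup-convolution in the \(x_{1}\) direction can be computed explicitly.

```latex
% Toy computation (ours): tangential sup-convolution of u(x) = -|x_1|,
% obtained by maximizing y_1 -> -|y_1| - (1/(2 eps))(y_1 - x_1)^2.
\[
  u^{\varepsilon}(x)
  = \sup_{y_{1}\in\mathbb{R}}\Big(-|y_{1}|-\tfrac{1}{2\varepsilon}(y_{1}-x_{1})^{2}\Big)
  = \begin{cases}
      -\tfrac{1}{2\varepsilon}x_{1}^{2}
        & \text{if } |x_{1}|\le\varepsilon
          \quad (\text{maximum at } y_{1}=0),\\[3pt]
      \tfrac{\varepsilon}{2}-|x_{1}|
        & \text{if } |x_{1}|\ge\varepsilon
          \quad (\text{maximum at } y_{1}=x_{1}-\varepsilon\operatorname{sgn}x_{1}).
    \end{cases}
\]
% Then u^{\varepsilon}(x) + (1/(2\varepsilon))|x_1|^2
% = (1/(2\varepsilon))(|x_1| - \varepsilon)_+^2 is convex (the
% semi-convexity property of Lemma 4.5 below), u^{\varepsilon} \ge u,
% and u^{\varepsilon} decreases to u uniformly as \varepsilon \searrow 0.
```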
Given \(\Omega\subseteq\mathbb{R}^{n}\) and \(r>0\) we denote \[\Omega^{r}:=\{x\in\Omega\ |\ \overline{B_{r}(x)}\subseteq\Omega\}.\] **Definition 4.1**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\), \(u\in C(\Omega)\cap L^{\infty}(\Omega)\), \(\varepsilon\in(0,1)\), \(M=\|u\|_{L^{\infty}(\Omega)}\), and \(r=2\sqrt{M\varepsilon}\)._ _We define the sup-convolution \(u^{\varepsilon}:\Omega^{r}\to\mathbb{R}\) such that_ \[u^{\varepsilon}(x):=\sup\{u(y^{\prime},x_{n})-\tfrac{1}{2\varepsilon}|y^{\prime}-x^{\prime}|^{2}\ |\ (y^{\prime},x_{n})\in\Omega\}.\] _We define the inf-convolution \(u_{\varepsilon}:\Omega^{r}\to\mathbb{R}\) such that_ \[u_{\varepsilon}(x):=\inf\{u(y^{\prime},x_{n})+\tfrac{1}{2\varepsilon}|y^{\prime}-x^{\prime}|^{2}\ |\ (y^{\prime},x_{n})\in\Omega\}.\] By choosing \(x\in\Omega^{r}\), with \(r=2\sqrt{M\varepsilon}\) and \(M=\|u\|_{L^{\infty}(\Omega)}\), we guarantee that the supremum and infimum above are actually a maximum and a minimum, attained at some point over the disc \(\{(y^{\prime},x_{n})\ |\ |y^{\prime}-x^{\prime}|\leq r\}\). The proof of the following lemma is a standard adaptation of the ideas in [12, Chapter 5]. As usual, there is an analogous version for super-solutions and their corresponding inf-convolution. **Lemma 4.5**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\), \(u\in C(\Omega)\cap L^{\infty}(\Omega)\), \(\varepsilon\in(0,1)\), \(M=\|u\|_{L^{\infty}(\Omega)}\), and \(r=2\sqrt{M\varepsilon}\). In the last two properties we will also denote \(\Omega^{r}_{\pm}=\Omega^{r}\cap\{\pm x_{n}>0\}\), and \(\Gamma^{r}=\Omega^{r}\cap\{x_{n}=0\}\).
The following properties hold:_

* _Semi-convexity:_ _For each_ \(h\in\mathbb{R}\) _such that_ \(\Omega^{r}\cap\{x_{n}=h\}\neq\emptyset\)_, the function_ \(x^{\prime}\in\{x^{\prime}\in\mathbb{R}^{n-1}\ |\ (x^{\prime},h)\in\Omega^{r}\}\mapsto u^{\varepsilon}(x^{\prime},h)+\frac{1}{2\varepsilon}|x^{\prime}|^{2}\) _is convex._
* _Approximation:_ _The functions_ \(u^{\varepsilon}\) _decrease to_ \(u\) _locally uniformly as_ \(\varepsilon\searrow 0\)_._
* _Lipschitz solutions for first-order Lipschitz equations:_ _Given_ \(H_{-}\in C^{0,1}_{loc}(\mathbb{R}^{n}\times\mathbb{R}\times(\Omega_{-}\cup\Gamma))\) _and_ \(u\in C^{0,1}(\Omega_{-}\cup\Gamma)\)_, there exists a constant_ \(C\) _such that if_ \(u\) _is a viscosity sub-solution of the problem_ \[H_{-}u\leq 0\text{ in }\Omega_{-}\cup\Gamma,\] _we also get that_ \(u^{\varepsilon}\) _is a sub-solution of_ \[H_{-}u-C\varepsilon\leq 0\text{ in }\Omega^{r}_{-}\cup\Gamma^{r}.\]
* _Solutions for translation invariant equations:_ _Given_ \(H_{+}\) _such that_ \[H_{+}(M,p,z,x)=F(M,p,z)+f(z,x)\] _with_ \(F\in C(\mathbb{R}^{n\times n}_{sym}\times\mathbb{R}^{n}\times\mathbb{R})\) _translation invariant and_ \(f\in C(\mathbb{R}\times\Omega_{+})\)_, both degenerate elliptic, and given_ \(u\in C(\Omega_{+}\cup\Gamma)\)_, there exists a modulus of continuity_ \(\omega\) _such that if_ \(u\) _is a viscosity sub-solution of the problem_ \[H_{+}u\leq 0\text{ in }\Omega_{+},\] _we also get that_ \(u^{\varepsilon}\) _is a sub-solution of_ \[H_{+}u-\omega(\varepsilon)\leq 0\text{ in }\Omega^{r}_{+}.\]

### Regularity estimates The regularity results in this section use as hypothesis the semi-convexity that results from the inf/sup-convolutions. #### 4.2.1. Uniformly elliptic regime The following lemma is closely related to the \(C^{1,\alpha}\) boundary regularity estimate due to Krylov [33]. Its proof is part of an argument due to Caffarelli [31, Lemma 4.31] and [22, proof of Theorem 9.31] using the Harnack inequality.
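For the reader's convenience we record the normalization of the extremal Pucci operators used in the following statements. This convention is inferred by us from the computations in the text, namely the identity \(\mathcal{M}^{-}_{\lambda,\Lambda}(-\alpha(e_{n}\otimes e_{n}))=\alpha\lambda\) used after Theorem 3.1 and the expanded supremum appearing in the proof of Lemma 4.6; it is the minus-trace (degenerate elliptic) analogue of the usual definition.

```latex
% Pucci extremal operators in the minus-trace normalization, inferred
% from the computations in the text (not stated there explicitly):
\[
  \mathcal{M}^{-}_{\lambda,\Lambda}(N)
    := \inf_{\lambda I\le A\le\Lambda I}\big(-\operatorname{tr}(AN)\big)
    = \lambda\operatorname{tr}N^{-}-\Lambda\operatorname{tr}N^{+},
  \qquad
  \mathcal{M}^{+}_{\lambda,\Lambda}(N)
    := \sup_{\lambda I\le A\le\Lambda I}\big(-\operatorname{tr}(AN)\big)
    = \Lambda\operatorname{tr}N^{-}-\lambda\operatorname{tr}N^{+},
\]
% where N = N^+ - N^- with N^{\pm} \ge 0 is the decomposition into
% positive and negative parts. For example, N = -\alpha(e_n \otimes e_n)
% with \alpha > 0 gives \mathcal{M}^-_{\lambda,\Lambda}(N) = \alpha\lambda,
% matching the coercivity inequality of Section 3.
```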
**Lemma 4.6**.: _Let \([\lambda,\Lambda]\subseteq(0,\infty)\), \(\varepsilon\in(0,1)\), \(p^{\prime}\in\mathbb{R}^{n-1}\) with \(|p^{\prime}|\leq\Lambda\), and let \(u\in C(\overline{B_{1}\cap\{x_{n}>0\}})\) satisfy_ \[\begin{cases}\mathcal{M}^{+}_{\lambda,\Lambda}(D^{2}u)+\Lambda|Du|+\Lambda u_ {+}+\Lambda\geq 0\text{ in }B_{1}\cap\{x_{n}>0\},\\ \mathcal{M}^{-}_{\lambda,\Lambda}(D^{2}u)-\Lambda|Du|-\Lambda u_{-}-\Lambda \leq 0\text{ in }B_{1}\cap\{x_{n}>0\},\\ u\geq-\frac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}\text{ on }B_{1}\cap\{x_{n}=0\}.\end{cases}\] _Then one of the following alternatives must be true:_ 1. _For every_ \(p_{n}\in\mathbb{R}\) _there exists a sufficiently small radius_ \(r\in(0,1)\) _such that_ \[u\geq-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}+p_{n}x_ {n}\text{ in }B_{r}\cap\{x_{n}\geq 0\}.\] 2. _There exists_ \(p=(p^{\prime},p_{n})\in\mathbb{R}^{n}\) _such that the following holds: For every_ \(\eta>0\) _there is a radius_ \(r\in(0,1)\) _such that_ \[\begin{cases}u-(-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p\cdot x)\geq-\eta x _{n}\text{ in }B_{r}\cap\{x_{n}\geq 0\},\\ u-(-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p\cdot x)\leq\eta x_{n}\text{ in }B_{r}\cap\{x_{n}>\eta|x|\}.\end{cases}\] Proof.: Let \(\phi\in C(\overline{B_{1}\cap\{x_{n}>0\}})\) be defined by the boundary value problem \[\begin{cases}\mathcal{M}^{+}_{\lambda,\Lambda}(D^{2}\phi)+\Lambda|D\phi|+ \Lambda\phi_{+}=-\Lambda\text{ in }B_{1}\cap\{x_{n}>0\},\\ \phi=-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}\text{ on }B_{1}\cap\{x_{n}=0\},\\ \phi=\min_{\partial B_{1}\cap\{x_{n}>0\}}u\text{ on }\partial B_{1}\cap\{x_{n}>0\}. \end{cases}\] Global estimates for convex operators imply that this problem has a classical solution with \(\phi\in C^{1}(B_{1/2}\cap\{x_{n}\geq 0\})\), [33]. 
This barrier shows that for \(r\in(0,1/2)\) the following construction is well defined as a finite number \[\rho(r):=\sup\{\rho\in\mathbb{R}\ |\ u(x)\geq-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}+\rho x_{n}\text{ for all }x\in B_{r}\cap\{x_{n}>0\}\}.\] The first alternative in the lemma holds if \(\lim_{r\to 0^{+}}\rho(r)=+\infty\). So let us assume that \(p_{n}:=\lim_{r\to 0^{+}}\rho(r)<+\infty\). By construction we automatically have the lower bound, and all we need to show is the upper bound in non-tangential domains of the form \(B_{r}\cap\{x_{n}>\eta|x|\}\). Assume by contradiction that there is some \(\eta\in(0,1/2)\) and a sequence \(\{y_{k}\}\) converging to the origin with \((y_{k})_{n}>\eta|y_{k}|\) such that \[u(y_{k})>-\tfrac{1}{2\varepsilon}|y^{\prime}_{k}|^{2}+p\cdot y_{k}+\eta(y_{k})_{n}. \tag{4.7}\] Let \(r_{k}:=|y_{k}|\in(0,1/4)\) and consider the following construction defined in \(B_{2}\cap\{x_{n}>0\}\) \[v_{k}(x):=r_{k}^{-1}u(r_{k}x)-(-\tfrac{r_{k}}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}+\rho(2r_{k})x_{n}).\] In other words, \(v_{k}\) is a Lipschitz rescaling of the non-negative difference between \(u\) and the approximating polynomial over \(B_{2r_{k}}\cap\{x_{n}>0\}\) \[u(x)=-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}+\rho(2r_{k})x_{n}+r_{k}v_{k}(r_{k}^{-1}x).\] We will use that the function \(v_{k}\) also satisfies in the viscosity sense \[\begin{cases}\mathcal{M}^{+}_{\lambda,\Lambda}(D^{2}v)+\Lambda r_{k}|Dv|+\Lambda r_{k}^{2}v_{+}+Cr_{k}\geq 0\text{ in }B_{2}\cap\{x_{n}>0\},\\ \mathcal{M}^{-}_{\lambda,\Lambda}(D^{2}v)-\Lambda r_{k}|Dv|-\Lambda r_{k}^{2}v_{-}-Cr_{k}\leq 0\text{ in }B_{2}\cap\{x_{n}>0\},\end{cases}\] for some constant \(C\) independent of \(k\). The verification of this equation in the viscosity sense is straightforward from the equation for \(u\).
Given (4.7), we get that for \(v_{k}\) the following must hold \[\sup_{K}v_{k}\geq\eta^{2},\] where \(K:=\partial B_{1}\cap\{x_{n}\geq\eta|x|\}\) is a compact subset of \(B_{2}\cap\{x_{n}>0\}\). Let \(\mu:=c_{1}\lambda/\Lambda\) for some \(c_{1}>0\) small to be fixed as a universal constant. Thanks to the Harnack inequality we get that \[\inf_{B_{1/2}^{n-1}\times\{x_{n}=\mu\}}v_{k}\geq c-Cr_{k},\] where the constant \(c=c(\mu)>0\) depends on \(\mu\). By taking \(k\) sufficiently large we get that \(\inf_{B_{1/2}^{n-1}\times\{x_{n}=\mu\}}v_{k}\geq c_{0}>0\), for \(c_{0}=c(\mu)/2\) independent of \(k\). The goal is to propagate this lower bound towards \(\{x_{n}=0\}\) with the use of a lower barrier for \(w_{k}(x):=v_{k}(x)/x_{n}\). This quotient satisfies the following degenerate elliptic equation in the viscosity sense (here, for \(v,w\in\mathbb{R}^{n}\), we denote \(v\odot w:=\frac{1}{2}(v\otimes w+w\otimes v)\in\mathbb{R}^{n\times n}_{\rm sym}\)) \[\mathcal{M}^{+}_{\lambda,\Lambda}(x_{n}D^{2}w+2e_{n}\odot Dw)+\Lambda r_{k}|x_{n}Dw+we_{n}|+\Lambda r_{k}^{2}x_{n}w+Cr_{k}\geq 0\text{ in }B_{1}^{n-1}\times[0,1).\] Let \[\phi(x):=\mu+x_{n}-8\mu|x^{\prime}|^{2}.\] Then \(\phi\leq 0\) on \((\partial B_{1/2}^{n-1}\times(0,\mu])\cup(B_{1/2}^{n-1}\times\{x_{n}=0\})\) and \[\mathcal{M}^{+}_{\lambda,\Lambda}(x_{n}D^{2}\phi+2e_{n}\odot D\phi)(x) =\sup_{\lambda I\leq a\leq\Lambda I}\left(-x_{n}\sum_{i,j=1}^{n}a_{ij}\partial_{ij}\phi(x)-2\sum_{i=1}^{n}a_{in}\partial_{i}\phi(x)\right),\] \[=\sup_{\lambda I\leq a\leq\Lambda I}\left(16\mu\sum_{i=1}^{n-1}(a_{ii}x_{n}+2a_{in}x_{i})-2a_{nn}\right),\] \[\leq C\mu\Lambda-2\lambda,\] \[\leq-\lambda.\] The last inequality can be achieved by taking \(c_{1}\), and hence \(\mu=c_{1}\lambda/\Lambda\), sufficiently small.
The remaining terms can be treated as lower-order perturbations \[r_{k}|x_{n}D\phi(x)+\phi(x)e_{n}|+r_{k}^{2}x_{n}\phi(x)\leq C\mu\leq\lambda/(2\Lambda).\] Once again we require \(c_{1}\) to be perhaps even smaller than before for the last inequality to hold. Hence, for \(k\gg 1\) \[\mathcal{M}^{+}_{\lambda,\Lambda}(x_{n}D^{2}\phi+2e_{n}\odot D\phi)+\Lambda r_{k}|x_{n}D\phi+\phi e_{n}|+\Lambda r_{k}^{2}x_{n}\phi\leq-\lambda/2\leq-Cr_{k}\text{ in }B_{1/2}^{n-1}\times(0,\mu].\] By comparison, \(w_{k}\geq c_{0}\phi/(2\mu^{2})\) in \(B_{1/2}^{n-1}\times(0,\mu]\). We finally conclude by noticing that, for \(\mu\) sufficiently small, \(\phi\) is bounded away from zero in the cylinder \(B_{\mu}^{n-1}\times(0,\mu]\), hence for some \(\theta>0\) independent of \(k\) \[\inf_{B_{\mu}^{n-1}\times(0,\mu]}v_{k}/x_{n}\geq\theta.\] Finally, when we transfer this information back to \(u\) we get that \[u(x)\geq-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}+(\rho(2r_{k})+\theta)x_{n}\text{ in }B_{\mu r_{k}}^{n-1}\times[0,\mu r_{k}).\] This means that \(\rho(\mu r_{k})-\rho(2r_{k})\geq\theta>0\), which contradicts the convergence of \(\rho(r)\) to a finite limit as \(r\to 0^{+}\). In the following corollary we assume that the trace of the solution over the boundary gets trapped between two paraboloids. The conclusion is a bit stronger than differentiability.
**Corollary 4.7**.: _Let \([\lambda,\Lambda]\subseteq(0,\infty)\), \(\varepsilon\in(0,1)\), \(p^{\prime}\in\mathbb{R}^{n-1}\) with \(|p^{\prime}|\leq\Lambda\), and let \(u\in C(\overline{B_{1}\cap\{x_{n}>0\}})\) satisfy_ \[\begin{cases}\mathcal{M}^{+}_{\lambda,\Lambda}(D^{2}u)+\Lambda|Du|+\Lambda u_{+}\geq-\Lambda\text{ in }B_{1}\cap\{x_{n}>0\},\\ \mathcal{M}^{-}_{\lambda,\Lambda}(D^{2}u)-\Lambda|Du|-\Lambda u_{-}\leq\Lambda\text{ in }B_{1}\cap\{x_{n}>0\},\\ -\frac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}\leq u\leq\frac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}\text{ on }B_{1}\cap\{x_{n}=0\}.\end{cases}\] _Then \(u\) is differentiable at the origin with \(Du(0)=(p^{\prime},p_{n})\) and_ \[\liminf_{\{x_{n}>0\}\ni x\to 0}\frac{u(x)-(-\frac{1}{2\varepsilon}|x^{\prime}|^{2}+Du(0)\cdot x)}{x_{n}}=\limsup_{\{x_{n}>0\}\ni x\to 0}\frac{u(x)-(\frac{1}{2\varepsilon}|x^{\prime}|^{2}+Du(0)\cdot x)}{x_{n}}=0.\] Notice that we could also state the limits in the conclusion by saying that \[u(x) \geq Du(0)\cdot x-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+o(x_{n}),\] \[u(x) \leq Du(0)\cdot x+\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+o(x_{n}).\] This is a finer estimate than the differentiability of \(u\) at zero given by \(u(x)=Du(0)\cdot x+o(|x|)\). #### 4.2.2. Eikonal regime In this section we present some regularity estimates for first-order equations with quasi-convex operators \(H\in C(\mathbb{R}^{n})\). We measure the regularity in terms of the modulus of continuity given by the one-homogeneous functions \[\phi^{\pm}(x):=\pm\max_{p\in\{H\leq 0\}}(\pm p\cdot x).\] The first few results (up to Corollary 4.11) are known in the literature; however, the hypotheses seem to be a bit different: in [8] and also in [45, Section 2] the operators are assumed to be convex. We decided to include complete proofs in the quasi-convex setting for the results we will be needing ahead.
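Before the first lemma, a one-dimensional example (ours, not from the text) may help to read \(\phi^{\pm}\): the sub-level set need not be symmetric, in which case \(\phi^{+}\) and \(\phi^{-}\) encode different slope bounds in different directions.

```latex
% Example (ours): in dimension one, the quasi-convex operator
% H(p) = max(p - 2, -p - 1) has the bounded convex sub-level set
% {H <= 0} = [-1, 2], hence
\[
  \phi^{+}(x) = \max_{p\in[-1,2]} p\,x =
    \begin{cases} 2x & \text{if } x\ge 0,\\ -x & \text{if } x<0, \end{cases}
  \qquad
  \phi^{-}(x) = \min_{p\in[-1,2]} p\,x =
    \begin{cases} -x & \text{if } x\ge 0,\\ 2x & \text{if } x<0. \end{cases}
\]
% For x > y, the bound of Corollary 4.9 below then reads
%   -(x - y) <= u(x) - u(y) <= 2(x - y),
% i.e. a sub-solution can increase with slope at most 2 and decrease
% with slope at most 1: an asymmetric Lipschitz estimate.
```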
**Lemma 4.8**.: _Let \(H\in C(\mathbb{R}^{n})\) be a first-order operator with bounded sub-level sets and let \(u\in C(B_{r})\) be a viscosity sub-solution of \(Hu\leq 0\) in \(B_{r}\). Then \(u\leq u(0)+\phi^{+}\) in \(B_{r}\)._ Proof.: Given \(R\gg 1\), consider \[\phi_{R}(x):=\max_{p\in Z_{R}}p\cdot x\qquad\text{ where }\qquad Z_{R}:= \bigcap_{\overline{B_{R}(p)}\supseteq\{H\leq 0\}}\overline{B_{R}(p)}.\] Notice then that \(\phi_{R}\searrow\phi^{+}\) as \(R\to\infty\). Indeed, the inequality \(\phi_{R}\geq\phi^{+}\) and the monotonicity of the sequence follow because \(Z_{R}\) decreases to \(\operatorname{conv}(\{H\leq 0\})\supseteq\{H\leq 0\}\), the convex hull of \(\{H\leq 0\}\). Let us fix \(x^{*}\in\mathbb{R}^{n}\) and let \(p_{R}\in Z_{R}\) such that \(\phi_{R}(x^{*})=p_{R}\cdot x^{*}\). By compactness we get that for some sequence \(R_{i}\to\infty\) the points \(p_{R_{i}}\to p^{*}\in\operatorname{conv}(\{H\leq 0\})\). Given that \(p^{*}=\lambda p_{1}+(1-\lambda)p_{2}\) for some \(p_{1},p_{2}\in\{H\leq 0\}\) and \(\lambda\in[0,1]\), we get that \[\lim_{R\to\infty}\phi_{R}(x^{*})\geq\phi^{+}(x^{*})=\lambda\phi^{+}(x^{*})+(1- \lambda)\phi^{+}(x^{*})\geq(\lambda p_{1}+(1-\lambda)p_{2})\cdot x^{*}=\lim_{ R\to\infty}\phi_{R}(x^{*}).\] The advantage of this construction is that \(Z_{R}\) is strictly convex and then \(\phi_{R}\) is differentiable outside the origin. By noticing that \(\phi_{R}\) is also \(1\)-homogeneous we get the following identity for any \(x\in\mathbb{R}^{n}\setminus\{0\}\), \[\phi_{R}(x)=D\phi_{R}(x)\cdot x.\] To check that \(Z_{R}\) is strictly convex we consider \(p_{1}\neq p_{2}\in Z_{R}\), \(p_{1/2}:=(p_{1}+p_{2})/2\), and show that for \(r=R-\sqrt{R^{2}-|p_{2}-p_{1}|^{2}/4}\) it happens that \(B_{r}(p_{1/2})\subseteq Z_{R}\). For any \(\overline{B_{R}(p_{0})}\supseteq\{p_{1},p_{2}\}\) we clearly have that \(B_{r}(p_{1/2})\subseteq\overline{B_{R}(p_{0})}\). 
As this happens for any \(\overline{B_{R}(p_{0})}\supseteq\{H\leq 0\}\) we immediately conclude that \(B_{r}(p_{1/2})\subseteq Z_{R}\). To see that the convex function \(\phi_{R}\) is differentiable for any \(x^{*}\neq 0\) we notice that \(\phi_{R}(x^{*})=p^{*}\cdot x^{*}\) for a unique \(p^{*}\in Z_{R}\). Otherwise, \(\phi_{R}(x^{*})=p_{1}\cdot x^{*}=p_{2}\cdot x^{*}=p_{1/2}\cdot x^{*}\) for \(p_{1}\neq p_{2}\in Z_{R}\) and \(p_{1/2}=(p_{1}+p_{2})/2\). Then some \(p\in B_{r}(p_{1/2})\subseteq Z_{R}\) would improve the maximum of \(p\cdot x^{*}\), contradicting the definition of \(\phi_{R}(x^{*})\). To finally show that \(u(x)\leq u(0)+\phi_{R}(x)\) for \(R\gg 1\), assume by contradiction that \[m:=\sup_{x\in B_{r}}(u(x)-u(0)-\phi_{R}(x))>0.\] Hence, for some \(\eta>0\) the following function must touch \(u\) from above at some \(x^{*}\in B_{r}\setminus\{0\}\) \[\psi(x):=u(0)+\tfrac{m}{2}+\phi_{R}(x)+\tfrac{\eta}{r-|x|}.\] At this point we must notice that for any \(t>0\), \(H(D\phi_{R}(x^{*})+tx^{*})>0\). Indeed, if instead \(H(D\phi_{R}(x^{*})+tx^{*})\leq 0\) we get the contradiction \[\phi_{R}(x^{*})=\max_{p\in Z_{R}}p\cdot x^{*}\geq(D\phi_{R}(x^{*})+tx^{*})\cdot x^{*}>D\phi_{R}(x^{*})\cdot x^{*}=\phi_{R}(x^{*}).\] This observation implies that \(H(D\psi(x^{*}))\) is strictly positive, which is inconsistent with \(u\) being a sub-solution of the equation. By invoking a covering argument and reversing the roles of the points in the previous lemma we obtain the following corollary. **Corollary 4.9**.: _Let \(\Omega\subseteq\mathbb{R}^{n}\) be an open convex set. Let \(H\in C(\mathbb{R}^{n})\) be a first-order operator with bounded sub-level sets and let \(u\in C(\Omega)\) be a sub-solution of \(Hu\leq 0\) in \(\Omega\). Then for any \(x,y\in B_{r}(x_{0})\subseteq B_{2r}(x_{0})\subseteq\Omega\)_ \[\phi^{-}(x-y)\leq u(x)-u(y)\leq\phi^{+}(x-y).\] By Corollary 2.2, we deduce that \(\phi^{+}\) is a sub-solution of \(H\phi\leq 0\) in \(\mathbb{R}^{n}\).
Additionally, thanks to the following lemma, we can conclude that \(\phi^{-}\) is also a sub-solution of \(H\phi\leq 0\) in \(\mathbb{R}^{n}\) if \(\{H\leq 0\}\) is convex (or if we assume that \(H\) is quasi-convex). It is worth recalling that in the previous proof, we have shown that \(\phi^{\pm}\) are classical solutions of \(H\phi=0\) in \(\mathbb{R}^{n}\setminus\{0\}\) if \(\{H\leq 0\}\) is strictly convex. **Lemma 4.10**.: _Let \(H\in C(\mathbb{R}^{n})\) be a first-order quasi-convex operator with bounded sub-level sets. Let \(\Omega\subseteq\mathbb{R}^{n}\) be open, \(\{u_{k}\}\subseteq C^{1}(\Omega)\) be an equicontinuous sequence of functions satisfying \(Hu_{k}\leq 0\) in \(\Omega\), and assume that \(u(x):=\inf u_{k}(x)\) is finite valued. Then \(u\) is a viscosity sub-solution of \(Hu\leq 0\) in \(\Omega\)._ This result is also valid for Lipschitz solutions and appears for instance in [45, Corollary 2.36 and Exercise 30] as a consequence of the characterization of viscosity solutions for convex Hamiltonians by Barron and Jensen in [8]. Proof.: The function \(u\) is the locally uniform limit of the functions \(v_{k}:=\min(u_{1},\ldots,u_{k})\). Hence the result will follow by the stability of viscosity sub-solutions once we show that each \(v_{k}\) is a sub-solution of \(Hu=0\) in \(\Omega\). By an inductive argument we can further reduce the analysis to the case \(k=2\). Let \(\varphi\in C^{1}(B_{r}(x_{0}))\) be a test function touching \(v_{2}\) strictly from above at \(x_{0}\) with \(B_{r}(x_{0})\subseteq\Omega\). If \(u_{1}(x_{0})<u_{2}(x_{0})\) we get that \(\varphi\) touches \(u_{1}\) from above at \(x_{0}\) over a neighborhood of \(x_{0}\), so we conclude using that \(u_{1}\) is a viscosity sub-solution. The case \(u_{2}(x_{0})<u_{1}(x_{0})\) can be treated in the same way and then we are left with the alternative \(u_{1}(x_{0})=u_{2}(x_{0})\).
If \(p:=Du_{1}(x_{0})\neq q:=Du_{2}(x_{0})\), then \(u_{1}=u_{2}\) forms a \(C^{1}\) surface around \(x_{0}\) with normal given by \(Du_{1}-Du_{2}\neq 0\). We can then choose a system of coordinates centered at \(x_{0}\) such that \(p^{\prime}=q^{\prime}\) and \(p_{n}<q_{n}\). Consequently, the exterior normal of \(\{u_{1}>u_{2}\}\) at the origin is the vector \(e_{n}\). As \(\varphi\) touches \(\min(u_{1},u_{2})=u_{1}=u_{2}\) over \(\{u_{1}=u_{2}\}\) from above at \(x_{0}\), we have that the tangential derivatives must coincide \[(D\varphi(x_{0}))^{\prime}=p^{\prime}=q^{\prime}.\] On the other hand, for the normal derivatives we must also have that \(p_{n}\leq\partial_{n}\varphi(x_{0})\leq q_{n}\). For instance, \(p_{n}\leq\partial_{n}\varphi(x_{0})\) follows because \(\varphi-\varphi(x_{0})\geq u_{1}-u_{1}(x_{0})\) over \(\{u_{1}<u_{2}\}\), which has \(e_{n}\) as an interior normal vector at \(x_{0}\). Now we notice that for \(\lambda:=(\partial_{n}\varphi(x_{0})-p_{n})/(q_{n}-p_{n})\in[0,1]\) we get that by the convexity of \(\{H\leq 0\}\) and using that \(u_{1}\) and \(u_{2}\) are classical sub-solutions \[D\varphi(x_{0})=(1-\lambda)Du_{1}(x_{0})+\lambda Du_{2}(x_{0})\in\{H\leq 0\}.\] If instead, \(Du_{1}(x_{0})=Du_{2}(x_{0})\) we must also have \(D\varphi(x_{0})=Du_{1}(x_{0})=Du_{2}(x_{0})\in\{H\leq 0\}\). Otherwise, we have that \(\{\varphi<u_{1}\}\) and \(\{\varphi<u_{2}\}\) have the same exterior normal vector at \(x_{0}\), so that \(\{\varphi\geq u_{1}\}\cup\{\varphi\geq u_{2}\}\) cannot cover a neighborhood of \(x_{0}\). This contradicts \(\varphi\geq\min(u_{1},u_{2})\) around \(x_{0}\). **Corollary 4.11**.: _Let \(H\in C(\mathbb{R}^{n})\) be a first-order quasi-convex operator with bounded sub-level sets. Then \(\phi^{-}\) is a viscosity solution of the equation \(H\phi=0\) in \(\mathbb{R}^{n}\)._ The following lemmas are the boundary estimates we will use for the eikonal problem.
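Before turning to these boundary estimates, let us record a simple one-dimensional example (ours, only for illustration) showing that the quasi-convexity hypothesis in Lemma 4.10 cannot be dropped. Consider \[H(p)=\min(|p-2|,|p+2|)-1,\qquad\{H\leq 0\}=[-3,-1]\cup[1,3],\] which has bounded but non-convex sub-level sets. The functions \(u_{1}(x)=x\) and \(u_{2}(x)=-x\) are classical solutions of \(Hu=0\), yet \(u=\min(u_{1},u_{2})=-|x|\) is not a viscosity sub-solution: the test function \(\varphi\equiv 0\) touches \(u\) from above at the origin while \(H(\varphi^{\prime}(0))=H(0)=1>0\).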
The first result says that under certain conditions a paraboloid \(\varphi\) over \(B_{1}\cap\{x_{n}=0\}\) can be extended to a paraboloid with \(H(D\varphi(0))\leq 0\). **Lemma 4.12**.: _Let \(H\in C(\mathbb{R}^{n})\) and let \(u\in C^{0,1}(B_{1}^{n-1}\times(-1,0])\) be a viscosity sub-solution of \(Hu\leq 0\) in \(B_{1}^{n-1}\times(-1,0)\) such that for some \(p^{\prime}\in\mathbb{R}^{n-1}\) and \(\varepsilon>0\)_ \[u\leq u(0)+\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}\text{ on }B_{1}\cap\{x_{n}=0\}.\] _Then there exists \(p=(p^{\prime},p_{n})\in\mathbb{R}^{n}\) such that \(H(p)\leq 0\)._ Proof.: For \(L:=[u]_{C^{0,1}(B_{1}\cap\{x_{n}\leq 0\})}\) and \(r\in(0,1/2)\) small, we get that \[u\leq u(0)+\tfrac{r^{2}}{2\varepsilon}+|p^{\prime}|r+\sqrt{2}Lr\text{ in }B_{r}^{n-1}\times[-r,0].\] Then, by letting \(\rho:=r/\varepsilon+|p^{\prime}|+\sqrt{2}L\), we obtain that \[\varphi(x):=u(0)+\tfrac{1}{\varepsilon}|x^{\prime}|^{2}+p^{\prime}\cdot x^{\prime}-\rho x_{n}-\tfrac{r^{2}}{4\varepsilon}>u\text{ on }(\partial B_{r}^{n-1}\times[-r,0])\cup(B_{r}^{n-1}\times\{-r\}).\] Given that \(\varphi(0)<u(0)\) we must have that for some \(\eta>0\) the function \(\psi=\varphi-\eta x_{n}^{-1}\) touches \(u\) from above at some \(x_{r}\in B_{r}^{n-1}\times(-r,0)\). At such a contact point we have \[p_{r}:=D\psi(x_{r})=(\tfrac{2}{\varepsilon}x_{r}^{\prime}+p^{\prime},-\rho+\eta(x_{r})_{n}^{-2})\in B_{L}\] and \(H(\tfrac{2}{\varepsilon}x_{r}^{\prime}+p^{\prime},-\rho+\eta(x_{r})_{n}^{-2})\leq 0\). Then \(|p_{r}^{\prime}-p^{\prime}|\leq 2r/\varepsilon\to 0\) as \(r\to 0\) and by compactness there is an accumulation value such that \((p_{r})_{n}\to p_{n}\) for some sequence. We can now conclude the proof thanks to the continuity of \(H\).
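For illustration (our remark), in the model case \(H(p)=|p|-1\) the conclusion of Lemma 4.12 states that the tangential slope \(p^{\prime}\) in such a one-sided trace bound for a Lipschitz sub-solution must satisfy \(|p^{\prime}|\leq 1\), since \(H((p^{\prime},p_{n}))\leq 0\) amounts to \(|p^{\prime}|^{2}+p_{n}^{2}\leq 1\).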
To finish this section let us consider \(\eta>0\) and \[\phi_{1}^{+}(x):=\max_{p\in\{H+\eta\leq 0\}}p\cdot x.\] For any sub-solution \(u\) of \(Hu+\eta\leq 0\) in \(B_{1}\cap\{x_{n}<0\}\) with boundary data \(u\leq\varphi\) over \(B_{1}\cap\{x_{n}=0\}\), the lower envelope given by the Hopf-Lax formula \[\psi(x):=\inf_{y^{\prime}\in\mathbb{R}^{n-1}}\varphi(y^{\prime})+\phi_{1}^{+}( x-y^{\prime})\] is an upper bound for \(u\). This is a consequence of Corollary 4.9. The following lemma gives us a further bound from above for \(\psi\) whenever \(\varphi\) can be extended to a paraboloid with \(H(D\varphi(0))=0\). **Lemma 4.13**.: _Let \(H\in C(\mathbb{R}^{n})\) be a first-order quasi-convex operator with bounded sub-level sets. Let \(\eta>0\), and assume that there exists \(p_{1}=(p_{*}^{\prime},\rho_{1})\in\mathbb{R}^{n}\) such that \(H(p_{1})+\eta\leq 0\). Let \(p_{0}=(p_{*}^{\prime},\rho_{0})\in\mathbb{R}^{n}\) such that_ \[\rho_{0}=\min\{\rho\in\mathbb{R}\ |\ H((p_{*}^{\prime},\rho))\leq 0\}<\rho_{1}.\] _Let \(\varepsilon>0\) and_ \[\varphi(x):=\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p_{0}\cdot x,\qquad\psi(x) :=\inf_{y^{\prime}\in\mathbb{R}^{n-1}}\varphi(y^{\prime})+\phi_{1}^{+}(x-y^{ \prime}).\] _Then for some \(r>0\) sufficiently small \(\psi\leq\varphi\) in \(B_{r}\cap\{x_{n}\leq 0\}\)._ Proof.: The goal is to see that we can take \(r\) sufficiently small such that for any \(x\in B_{r}\cap\{x_{n}\leq 0\}\) there exists some \(y^{\prime}\in B_{1}\cap\{x_{n}=0\}\) for which \[\frac{1}{2\varepsilon}|y^{\prime}|^{2}+p_{*}^{\prime}\cdot y^{\prime}+\max_{p \in\{H+\eta\leq 0\}}p\cdot(x-y^{\prime})\leq\frac{1}{2\varepsilon}|x^{\prime}|^{2}+ p_{0}\cdot x.\] Let \(r_{0}\in(0,1)\) such that \(B_{2r_{0}}(p_{0})\cap\{H+\eta\leq 0\}=\emptyset\). By Hahn-Banach there exists some \(z\in\partial B_{1}\) such that \[\min_{p\in\{H+\eta\leq 0\}}z\cdot p\geq\max_{q\in B_{r_{0}}(p_{0})}z\cdot q+r_{0}.\] Notice that \(z_{n}>0\). 
Moreover, we can even give a strictly positive lower bound for \(z_{n}\). Using that \(p_{0}\in B_{r_{0}}(p_{0})\) and \(p_{1}\in\{H+\eta\leq 0\}\) are separated by the previous application of Hahn-Banach we obtain that \[z\cdot p_{1}=z^{\prime}\cdot p_{*}^{\prime}+z_{n}\rho_{1}\geq z\cdot p_{0}+r_{0}=z^{\prime}\cdot p_{*}^{\prime}+z_{n}\rho_{0}+r_{0},\] \[\Rightarrow z_{n}\geq r_{0}/(\rho_{1}-\rho_{0})>0.\] For any \(\lambda\in(0,\varepsilon r_{0})\) the vector \(\bar{z}=\lambda z\) satisfies \[\min_{p\in\{H+\eta\leq 0\}}\bar{z}\cdot p\geq\max_{q\in B_{r_{0}}(p_{0})}\bar{z}\cdot q+\tfrac{1}{\varepsilon}\bar{z}_{n}^{2}.\] If we take \(r\) smaller than \(\varepsilon r_{0}\min(r_{0}/(\rho_{1}-\rho_{0}),1/2)\) and \(x\in B_{r}\cap\{x_{n}\leq 0\}\), we get that \(\lambda:=-x_{n}/z_{n}\in(0,\varepsilon r_{0})\) so the previous estimate holds. Let \(y^{\prime}=x+\bar{z}=x-(x_{n}/z_{n})z\in B_{1}\cap\{x_{n}=0\}\) so that \(|y^{\prime}+x|=|\bar{z}+2x|\leq|\bar{z}|+2|x|<2\varepsilon r_{0}\) and then \[\min_{p\in\{H+\eta\leq 0\}}(y^{\prime}-x)\cdot p \geq\max_{q\in B_{r_{0}}(p_{0})}(y^{\prime}-x)\cdot q+\tfrac{1}{\varepsilon}x_{n}^{2},\] \[\geq(y^{\prime}-x)\cdot(\tfrac{1}{2\varepsilon}(y^{\prime}+x)+p_{0})+\tfrac{1}{2\varepsilon}x_{n}^{2},\] \[=\tfrac{1}{2\varepsilon}|y^{\prime}|^{2}-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p_{0}\cdot(y^{\prime}-x).\] This is the desired inequality, which concludes the proof.
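To fix ideas, here is a concrete instance of the lemma (our example): take \(H(p)=|p|-1\), \(p_{*}^{\prime}=0\), and \(\eta\in(0,1)\), so that \(\rho_{0}=-1\), \(p_{1}=(0,-(1-\eta))\) is admissible, and \(\phi_{1}^{+}(x)=(1-\eta)|x|\). Then \(\varphi(x)=\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}-x_{n}\) and, choosing \(y^{\prime}=x^{\prime}\) in the infimum, \[\psi(x)\leq\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+(1-\eta)|x_{n}|\leq\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}-x_{n}=\varphi(x)\qquad\text{ for }x_{n}\leq 0,\] so in this instance the conclusion \(\psi\leq\varphi\) actually holds on all of \(\{x_{n}\leq 0\}\).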
### Proof of the comparison principle

As a first step towards the proof of Theorem 4.1 we show that the comparison principle holds under the following assumptions for the solutions \(u,v\in C(B_{1}^{n-1}\times(-1,1))\cap C^{0,1}(B_{1}^{n-1}\times(-1,0])\) with respect to \(\varepsilon\in(0,1)\), \([\lambda,\Lambda]\subseteq(0,\infty)\), \(H\in C(\mathbb{R}^{n})\), and \(\eta>0\): * (C') The following functions are convex \[x^{\prime}\in B_{1}^{n-1}\mapsto u(x^{\prime},0)+\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}\quad\text{ and }\quad x^{\prime}\in B_{1}^{n-1}\mapsto-v(x^{\prime},0)+\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}.\] * (UE') Both of the functions \(u\) and \(v\) satisfy in the viscosity sense \[\begin{cases}\mathcal{M}_{\lambda,\Lambda}^{+}(D^{2}u)+\Lambda|Du|+\Lambda u_{+}+\Lambda\geq 0\text{ in }B_{1}^{n-1}\times(0,1),\\ \mathcal{M}_{\lambda,\Lambda}^{-}(D^{2}u)-\Lambda|Du|-\Lambda u_{-}-\Lambda\leq 0\text{ in }B_{1}^{n-1}\times(0,1).\end{cases}\] * (Sub') The function \(u\) satisfies in the viscosity sense \[Hu+\eta\leq 0\text{ in }B_{1}^{n-1}\times(-1,0].\] * (Sup') The function \(v\) satisfies in the viscosity sense \[Hv\geq 0\text{ in }B_{1}^{n-1}\times(-1,0].\] **Lemma 4.14**.: _Let \(\eta>0\) and let \(H\in C(\mathbb{R}^{n})\) be a first-order quasi-convex operator with bounded sub-level sets. Let \(\varepsilon\in(0,1)\), \([\lambda,\Lambda]\subseteq(0,\infty)\), and let \(u,v\in C(B_{1}^{n-1}\times(-1,1))\cap C^{0,1}(B_{1}^{n-1}\times(-1,0])\) satisfy (C'), (UE'), (Sub'), (Sup') above. Then \(u\) cannot touch \(v\) from below at the origin._ In the following proof we will use once again the notation \(u_{\pm}\) for the restriction of a given function to \(\Omega_{\pm}\cup\Gamma\). Proof.: Assume by contradiction that \(u\leq v\) with \(u(0)=v(0)=0\) (without loss of generality).
By the convexity hypothesis on \(u\) and \(v\) we get that for some \(p^{\prime}_{*}\in\mathbb{R}^{n-1}\) \[-\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}_{*}\cdot x^{\prime}\leq u(x^{\prime},0)\leq v(x^{\prime},0)\leq\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}_{*}\cdot x^{\prime}\text{ in }B_{1}^{n-1}. \tag{4.8}\] Given \(t\in[0,1]\) consider \[\rho_{t} :=\min\{\rho\in\mathbb{R}\ |\ H(p^{\prime}_{*},\rho)+\eta t\leq 0\},\] \[\phi_{t}^{\pm}(x) :=\pm\max_{p\in\{H+\eta t\leq 0\}}(\pm p\cdot x),\] \[\varphi_{t}^{\pm}(x) :=\pm\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}_{*}\cdot x^{\prime}+\rho_{t}x_{n}.\] Thanks to Lemma 4.12 we know that \(\rho_{t}\) is well defined as a finite number, and by the continuity of \(H\) \[0\leq t_{1}<t_{2}\leq 1\qquad\Rightarrow\qquad\rho_{t_{1}}<\rho_{t_{2}}\qquad\Rightarrow\qquad\begin{cases}\varphi_{t_{1}}^{\pm}>\varphi_{t_{2}}^{\pm}\text{ in }\{x_{n}<0\},\\ \varphi_{t_{1}}^{\pm}<\varphi_{t_{2}}^{\pm}\text{ in }\{x_{n}>0\}.\end{cases}\] The functions \(\phi_{t}^{\pm}\) also have a monotonicity property \[0\leq t_{1}<t_{2}\leq 1\qquad\Rightarrow\qquad\begin{cases}\phi_{t_{1}}^{-}<\phi_{t_{2}}^{-}\text{ in }\mathbb{R}^{n}\setminus\{0\},\\ \phi_{t_{1}}^{+}>\phi_{t_{2}}^{+}\text{ in }\mathbb{R}^{n}\setminus\{0\}.\end{cases}\] By Corollary 4.11 we have that \(\phi_{t}^{-}\) satisfies in the viscosity sense, \[H(D\phi_{t}^{-})+\eta t=0\text{ in }\mathbb{R}^{n}. \tag{4.9}\] Meanwhile, \(\phi_{t}^{+}\) satisfies instead \[H(D\phi_{t}^{+})+\eta t\leq 0\text{ in }\mathbb{R}^{n}.\] By Corollary 4.9 we obtain that for every \(x,y\in B_{1}\cap\{x_{n}\leq 0\}\) \[\phi_{1}^{-}(x-y)\leq u(x)-u(y)\leq\phi_{1}^{+}(x-y).
\tag{4.10}\] Finally we also point out that \(\varphi_{t}^{\pm}(x)=\pm\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}+p^{\prime}_{*}\cdot x^{\prime}\) on \(\{x_{n}=0\}\) such that \[\varphi_{t}^{-}\leq u\leq v\leq\varphi_{t}^{+}\text{ in }B_{1}^{n-1} \tag{4.11}\] and at the origin we get that \(H(D\varphi_{t}^{\pm}(0))=-\eta t\). **1.** Our first goal is to show that for some radius \(r\in(0,1)\) sufficiently small, \[\varphi_{1/2}^{-}\overset{?}{\leq}v\text{ in }B_{r}\cap\{x_{n}\leq 0\}. \tag{4.12}\] Let \(r\in(0,1)\) be a sufficiently small radius such that \(H(D\varphi_{1/2}^{-})\in(-\eta,0)\) in \(B_{r}\), which is possible because of continuity and \(H(D\varphi_{1/2}^{-}(0))=-\eta/2\). Consider now the function \[\psi:=\min(\varphi_{1/2}^{-},\phi_{0}^{-}+c),\qquad c:=\inf_{\partial B_{r}}(\phi_{1}^{-}-\phi_{0}^{-})>0.\] By construction and the use of (4.11) and (4.10) we have that \(\psi\leq v\) on \(\partial(B_{r}\cap\{x_{n}<0\})\). Over \(B_{r}\cap\{x_{n}=0\}\) we use that \(\psi\leq\varphi_{1/2}^{-}\leq v\). Over \(\partial B_{r}\cap\{x_{n}<0\}\) we use that \(u\leq v\). Moreover, \(\psi\) is also a sub-solution of the equation \(H\psi\leq 0\) in \(B_{r}^{n-1}\times(-r,0)\). This is a consequence of Lemma 4.10. By comparison, we conclude that \(\psi\leq v\) in \(B_{r}\cap\{x_{n}\leq 0\}\). Finally notice that by continuity \(\psi=\varphi_{1/2}^{-}\) in some neighborhood of the origin. With this we have shown that (4.12) must hold for some sufficiently small radius. **2.** Now we look at the other side, \(\{x_{n}\geq 0\}\). The new goal is to see that \[\partial_{n}v_{+}(0)\stackrel{{?}}{{\leq}}\rho_{1/2}.\] This derivative is well defined thanks to Corollary 4.7. If we assume by contradiction that the opposite inequality holds, then Corollary 4.7 and the previous lower bound for \(v_{-}\) would imply that the paraboloid \(\varphi_{1/2}^{-}\) touches \(v\) from below at the origin.
This contradicts \(v\) being a super-solution of \(Hv\geq 0\) given that \(H(D\varphi_{1/2}^{-}(0))=-\eta/2<0\). **3.** Let us show now that for some radius \(r\in(0,1)\) \[\varphi_{3/4}^{+}\stackrel{{?}}{{\geq}}u\text{ in }B_{r}\cap\{x_{n}\geq 0\}. \tag{4.13}\] Corollary 4.7 applies to \(u_{+}\), so we get that \(\partial_{n}u_{+}(0)\) is well defined. From the contact of \(v_{+}\) and \(u_{+}\) at the origin we get that \(\partial_{n}u_{+}(0)\leq\partial_{n}v_{+}(0)\leq\rho_{1/2}\). Moreover, by the same corollary we conclude that for some \(r\in(0,1)\) sufficiently small \[u_{+}-\varphi_{1/2}^{+}\leq(\rho_{3/4}-\rho_{1/2})x_{n}\text{ in }B_{r}\cap\{x_{n}\geq 0\}.\] This inequality is equivalent to (4.13). **4.** As a final step we will extend the bound in (4.13) to the other side \[\varphi_{3/4}^{+}\stackrel{{?}}{{\geq}}u\text{ in }B_{r}.\] By (4.10) we know that \[u(x)\leq\psi(x):=\inf_{y^{\prime}\in B_{1}\cap\{x_{n}=0\}}\varphi_{1}^{+}(y^{\prime})+\phi_{1}^{+}(x-y^{\prime})\text{ for }x\in B_{1}\cap\{x_{n}\leq 0\}.\] By Lemma 4.13 we know that \(\psi\leq\varphi_{3/4}^{+}\) in \(B_{r}\). This last step concludes the proof as it contradicts \(u\) being a sub-solution of \(Hu+\eta\leq 0\). The test function \(\varphi_{3/4}^{+}\) satisfies at the contact point that \(H(D\varphi_{3/4}^{+}(0))=-3\eta/4>-\eta\).

#### 4.3.1. Proof of Theorem 4.1

Assume by contradiction that \(u\) and \(v\) satisfy the hypotheses of the theorem, with \(u\leq v\) on \(\partial\Omega\) but \(m:=\sup_{\Omega}(u-v)>0\). Recall the constructions from Section 4.1 and consider the sup and inf-convolutions (in the directions parallel to \(\{x_{n}=0\}\)) of \(u\) and \(v\) respectively, with respect to some parameter \(\varepsilon>0\). Let \(M=\max(\|u\|_{L^{\infty}(\overline{\Omega})},\|v\|_{L^{\infty}(\overline{\Omega})})\) and \(r=2\sqrt{M\varepsilon}\).
By the uniform convergence of \(u^{\varepsilon}\searrow u\) and \(v_{\varepsilon}\nearrow v\), we have that for \(\varepsilon\) sufficiently small we can enforce \(\max_{\partial\Omega^{r}}(u^{\varepsilon}-v_{\varepsilon})\leq m/4\) while \(\sup_{\Omega^{r}}(u^{\varepsilon}-v_{\varepsilon})\geq m/2\). The hypotheses \((H_{-})\) and \((H_{+})\), together with the assumptions \(u,v\in C(\overline{\Omega})\cap C^{0,1}(\Omega_{-})\), allow us to conclude that for \(\varepsilon\) even smaller, \(u^{\varepsilon}\) is a sub-solution of \[\begin{cases}H_{-}u+3\eta/4\leq 0\text{ in }\Omega_{-}^{r}\cup\Gamma^{r},\\ H_{+}u+3\eta/4\leq 0\text{ in }\Omega_{+}^{r},\end{cases}\] while \(v_{\varepsilon}\) is a super-solution of \[\begin{cases}H_{-}v+\eta/4\geq 0\text{ in }\Omega_{-}^{r}\cup\Gamma^{r},\\ H_{+}v+\eta/4\geq 0\text{ in }\Omega_{+}^{r}.\end{cases}\] If \(\operatorname*{argmax}_{\Omega^{r}}(u^{\varepsilon}-v_{\varepsilon})\cap(\Omega_{-}^{r}\cup\Omega_{+}^{r})\neq\emptyset\), then we get a contradiction by the standard comparison principle. Let us assume then that there is some \(x_{0}\in\operatorname*{argmax}_{\Omega^{r}}(u^{\varepsilon}-v_{\varepsilon})\cap\Gamma^{r}\) and \(c=(u^{\varepsilon}-v_{\varepsilon})(x_{0})>0\). Therefore, the function \(u^{\varepsilon}\) touches \(v_{\varepsilon}+c\) from below at \(x_{0}\in\Gamma\) over some small neighborhood \(\overline{B_{\rho}(x_{0})}\subseteq\Omega^{r}\). Let \(H_{0}(p):=H_{-}(p,u(x_{0}),x_{0})\).
By the continuity of the solutions and the operators, we get that for \(\rho>0\) sufficiently small, the function \(u^{\varepsilon}\) is a viscosity sub-solution of \[H_{0}u+5\eta/8\leq 0\text{ in }B_{\rho}(x_{0})\cap\{x_{n}\leq 0\},\] and \(v_{\varepsilon}+c\) is a viscosity super-solution of \[H_{0}v+3\eta/8\geq 0\text{ in }B_{\rho}(x_{0})\cap\{x_{n}\leq 0\}.\] By taking \(H:=H_{0}+3\eta/8\) we get that \(u^{\varepsilon}\) and \(v_{\varepsilon}+c\) satisfy the translation invariant equations (Sub') and (Sup') respectively for the gap \(\eta/4>0\). As in Lemma 2.6, we consider now \(\bar{u}\in C(\overline{B_{\rho}(x_{0})})\) to be the lifting of \(u^{\varepsilon}\) constructed from the boundary value problem \[\begin{cases}H_{+}\bar{u}+3\eta/4=0\text{ in }B_{\rho}(x_{0})\cap\{x_{n}>0\},\\ \bar{u}=u^{\varepsilon}\text{ on }\overline{B_{\rho}(x_{0})}\setminus(B_{\rho}(x_{0})\cap\{x_{n}>0\}).\end{cases}\] In a similar way we let \(\bar{v}\in C(\overline{B_{\rho}(x_{0})})\) be given by \[\begin{cases}H_{+}\bar{v}+\eta/4=0\text{ in }B_{\rho}(x_{0})\cap\{x_{n}>0\},\\ \bar{v}=v_{\varepsilon}+c\text{ on }\overline{B_{\rho}(x_{0})}\setminus(B_{\rho}(x_{0})\cap\{x_{n}>0\}).\end{cases}\] The comparison principle for uniformly elliptic equations guarantees that we still have that \(\bar{u}\) touches \(\bar{v}\) from below at \(x_{0}\). The advantage of this construction is that both \(\bar{u}\) and \(\bar{v}\) now satisfy (UE') in \(B_{\rho}(x_{0})\), while they keep satisfying equations (Sub') and (Sup'), by the same argument as towards the end of the proof of Lemma 2.6. Now that we have all the hypotheses from Lemma 4.14, we have the desired contradiction, as \(\bar{u}\) is not allowed to touch \(\bar{v}\) from below at \(x_{0}\in\Gamma\).
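As a sanity check on the objects used in the proof of Lemma 4.14 (our illustration), consider the model case \(H(p)=|p|-1\) with \(p_{*}^{\prime}=0\) and \(\eta\in(0,1)\). Then for \(t\in[0,1]\) \[\rho_{t}=-(1-\eta t),\qquad\phi_{t}^{\pm}(x)=\pm(1-\eta t)|x|,\qquad\varphi_{t}^{\pm}(x)=\pm\tfrac{1}{2\varepsilon}|x^{\prime}|^{2}-(1-\eta t)x_{n},\] and the strict monotonicity \(\rho_{t_{1}}<\rho_{t_{2}}\) for \(t_{1}<t_{2}\), as well as the identity \(H(D\varphi_{t}^{\pm}(0))=-\eta t\), can be read off directly.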
2306.02351
RSSOD-Bench: A large-scale benchmark dataset for Salient Object Detection in Optical Remote Sensing Imagery
We present the RSSOD-Bench dataset for salient object detection (SOD) in optical remote sensing imagery. While SOD has achieved success in natural scene images with deep learning, research in SOD for remote sensing imagery (RSSOD) is still in its early stages. Existing RSSOD datasets have limitations in terms of scale, and scene categories, which make them misaligned with real-world applications. To address these shortcomings, we construct the RSSOD-Bench dataset, which contains images from four different cities in the USA. The dataset provides annotations for various salient object categories, such as buildings, lakes, rivers, highways, bridges, aircraft, ships, athletic fields, and more. The salient objects in RSSOD-Bench exhibit large-scale variations, cluttered backgrounds, and different seasons. Unlike existing datasets, RSSOD-Bench offers uniform distribution across scene categories. We benchmark 23 different state-of-the-art approaches from both the computer vision and remote sensing communities. Experimental results demonstrate that more research efforts are required for the RSSOD task.
Zhitong Xiong, Yanfeng Liu, Qi Wang, Xiao Xiang Zhu
2023-06-04T13:01:19Z
http://arxiv.org/abs/2306.02351v1
RSSOD-Bench: A Large-Scale Benchmark Dataset for Salient Object Detection in Optical Remote Sensing Imagery ###### Abstract We present the RSSOD-Bench dataset for salient object detection (SOD) in optical remote sensing imagery. While SOD has achieved success in natural scene images with deep learning, research in SOD for remote sensing imagery (RSSOD) is still in its early stages. Existing RSSOD datasets have limitations in terms of scale, and scene categories, which make them misaligned with real-world applications. To address these shortcomings, we construct the RSSOD-Bench dataset, which contains images from four different cities in the USA 1. The dataset provides annotations for various salient object categories, such as buildings, lakes, rivers, highways, bridges, aircraft, ships, athletic fields, and more. The salient objects in RSSOD-Bench exhibit large-scale variations, cluttered backgrounds, and different seasons. Unlike existing datasets, RSSOD-Bench offers uniform distribution across scene categories. We benchmark **23** different state-of-the-art approaches from both the computer vision and remote sensing communities. Experimental results demonstrate that more research efforts are required for the RSSOD task. Footnote 1: The RSSOD-Bench dataset can be accessed via [https://github.com/EarthNets/Dataset4EO](https://github.com/EarthNets/Dataset4EO) Zhitong Xiong\({}^{1}\), Yanfeng Liu\({}^{2,3}\), Qi Wang\({}^{3}\), Xiao Xiang Zhu\({}^{1}\)\({}^{1}\) Data Science in Earth Observation, Technical University of Munich (TUM), Ottobrun, Germany \({}^{2}\) School of Computer Science, and \({}^{3}\) School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. 
China

**Index Terms**— benchmark, dataset, remote sensing, salient object detection

## 1 Introduction

Automatically extracting salient objects from images can serve as an important pre-processing step for numerous computer vision and remote sensing tasks [1, 2, 3, 4]. To name a few, salient object detection (SOD) has been applied in self-supervised learning [5], image quality assessment [6], image retrieval [7], etc. SOD aims to extract visually distinctive objects from diverse complicated backgrounds at a pixel level. Namely, given an input image, two steps are required for SOD models to successfully detect the salient objects: 1) determine correct salient areas from cluttered backgrounds; 2) accurately segment the pixels of salient objects. For natural scene images, SOD has achieved remarkable success with the advent of deep learning. However, research on SOD for remote sensing and Earth observation data (RSSOD) is still in its infancy. Natural scene images are usually object-centric. In contrast, remote sensing imagery, captured through high-altitude shooting, usually covers a larger range of scenes with diverse ground objects and complicated backgrounds. Considering these differences, several datasets dedicated to remote sensing data have been released to foster the research of novel methods. The ORSSD dataset [8] contains 800 images (600 for training and 200 for testing) collected from Google Earth. EORSSD [9] is an extended version of ORSSD with 2,000 images (1,400 for training, 600 for testing) in total. ORSI-4199 [10] is a large dataset, which contains 4,199 images with pixel-level annotations. Several shortcomings of existing RSSOD datasets hinder further research and progress of SOD. The first limitation is that the scale of existing datasets is relatively small. As presented in Table 1, we list the statistical information of different existing datasets.
The number of images in the ORSSD and EORSSD datasets is less than 2,000, which is not enough to align the performance of models well with real-world scenarios. In contrast, our dataset contains 6,000 images collected in four different cities, which is larger than the existing ones. The second limitation is that the images are limited to some specific scene categories. The remote sensing images of existing datasets are usually collected in several scene categories and not uniformly sampled from the Earth's surface. As a result, the data distribution does not align well with real-world applications. Considering this problem, we introduce RSSOD-Bench to facilitate the research in the community.

## 2 Dataset Construction

For the RSSOD-Bench dataset, we annotate the salient objects that are naturally distinct from the background and are associated with certain object categories useful for specific tasks. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Datasets & \#Total Images & \#Cities & \#Training & \#Validation & \#Test \\ \hline ORSSD & 800 & — & 600 & 0 & 200 \\ EORSSD & 2,000 & — & 1,400 & 0 & 600 \\ ORSI-4199 & 4,199 & — & 2,000 & 0 & 2,199 \\ \hline **Ours** & **6,000** & **4** & **3,000** & **600** & **2,400** \\ \hline \end{tabular} \end{table} Table 1: Statistics comparison with existing RSSOD datasets. Specifically, the following objects are annotated: buildings, lakes, rivers, highways, bridges, aircraft, ships, athletic fields, badminton courts, volleyball courts, baseball fields, basketball courts, gymnasiums, storage tanks, etc. As presented in Fig. 1, the salient objects in RSSOD-Bench have large scale variations, with both tiny and large regions. Also, there are severely cluttered backgrounds across different seasons. In some scenes, the salient objects can be very dense. Note that the remote sensing images in the RSSOD-Bench dataset are uniformly distributed regarding scene categories.
This is different from existing ones, which are usually collected from several scene categories.

## 3 Methods and Results

We report four commonly-used metrics in the field of SOD, including the **MAE**[30], **F-Measure**[12], **S-Measure**[31], and **E-Measure**[32]. **MAE** quantifies the pixel-level disparity between the predicted saliency map (SM) and the ground truth (GT). **F-Measure** (\(F_{\beta}\)) is a composite metric that combines precision and recall, with different weights, to assess the similarity between SM and GT. **S-Measure** (\(S_{m}\)) utilizes balanced structural information from object-aware and region-aware levels to evaluate the structural likeness between SM and GT. **E-Measure** (\(E_{m}\)) combines pixel-level correspondence with image-level statistics. To comprehensively validate existing state-of-the-art methods on our proposed dataset, we conduct extensive experiments to benchmark and compare their performance. Specifically, LC [11] and FT [12] are classical saliency detection methods that do not use deep learning models. As there are considerable SOD methods proposed in the computer vision (CV) community, we choose ten typical deep learning-based methods, including DSS [13], NLDF [14], RAS [15], PoolNet [16], and so forth. Compared with classical methods, deep learning methods from the CV community can obtain clearly better results. For example, GateNet [21] and PFSNet [22] can achieve an \(S_{m}\) of over 0.82. Furthermore, 11 RSSOD methods are compared on our dataset, including SARNet [23], FSMINet [24], MCCNet [26], and so on. The methods from the remote sensing community achieve state-of-the-art performance. As presented in Table 2, we use different colors (i.e., red, green, and blue) to highlight the best, second-best, and third-best quantitative results.
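For concreteness, the two simplest of these metrics can be sketched in a few lines. The following is our illustrative implementation, not the exact evaluation code behind Table 2 (which may use per-image adaptive thresholds); it uses a fixed binarization threshold and the weight \(\beta^{2}=0.3\) conventionally used in the SOD literature.

```python
def mae(sm, gt):
    """Mean absolute error between a saliency map and a ground-truth map in [0, 1]."""
    flat = [(s, g) for row_s, row_g in zip(sm, gt) for s, g in zip(row_s, row_g)]
    return sum(abs(s - g) for s, g in flat) / len(flat)

def f_measure(sm, gt, threshold=0.5, beta2=0.3):
    """F-measure of the thresholded saliency map against a binary ground truth.

    Illustrative sketch: fixed threshold; beta2 = 0.3 is the weight
    conventionally used in SOD to emphasize precision over recall.
    """
    pairs = [(s >= threshold, g >= 0.5)
             for row_s, row_g in zip(sm, gt) for s, g in zip(row_s, row_g)]
    tp = sum(1 for p, m in pairs if p and m)        # true positives
    n_pred = sum(1 for p, _ in pairs if p)          # predicted salient pixels
    n_mask = sum(1 for _, m in pairs if m)          # ground-truth salient pixels
    if tp == 0 or n_pred == 0 or n_mask == 0:
        return 0.0
    precision, recall = tp / n_pred, tp / n_mask
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

A perfect prediction yields MAE \(=0\) and \(F_{\beta}=1\); S-Measure and E-Measure require the structural and enhanced-alignment constructions of [31] and [32] and are omitted here.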
Notably, several recent algorithms, MJRBM-R [10], EMFI-R [28], HFANet-R [1], ACCO-V [29], and ACCO-R [29], work well on the proposed RSSOD-Bench dataset. This is in line with expectations for these methods and also reflects the soundness of the proposed dataset. However, as visualized in Fig. 2, the segmentation results on hard examples are still not satisfactory. This indicates that more research effort is required to enhance SOD performance for real-world applications. Figure 1: Illustration of some visualization examples in the RSSOD-Bench dataset. Several challenging examples are presented, including tiny, large, dense, incomplete, and irregular salient objects. ## 4 Conclusion We introduce the RSSOD-Bench dataset for salient object detection (SOD) in remote sensing imagery. Addressing the limitations of existing RSSOD datasets, RSSOD-Bench comprises carefully chosen images from four US cities, exhibiting diverse salient objects, varied backgrounds, and seasonal variations. Unlike previous datasets, RSSOD-Bench ensures uniform scene category distribution. We evaluate **23** state-of-the-art approaches from the computer vision and remote sensing communities. While these methods perform well on RSSOD-Bench, there is still room to improve SOD accuracy relative to their results on existing datasets. Therefore, further research efforts and advanced models are needed to enhance performance.
2301.06190
BuildSeg: A General Framework for the Segmentation of Buildings
Building segmentation from aerial images and 3D laser scanning (LiDAR) is a challenging task due to the diversity of backgrounds, building textures, and image quality. While current research using different types of convolutional and transformer networks has considerably improved the performance on this task, even more accurate segmentation methods for buildings are desirable for applications such as automatic mapping. In this study, we propose a general framework termed \emph{BuildSeg} employing a generic approach that can be quickly applied to segment buildings. Different data sources were combined to increase generalization performance. The approach yields good results for different data sources as shown by experiments on high-resolution multi-spectral and LiDAR imagery of cities in Norway, Denmark and France. We applied ConvNeXt and SegFormer based models on the high resolution aerial image dataset from the MapAI-competition. The methods achieved an IOU of 0.7902 and a boundary IOU of 0.6185. We used post-processing to account for the rectangular shape of the objects. This increased the boundary IOU from 0.6185 to 0.6189.
Lei Li, Tianfang Zhang, Stefan Oehmcke, Fabian Gieseke, Christian Igel
2023-01-15T21:09:00Z
http://arxiv.org/abs/2301.06190v1
# BuildSeg: A General Framework for the Segmentation of Buildings ###### Abstract Building segmentation from aerial images and 3D laser scanning (LiDAR) is a challenging task due to the diversity of backgrounds, building textures, and image quality. While current research using different types of convolutional and transformer networks has considerably improved the performance on this task, even more accurate segmentation methods for buildings are desirable for applications such as automatic mapping. In this study, we propose a general framework termed _BuildSeg_ employing a generic approach that can be quickly applied to segment buildings. Different data sources were combined to increase generalization performance. The approach yields good results for different data sources as shown by experiments on high-resolution multi-spectral and LiDAR imagery of cities in Norway, Denmark and France. We applied ConvNeXt and SegFormer based models on the high resolution aerial image dataset from the MapAI-competition. The methods achieved an IOU of 0.7902 and a boundary IOU of 0.6185. We used post-processing to account for the rectangular shape of the objects. This increased the boundary IOU from 0.6185 to 0.6189. Image Segmentation; Deep Learning; Remote Sensing ## I Introduction Detecting buildings from remote sensing imagery has been extensively studied [1, 2, 3] as it is of great importance for many fields, such as urban planning, population estimation, economic development, and topographic map production. Since the amount of data cannot be processed manually, data-driven machine learning methods are needed to reduce the manual work required to obtain reliable urban development mappings. Segmenting buildings on a large scale is a challenging task because satellite or aerial images can be very diverse, for example due to different styles of architecture, building materials, and topography.
Quite a number of benchmarks for the segmentation of buildings have been published [4, 5, 6, 7, 8, 9, 10, 11, 12]. Since the silhouettes of buildings can be very different, combining several datasets with different characteristics can lead to more generally applicable building segmentation models. In this study, we propose a framework for building segmentation referred to as _BuildSeg_. We consider the Inria Aerial Image Labeling Benchmark [4] combined with the MapAI-competition dataset [6] to improve the segmentation performance on the latter. When designing the BuildSeg framework, our goal was to design a segmentation pipeline that is generally applicable. Therefore, several benchmarks and corresponding models are available within the framework [4, 5, 6, 7, 8, 9, 10, 11, 12]. Neural networks and in particular convolutional neural networks (CNNs) have become the go-to methods for image segmentation, see [13] for a recent review. The U-Net [14] is one of the fundamental segmentation architectures, and we have been successfully applying it to remote sensing imagery (e.g., [15, 16]). It uses an encoder/decoder structure which processes the input image at different scales and allows the detection of high-frequency patterns while being computationally feasible. The original U-Net architecture can be generalized by replacing the encoder and decoder with tailored networks. This makes the U-Net very versatile and allows the use of state-of-the-art encoders. In our framework, we consider two different U-Net variants, SegFormer [17] and ConvNeXt U-Net [18], where the decoder of the ConvNeXt U-Net consists of backwards strided convolutions [19].
The main contributions of this study can be summarized as follows: (1) we propose a general framework called BuildSeg based on [20] for segmenting buildings in aerial images of different resolutions; (2) we explore how 3D information from LiDAR affects the performance of deep CNN models; (3) we combine different datasets and apply rectangle-aware post-processing to create rectangular boundaries that match the labels more accurately. The proposed approach achieved an IOU of 0.7902 for the segmentation of images in _MapAI: Precision in Building Segmentation_ [6]. Figure 1: Examples from the MapAI-competition [6]. The left image (a) shows the ground-truth data, and the right image (b) the prediction created by the ConvNeXt model.
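As a side note, the IOU and boundary IOU scores reported above can be illustrated with a small self-contained sketch (my own, not the MapAI evaluation code; the boundary here is a crude 4-neighbourhood approximation of the real boundary-IOU metric):

```python
# Illustrative sketch, not the MapAI evaluation code: IOU and a simple
# boundary IOU for binary masks given as 2D lists of 0/1. The boundary
# is a crude 4-neighbourhood approximation of the real metric.

def iou(a, b):
    flat = [(x, y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    inter = sum(1 for x, y in flat if x and y)
    union = sum(1 for x, y in flat if x or y)
    return inter / union if union else 1.0

def boundary(mask):
    """Foreground pixels with at least one background 4-neighbour."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                nbrs = [mask[x][y] if 0 <= x < h and 0 <= y < w else 0
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))]
                out[i][j] = 1 if min(nbrs) == 0 else 0
    return out

def boundary_iou(a, b):
    return iou(boundary(a), boundary(b))

a = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
b = [[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
print(iou(a, b), boundary_iou(a, b))  # 0.75 0.75
```

Boundary IOU penalises contour errors that the plain IOU barely registers, which is why the rectangle-aware post-processing moves the boundary IOU while leaving the overall IOU essentially unchanged.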
2307.02078
Graph Contrastive Topic Model
Existing NTMs with contrastive learning suffer from the sample bias problem owing to the word frequency-based sampling strategy, which may result in false negative samples with similar semantics to the prototypes. In this paper, we aim to explore the efficient sampling strategy and contrastive learning in NTMs to address the aforementioned issue. We propose a new sampling assumption that negative samples should contain words that are semantically irrelevant to the prototype. Based on it, we propose the graph contrastive topic model (GCTM), which conducts graph contrastive learning (GCL) using informative positive and negative samples that are generated by the graph-based sampling strategy leveraging in-depth correlation and irrelevance among documents and words. In GCTM, we first model the input document as the document word bipartite graph (DWBG), and construct positive and negative word co-occurrence graphs (WCGs), encoded by graph neural networks, to express in-depth semantic correlation and irrelevance among words. Based on the DWBG and WCGs, we design the document-word information propagation (DWIP) process to perform the edge perturbation of DWBG, based on multi-hop correlations/irrelevance among documents and words. This yields the desired negative and positive samples, which will be utilized for GCL together with the prototypes to improve learning document topic representations and latent topics. We further show that GCL can be interpreted as the structured variational graph auto-encoder which maximizes the mutual information of latent topic representations of different perspectives on DWBG. Experiments on several benchmark datasets demonstrate the effectiveness of our method for topic coherence and document representation learning compared with existing SOTA methods.
Zheheng Luo, Lei Liu, Qianqian Xie, Sophia Ananiadou
2023-07-05T07:39:47Z
http://arxiv.org/abs/2307.02078v1
# Graph Contrastive Topic Model ###### Abstract. Contrastive learning has recently been introduced into neural topic models to improve latent semantic discovery. However, existing NTMs with contrastive learning suffer from the sample bias problem owing to the word frequency-based sampling strategy, which may result in false negative samples with similar semantics to the prototypes. In this paper, we aim to explore the efficient sampling strategy and contrastive learning in NTMs to address the aforementioned issue. We propose a new sampling assumption that negative samples should contain words that are semantically irrelevant to the prototype. Based on it, we propose the graph contrastive topic model (GCTM), which conducts graph contrastive learning (GCL) using informative positive and negative samples that are generated by the graph-based sampling strategy leveraging in-depth correlation and irrelevance among documents and words. In GCTM, we first model the input document as the document word bipartite graph (DWBG), and construct positive and negative word co-occurrence graphs (WCGs), encoded by graph neural networks, to express in-depth semantic correlation and irrelevance among words. Based on the DWBG and WCGs, we design the document-word information propagation (DWIP) process to perform the edge perturbation of DWBG, based on multi-hop correlations/irrelevance among documents and words. This yields the desired negative and positive samples, which will be utilized for GCL together with the prototypes to improve learning document topic representations and latent topics. We further show that GCL can be interpreted as the structured variational graph auto-encoder which maximizes the mutual information of latent topic representations of different perspectives on DWBG. Experiments on several benchmark datasets 1 demonstrate the effectiveness of our method for topic coherence and document representation learning compared with existing SOTA methods. 
Footnote 1: [https://github.com/zhehengluoK/GCTM](https://github.com/zhehengluoK/GCTM) **Computing methodologies \(\rightarrow\) Learning latent representations; Topic modeling; Information extraction; Information systems \(\rightarrow\) Document topic models.** [MISSING_PAGE_POST] Inspired by the success of contrastive learning in other areas [11; 36], existing attempts [28; 35; 41] design the contrastive loss to guide topic learning, with positive and negative samples generated by a word frequency-based sampling strategy. Tosh et al. [35] randomly split a document into two to form positive pairs and take subsamples from two randomly chosen documents as negative samples. Nguyen and Luu [28] generate positive/negative samples by replacing low-frequency words/high-frequency words in a given document. For short texts, Wu et al. [41] find the positive and negative samples by the topic semantic similarity between two short text pairs. Since they learn the topic semantics of the input document from its document word feature, their sampling strategy is essentially similar to that of Nguyen and Luu [28]. By learning to distinguish between positive and negative samples, these models are able to generate superior latent topics when compared to widely-used neural topic models (NTMs) [12; 26; 29; 30; 34; 43]. However, their hypothesis for sampling negatives, namely that the ideal negative sample should exclude the high-frequency words of the input document as much as possible, can be invalid and lead to the sample bias problem [7; 32]: their negative samples can be false negatives with semantics similar to the source document. To highlight this phenomenon, we provide in Table 1 the averaged similarity between the prototype and its negative sample in the SOTA neural topic model with contrastive learning, CLNTM [28], on three benchmark datasets.
Table 1 shows that the average similarity between the prototype and its negative sample generated by CLNTM is strikingly high. For each input document with the TF-IDF input feature \(x\), CLNTM considers the top-k words with the highest TF-IDF scores to be the main contributors to the topic of the input document and replaces the TF-IDF scores of these top-k words with the corresponding scores of the reconstructed feature \(\hat{x}\) produced by a neural topic model to generate negative samples \(x^{-}\). However, the topic of a document is not always determined by its high-frequency words; it can also be determined by other salient words [6; 13; 20]. For instance, as shown in Figure 1, the input document describes the crisis in European soccer. Its negative sample still conveys a topic similar to the prototype, about a crisis in European sport, even though high-frequency words like "league", "soccer" and "compete" are removed. This sample bias issue will mislead the model into shifting the representations of the source document away from semantically identical false negatives, hurting performance. In this paper, we aim to explore sampling instructive negative and positive samples in the neural topic model scenario to address the sample bias issue. Motivated by the new assumption that _the most beneficial negative samples should encompass as many distinct words as feasible that are semantically uncorrelated with the prototype_, we propose a graph-based sampling strategy guided by the in-depth correlation and irrelevance information among documents and words. Based on this, we propose the novel graph contrastive neural topic model (GCTM), which models document data augmentation as a graph data augmentation problem and conducts graph contrastive learning (GCL) based on instructive positive and negative samples generated by the graph-based sampling strategy.
We build positive and negative word co-occurrence graphs (WCGs), and encode them with graph neural networks (GNNs) [18] to capture multi-hop semantic correlation and irrelevance among words. The input document is also modelled as document-word bipartite graph (DWBG) structured data. \begin{table} \begin{tabular}{c c c c} \hline \hline **Dataset** & **20NG** & **IMDB** & **NIPS** \\ \hline CLNTM & 0.954 & 0.935 & 0.962 \\ GCTM & 0.108 & 0.108 & 0.109 \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison of the average similarities between the prototypes and their negative samples generated by our method and CLNTM [28], on benchmark datasets. Based on the DWBG and WCGs, we design the document-word information propagation (DWIP) process to perform the graph augmentation: edge perturbation on the DWBG. It is able to identify new words that directly/distantly correlate to the input document to generate positive samples, and recognizes words that are totally irrelevant to the input document to generate negative samples, by information propagation based on DWBG and WCGs. This yields the desirable negative samples with words that are semantically distinct from prototypes, as well as positive samples with words that correlate to prototypes. As shown in Table 1, the average similarity between the prototype and its negative sample generated by our method is significantly lower than that of CLNTM. Moreover, we show that GCTM with the graph contrastive loss can be interpreted as a structured variational graph auto-encoder (VGAE), which explains its superiority in modelling the topic posterior over previous NTMs that are variational auto-encoders (VAEs) with a single latent variable. The main contributions of our work are as follows: 1.
We propose GCTM, a new graph contrastive neural topic model that models the contrastive learning problem of document modelling as a graph contrastive learning problem, to better capture the latent semantic structure of documents. 2. We propose a novel graph-based sampling strategy for NTMs based on graph data augmentation with the multi-hop semantic correlations and irrelevances among documents and words, which can generate more instructive positive and negative samples to enhance the effectiveness of topic learning. 3. Extensive experiments on three real-world datasets reveal that our method is superior to previous state-of-the-art methods for topic coherence and document representation learning. ## 2. Related Work ### Contrastive Learning for Neural Topic Models Recent research has incorporated contrastive learning into NTMs, motivated by the success of contrastive learning in many NLP tasks (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Sutskever et al., 2015). Contrastive learning is based on the concept of bringing similar pairs together and pushing dissimilar pairs apart in the latent space. Designing effective positive and negative samples is a vital step in contrastive learning, especially for topic modeling, where even the substitution of a single word can change the whole sentence's meaning. Tosh et al. (Tosh et al., 2016) provided theoretical insight for contrastive learning in a topic modeling setting. They used paragraphs from the same text as the positive pairs, and paragraphs from two randomly sampled texts as the negative pairs. However, their sampled negative pairs can be invalid and even introduce noise, since semantic similarity is not considered when sampling negatives. There is the possibility that paragraphs from two randomly selected texts may still share similar topics. Figure 1. Example of the sample bias problem in CLNTM (Tosh et al., 2016).
Nguyen and Luu [(28)] proposed an approach to draw positive/negative samples by substituting the \(k\) tokens with the lowest/highest TF-IDF with the corresponding reconstructed representations. To tackle the data sparsity issue in short text topic modelling, Wu et al. [(41)] proposed to find the positive and negative samples based on topic semantic similarity between short texts, and then conduct contrastive learning on the topic representations of the input short texts. They learn topic semantics from the document word features and therefore have a sampling strategy similar to that of Nguyen and Luu [(28)]. However, as previously mentioned, their sampling methods may suffer from the sample bias problem due to the word frequency-based sampling strategy, which generates noisy negatives that are semantically related to the source document and mislead the model. ### Graph Neural Topic Models In recent years, neural topic models (NTMs) based on VAE [(16)] have received a surge of interest due to the flexibility of the Auto-Encoding Variational Bayes (AEVB) inference algorithm. A number of NTMs have emerged, such as NVDM [(26)], ProdLDA [(34)], and SCHOLAR [(4)]. Recently, graph neural networks (GNNs) have been extensively used in NTMs, due to their success in modelling graph structure. GraphBTM [(47)] was the first attempt to encode the biterm graph using GCN [(19)] to enhance the topic extraction quality of the biterm topic model (BTM) [(44)] on short texts. However, it was incapable of generating document topic representations and capturing semantic correlations between documents. The following works used GNNs to encode the global document-word graph and the word co-occurrence graph, including GTM [(46)], GATON [(45)], GTNN [(42)], DWGTM [(39)], and SNTM [(2)]. In addition to the bag-of-words (BoW), GNTM [(33)] considered the semantic dependence between words by constructing the directed word dependency graph.
However, no previous efforts have employed a contrastive framework that can improve the discovery of latent semantic patterns of documents via data augmentation and optimizing the mutual information among the prototype, negative and positive samples. In contrast to these methods, we aim to investigate the impact of effective data augmentation and contrastive learning on NTMs, to uncover improved latent topic structures in documents. Figure 2. The architecture of GCTM. ## 3. Method In this section, we will illustrate the details of our proposed GCTM, starting with the formalization used in this paper. ### Formalization We first introduce the overall framework of NTMs with contrastive learning. Formally, we denote a corpus with \(N\) documents as \(\mathcal{D}\), where each document \(d\) is represented as a sequence of words \(\{w_{1},w_{2},\cdots,w_{n_{d}}\}\), and the vocabulary \(\mathcal{V}\) has a size of \(o\). For topic modelling, we assume \(\theta_{d}\) is the document representation of the document \(d\) in the latent topic space. Global topics for the corpus are represented as \(\beta\), in which each topic is a distribution over the vocabulary. We also assume the number of topics is \(k\), which is a hyperparameter. The latent topic representation \(\theta_{d}\) of document \(d\) is assumed to be sampled from a prior distribution \(p(\theta_{d})\). Due to the intractability of the posterior distribution \(p(\theta|x)\), NTMs use a variational distribution \(q_{\Theta}(\theta|x)\) parameterized by an inference network with parameter set \(\Theta\) to approximate it. Following the previous methods (Zhu et al., 2017; Wang et al., 2018), we assume \(\theta_{d}\) is sampled from a logistic normal distribution.
Based on the input feature \(x_{d}\) of document \(d\), we have: \[\begin{split}\mu_{d}&=f_{\mu}^{d}(x_{d}),\ \sigma_{d}^{2}=diag(f_{\sigma}^{d}(x_{d})),\\ \theta_{d}&=softmax(\mu_{d}+\sigma_{d}\epsilon_{d}),\end{split} \tag{1}\] where \(f\) is the feed-forward neural network, and \(\epsilon_{d}\sim\mathcal{N}(0,I)\) is the sampled noise variable. Then the decoder network with parameter set \(\Phi\) is used to reconstruct the input \(x_{d}\) based on \(\theta_{d}\) and topics \(\beta\). The training objective of the NTM is the negative evidence lower bound (ELBO) of the marginal log-likelihood, which is minimized: \[\begin{split}\mathcal{L}_{\text{NTM}}&=\mathbb{KL} \left[q_{\Theta}(\theta|x)\|p(\theta)\right]\\ &-\mathbb{E}_{q_{\Theta}(\theta|x)}[\log p_{\Phi}(x|\theta,\beta )].\end{split} \tag{2}\] Based on the above, contrastive learning is introduced to capture a better latent topic structure, where each document representation \(\theta_{d}\) is associated with a positive sample \(\theta_{d}^{+}\) and a negative sample \(\theta_{d}^{-}\) (Wang et al., 2018). The topic representations of positive pairs \((\theta,\theta^{+})\) are encouraged to stay close to each other, while those of the negative pairs \((\theta,\theta^{-})\) are pushed far away from each other: \[\mathcal{L}_{\text{CL}}=-\frac{1}{N}\sum_{d\in\mathcal{D}}\log\frac{\exp(\theta_{d}\cdot\theta_{d}^{+})}{\exp(\theta_{d}\cdot\theta_{d}^{+})+\alpha\cdot\exp(\theta_{d}\cdot\theta_{d}^{-})} \tag{3}\] where \(\alpha\) is a factor controlling the impact of negative samples. The final optimization objective is: \[\mathcal{L}=\mathcal{L}_{\text{NTM}}+\gamma\mathcal{L}_{\text{CL}}, \tag{4}\] where \(\gamma\) is a parameter controlling the impact of the contrastive loss. By optimizing the ELBO of the NTM and the contrastive learning loss that guides the model to differentiate between positive and negative samples of input documents, it is expected that the model will uncover a superior topic structure and document representation.
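A minimal sketch of the per-document contrastive term in Equation (3) (my own illustration with made-up topic vectors; \(\alpha\) weights the negative sample as in the equation):

```python
# Minimal sketch of the contrastive loss in Eq. (3) for one document
# (my own illustration; alpha weights the negative sample as in Eq. (3)).
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def contrastive_loss(theta, theta_pos, theta_neg, alpha=1.0):
    pos = math.exp(dot(theta, theta_pos))
    neg = math.exp(dot(theta, theta_neg))
    return -math.log(pos / (pos + alpha * neg))

theta = [0.7, 0.2, 0.1]   # prototype topic mix
close = [0.6, 0.3, 0.1]   # similar mix
far = [0.1, 0.2, 0.7]     # dissimilar mix
# Loss is lower when the positive is the similar document:
print(contrastive_loss(theta, close, far) <
      contrastive_loss(theta, far, close))  # True
```

Minimizing this loss pulls \(\theta\) toward \(\theta^{+}\) and away from \(\theta^{-}\), which is exactly the behaviour described above.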
Therefore, finding informative negative and positive samples is vital for efficient contrastive learning of NTMs. ### Graph Contrastive Topic Modeling To tackle the sample bias problem of existing methods, we aim to explore an effective sampling strategy in NTMs to generate instructive negative and positive samples. We propose a new assumption as guidance and design a graph-based sampling strategy that comprehensively leverages the correlations and irrelevances among documents and words. Based on this, we propose the graph contrastive topic model, which incorporates graph contrastive learning (GCL) into NTMs. As shown in Figure 2, GCTM models the input document as the document-word bipartite graph (DWBG) with the document word feature: TF-IDF. The positive \(\mathcal{W}^{+}\) and negative \(\mathcal{W}^{-}\) word co-occurrence graphs (WCGs) are built and encoded by GCNs to capture the multi-order correlations and irrelevance among words. Based on the DWBG and WCGs, we design the document-word information propagation to generate instructive negative and positive samples, based on graph data augmentation on the DWBG. We use the contextual embedding from pre-trained language models (PLMs) (Han et al., 2017) to enrich the input feature of the prototype, and design the graph contrastive loss to push close the topic representations of prototypes and positive samples, and pull apart those of prototypes and negative samples. #### 3.2.1. Sampling Assumption Existing methods (Zhu et al., 2018; Wang et al., 2018; Wang et al., 2018) assume that the ideal negative sample should exclude the high-frequency words of the input document. However, it has been found that the topic of a document may not be determined by the high-frequency words but by other salient words (Zhu et al., 2018; Wang et al., 2018; Wang et al., 2018). Therefore, there arises the question: what constitutes high-quality negative samples?
Ideally, informative negative and positive samples should be semantically distinct from and similar to the prototypes, respectively (Zhu et al., 2018). To answer the above question, we aim to generate negative/positive samples that are topically unrelated/correlated with the prototypes and assume: Hypothesis 1.: _two documents have distinct topics when they feature a significant disparity in their semantic content, characterized by the presence of words with dissimilar meanings._ The common way to determine if two documents have different topics is to analyze and compare their contents, characterized by the word distributions, co-occurrence patterns, and the overall context. It is intuitive to take the semantic dissimilarity between words in two documents as the indicator to identify if they have different topics. Based on the above hypothesis, we further assume: Hypothesis 2.: _the most beneficial negative samples should encompass as many distinct words as feasible that are semantically uncorrelated with the prototype._ We believe negative samples should include distinct words that are semantically uncorrelated with prototypes, to ensure they have different topics. Thus, it is vital to identify words that are semantically related or irrelevant to prototypes to make data augmentation efficient. To achieve this, we design the graph-based sampling strategy, which captures the multi-hop correlations and irrelevances between documents and words to sample the desired negative and positive samples. We will introduce it in detail in the following subsections. #### 3.2.2. Graph Construction To fully capture semantic correlation among documents and words, we first build the input document as the document-word bipartite graph (DWBG), and the negative and positive word co-occurrence graphs (WCGs). **Document-word Bipartite Graph**: DWBG captures the document-word semantic correlations.
We represent each input document \(d\) with its TF-IDF feature \(x_{d}=\{x_{d,1},x_{d,2},\cdots,x_{d,o}\}\). We take \(d\) as the document-word bipartite graph (DWBG) represented by the following adjacency matrix \(A_{d}\) (Wang et al., 2018; Wang et al., 2018), with entries: \[A_{d,i}=\begin{cases}x_{d,i},&\text{if $w_{i}$ appears in document $d$}\\ 0,&\text{otherwise}\end{cases} \tag{5}\] For each document \(d\), its DWBG contains two types of nodes: the document and all of its words, and there are only edges between the document and its words, which correspond to their respective TF-IDF values. We further use external knowledge from pre-trained language models (e.g., [9]) to enrich the document feature with the sequential and syntactic dependency information among words that cannot be utilized by BoW features. Following the previous method [3], we introduce the contextual document embedding from SBERT [31], which is transformed into a \(v\)-dimensional hidden representation via a feed-forward layer2. Then, the hidden representation is concatenated with the TF-IDF feature to provide the enhanced input feature \(\hat{x}_{d}\in\mathbb{R}^{2\times o}\). Footnote 2: We use the same pre-trained language model as CombinedTM [3], i.e., _stsb-roberta-large._ **Word Co-occurrence Graphs**: We create two WCGs represented by \(\mathcal{W}^{+}\) (positive word graph) and \(\mathcal{W}^{-}\) (negative word graph) to preserve the global semantic association and irrelevance among words. For word co-occurrence, we employ the normalized pointwise mutual information (NPMI) [8].
Formally, the NPMI of a word pair \((w_{i},w_{j})\) is defined as: \[\text{NPMI}(w_{i},w_{j})=\frac{\log\frac{p(w_{i},w_{j})}{p(w_{i})\cdot p(w_{j})}}{-\log p(w_{i},w_{j})}, \tag{6}\] where \(p(w_{i},w_{j})\) represents the probability that words \(w_{i}\) and \(w_{j}\) co-occur in the sliding windows, and \(p(w_{i})\) and \(p(w_{j})\) denote, respectively, the probabilities that words \(w_{i}\) and \(w_{j}\) appear in the sliding windows. Based on it, positive and negative word graphs can be constructed. The adjacency matrix \(A^{+}\) of the positive word graph \(\mathcal{W}^{+}\) is denoted as: \[A^{+}_{ij}=\begin{cases}\text{NPMI}_{ij},&\text{if $i\neq j$ and $\text{NPMI}_{ij}\geq\mu^{+}$}\\ 1,&\text{if $i=j$}\\ 0,&\text{otherwise}\end{cases} \tag{7}\] where \(\text{NPMI}_{ij}=\text{NPMI}(w_{i},w_{j})\) and \(\mu^{+}\) is a non-negative threshold. Similarly, the adjacency matrix \(A^{-}\) of the negative word graph \(\mathcal{W}^{-}\) is denoted as: \[A^{-}_{ij}=\begin{cases}|\text{NPMI}_{ij}|,&\text{if $i\neq j$ and $\text{NPMI}_{ij}\leq-\mu^{-}$}\\ 0,&\text{otherwise}\end{cases} \tag{8}\] where \(|\cdot|\) is the absolute value function and \(\mu^{-}\) is a non-negative threshold. In contrast to prior methods [47] that only considered positive word co-occurrence information, we use both positive and negative word graphs to preserve the global correlation and irrelevance among words. Notice that the negative word graph has no self-loops since a word is always related to itself.
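Before moving on, the NPMI and the thresholding of Equations (6)-(8) above can be sketched as follows (my own illustration; the co-occurrence probabilities and thresholds are made-up values):

```python
# Sketch of Eqs. (6)-(8) (my own illustration; P and the thresholds are
# made-up values): NPMI from window probabilities, then thresholded
# positive/negative word-graph adjacencies.
import math

def npmi(p_ij, p_i, p_j):
    if p_ij == 0:
        return -1.0  # never co-occur: maximally negative association
    return math.log(p_ij / (p_i * p_j)) / (-math.log(p_ij))

def word_graphs(P, mu_pos=0.2, mu_neg=0.2):
    """P[i][j] = co-occurrence probability; P[i][i] = marginal p(w_i)."""
    n = len(P)
    A_pos = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    A_neg = [[0.0] * n for _ in range(n)]  # no self-loops, as in Eq. (8)
    for i in range(n):
        for j in range(n):
            if i != j:
                v = npmi(P[i][j], P[i][i], P[j][j])
                if v >= mu_pos:
                    A_pos[i][j] = v
                elif v <= -mu_neg:
                    A_neg[i][j] = abs(v)
    return A_pos, A_neg

P = [[0.5, 0.4, 0.01],
     [0.4, 0.5, 0.1],
     [0.01, 0.1, 0.2]]
A_pos, A_neg = word_graphs(P)
print(round(A_pos[0][1], 3), round(A_neg[0][2], 3))  # strong + and - edges
```

Words 0 and 1 co-occur far more often than chance and get a positive edge; words 0 and 2 co-occur far less often than chance and get a negative edge; words 1 and 2 co-occur at chance level and appear in neither graph.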
Formally, for a one-layer GCN, node features can be calculated as \[H^{(l+1)}=\rho(\tilde{A}H^{(l)}W^{(l)}), \tag{9}\] where \(\rho\) is the ReLU activation function, \(W^{(l)}\) is the weight matrix, \(\tilde{A}=D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\) is the normalized symmetric adjacency matrix of the input graph \(A\), and \(D\) is the degree matrix of \(A\). Given the adjacency matrices \(A^{+}\) and \(A^{-}\), we stack \(L\) layers of GCN to obtain positive/negative hidden representations of words by equation (9) respectively: \[\beta_{o}^{+}=softmax\left(H_{+}^{(L)}\right),\quad\beta_{o}^{-}=softmax\left(H_{-}^{(L)}\right). \tag{10}\] The input features of the GCN are set to the identity matrix \(I\) for both settings, i.e., \(H^{(0)}=I\). The \(L^{th}\)-layer output \(H^{(L)}\in\mathbb{R}^{v\times k}\). The information propagation with \(L\) GCN layers can capture the \(L\)-order semantic correlations/irrelevances among words in the positive and negative word graphs respectively. Therefore, the hidden representation of each word is informed by its directly co-occurring/unrelated words as well as high-order neighbours. We then design the document-word information propagation (DWIP) process, which conducts the data augmentation on the DWBG via propagating semantic correlations/irrelevances between documents and words: \[H_{d}^{+}=A_{d}*\beta_{o}^{+},\quad H_{d}^{-}=A_{d}*\beta_{o}^{-}, \tag{11}\] where \(H_{d}^{+}\) and \(H_{d}^{-}\) are the hidden representations of the positive and negative samples, respectively. The positive document hidden representation gathers information from words that are directly and distantly correlated with the words appearing in the prototype DWBG, i.e., both words of the document itself and implicitly correlated words from other documents. The negative document hidden representation is informed by words that are extremely unrelated to the prototype DWBG.
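At its core, the DWIP step of Equation (11) is a vector-matrix product; a toy sketch (my own, with made-up TF-IDF and word-topic values):

```python
# Toy sketch of the DWIP step in Eq. (11) (my own, with made-up TF-IDF
# and word-topic values): the document's DWBG row A_d mixes the topic
# leanings of the words it mentions.

def matvec(A_d, beta):
    """A_d: length-V TF-IDF row; beta: V x k word-topic matrix."""
    k = len(beta[0])
    return [sum(A_d[i] * beta[i][t] for i in range(len(A_d)))
            for t in range(k)]

A_d = [0.5, 0.3, 0.0]        # document mentions words 0 and 1 only
beta_pos = [[0.9, 0.1],      # word 0 leans to topic 0
            [0.8, 0.2],      # word 1 leans to topic 0
            [0.1, 0.9]]      # word 2 (unmentioned) leans to topic 1
H_pos = matvec(A_d, beta_pos)
print([round(h, 2) for h in H_pos])  # topic 0 dominates: [0.69, 0.11]
```

With the negative word representations \(\beta_{o}^{-}\) in place of \(\beta_{o}^{+}\), the same product instead accumulates mass from words unrelated to the document.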
If we remove the activation function in equations (9) and (10), the GCN layer degenerates into the simplified GCN [40], which yields: \[H_{d}^{+} =(A_{d}*\tilde{A}^{+})H_{+}^{(L-1)}W_{+}^{(L-1)}, \tag{12}\] \[H_{d}^{-} =(A_{d}*\tilde{A}^{-})H_{-}^{(L-1)}W_{-}^{(L-1)}.\] From the perspective of graph augmentation, \(x_{d}^{+}=A_{d}^{+}=A_{d}*\tilde{A}^{+}\) and \(x_{d}^{-}=A_{d}^{-}=A_{d}*\tilde{A}^{-}\) can be interpreted as edge perturbation-based augmentations of the input graph \(A_{d}\). The edge between each document-word pair \((d,i)\) is: \[A_{d,i}^{+} =A_{d,i}\tilde{A}_{i,i}^{+}+\sum_{j\in\mathcal{N}_{i}^{+},j\neq i}A_{d,j}\tilde{A}_{j,i}^{+}, \tag{13}\] \[A_{d,i}^{-} =\sum_{j\in\mathcal{N}_{i}^{-},j\neq i}A_{d,j}\tilde{A}_{j,i}^{-},\] where \(\mathcal{N}_{i}^{+},\mathcal{N}_{i}^{-}\) are the neighbour sets of word \(i\) in the positive and negative word graphs, respectively. If there exists an edge between \((d,i)\) in the original graph \(A_{d}\), which means word \(i\) is mentioned in document \(d\), the corresponding edge in \(A_{d}^{+}\) is reinforced by the neighbour words of \(i\) that are also mentioned in document \(d\), with weight \(A_{d,j}\). Otherwise, a new edge \((d,i)\) is added to \(A_{d}^{+}\), representing the implicit correlation between word \(i\) and document \(d\), provided \(i\) is a neighbour of words that are mentioned in document \(d\). Notice that there is no edge between word \(i\) and document \(d\) if word \(i\) is not a neighbour of any word mentioned in document \(d\). Thus, the sampling process is able to identify new words that latently correlate with the input document to generate positive samples. Similarly, in \(A_{d}^{-}\), an edge \(A_{d,i}^{-}\) between word \(i\) and document \(d\) is created if \(i\) is a "fake" neighbour (a negative-graph neighbour) of any word \(j\) appearing in document \(d\). Otherwise, there is no edge between \((d,i)\) in \(A_{d}^{-}\).
Therefore, the negative samples are generated by effectively recognizing the irrelevance between words and the prototype \(d\).

#### 3.2.4. Contrastive Loss

We then feed the enriched feature \(\hat{x}_{d}\) of the prototype and the hidden representations of its positive sample \(H_{d}^{+}\) and negative sample \(H_{d}^{-}\) into the encoder to sample their latent topic representations \((\theta_{d},\theta_{d}^{+},\theta_{d}^{-})\). The contrastive loss is calculated as in Equation 3 based on \((\theta_{d},\theta_{d}^{+},\theta_{d}^{-})\), where \((\theta,\theta^{+})\) are encouraged to stay close to each other, while \((\theta,\theta^{-})\) are pushed far apart in the latent topic space. Finally, the model optimizes the objective combining the ELBO of NTMs and the contrastive loss, as in Equation 4.

### Understanding Graph Contrastive Topic Learning

According to previous methods [1, 23, 36, 38], the contrastive loss in equation (3) can be rewritten as: \[\mathcal{L}_{\text{CL}}=\mathbb{E}_{q(\theta,\theta^{\prime})}\,[\log\frac{q(\theta^{\prime}|\theta)}{q(\theta^{\prime})}], \tag{14}\] where \(q(\theta,\theta^{\prime})\) indicates the distribution parameterized by the neural networks. Here, \(x_{d}\) and \(x_{d}^{\prime}\) can be deemed two different views of the input document \(d\), and equation (14) maximizes the mutual information of their latent representations. Since \(q(\theta^{\prime}|\theta)=\frac{q(\theta,\theta^{\prime})}{q(\theta)}\), equation (14) can be further rewritten as: \[\mathcal{L}_{\text{CL}}=\mathbb{E}_{q(\theta,\theta^{\prime})}\,[\log\frac{q(\theta,\theta^{\prime})}{q(\theta)q(\theta^{\prime})}].
\tag{15}\] Considering a variational auto-encoder (VAE) with two variables \(x,x^{\prime}\), we have the marginal likelihood: \[\begin{split} p(x,x^{\prime},\theta,\theta^{\prime})&=p(x|\theta)p(x^{\prime}|\theta^{\prime})p(\theta,\theta^{\prime}),\\ \log p(x,x^{\prime})&=\log\mathbb{E}_{q(\theta,\theta^{\prime}|x,x^{\prime})}\,[\frac{p(x|\theta)p(x^{\prime}|\theta^{\prime})}{q(\theta|x)q(\theta^{\prime}|x^{\prime})}p(\theta,\theta^{\prime})],\end{split} \tag{16}\] where \(q(\theta,\theta^{\prime}|x,x^{\prime})\) is the approximate posterior, usually parameterized by the encoder of the VAE. Applying \(p(x|\theta)=\frac{q(\theta|x)p_{\text{true}}(x)}{q(\theta)}\) to equation (16), we have: \[\begin{split}\log p(x,x^{\prime})&=\log\mathbb{E}_{q(\theta,\theta^{\prime}|x,x^{\prime})}\,[\frac{p(\theta,\theta^{\prime})}{q(\theta)q(\theta^{\prime})}]+const(x,x^{\prime}),\\ \log p(x,x^{\prime})&=\log[\frac{p(\theta,\theta^{\prime})}{q(\theta)q(\theta^{\prime})}]+const(x,x^{\prime}).\end{split} \tag{17}\] Notice that \(p_{\text{true}}(x)\) and \(p_{\text{true}}(x^{\prime})\) are parameter-free constants. Once a deterministic encoder is used to approximate the posterior \(q(\theta,\theta^{\prime}|x,x^{\prime})\), it can be collapsed out of equation (17). We then further rewrite equation (17) by introducing \(q(\theta,\theta^{\prime})\), similarly to Aitchison [1]: \[\begin{split}&\log p(x,x^{\prime})=\log\mathbb{E}_{p_{\text{true}}(x,x^{\prime})}\,[\frac{p(\theta,\theta^{\prime})}{q(\theta)q(\theta^{\prime})}]+const\\ &=\log\mathbb{E}_{q(\theta,\theta^{\prime})}\,[\frac{q(\theta,\theta^{\prime})}{q(\theta)q(\theta^{\prime})}]+\log[\frac{p(\theta,\theta^{\prime})}{q(\theta,\theta^{\prime})}]+const.\end{split} \tag{18}\] We can see that the first term is exactly the contrastive loss in equation (15). If we let \(p(\theta,\theta^{\prime})=q(\theta,\theta^{\prime})\), the second term vanishes, and equation (18) becomes identical to the contrastive loss.
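The pairwise objective from Section 3.2.4 can be sketched numerically. Since Equation 3 is not reproduced in this excerpt, the dot-product form and the \(\alpha\) weighting of negatives below are assumptions, inferred from the ablation objectives in Section 4.2.3:

```python
import numpy as np

def contrastive_loss(theta, theta_pos, theta_neg, alpha=1.0):
    """Sketch of the contrastive objective on topic vectors.

    Pulls (theta, theta_pos) together and pushes (theta, theta_neg)
    apart; alpha weights the negative term (the paper tunes it in [1, 2]).
    The exact form of Equation 3 is assumed, not quoted.
    """
    pos = np.sum(theta * theta_pos, axis=1)   # per-document dot products
    neg = np.sum(theta * theta_neg, axis=1)
    return float(np.mean(alpha * neg - pos))
```

Minimizing this quantity rewards aligned positive pairs and penalizes aligned negative pairs, which is the behaviour Equation (14) formalizes as mutual-information maximization.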
Interestingly, we find that the contrastive loss can be interpreted as a structured variational graph auto-encoder with two latent variables on the input document graph \(x_{d}\) and its augmentation \(x_{d}^{\prime}\). Existing NTMs are variational auto-encoders with one latent variable, which aim to learn a good function \(q(\theta|x)\) while forcing it close to the prior \(p(\theta)\); the contrastive loss instead aims to learn a mapping \(q(\theta,\theta^{\prime})\) and push it close to the prior \(p(\theta,\theta^{\prime})\). The augmentation \(\theta^{\prime}\) provides extra information to better model the real data distribution of documents, which allows \(q(\theta,\theta^{\prime})\) to better capture the latent topic structure.

## 4. Experiments

### Datasets and Baselines

To evaluate the effectiveness of our method, we conduct experiments on three public benchmark datasets: 20 Newsgroups (20NG) 3, IMDB movie reviews [25], and Neural Information Processing Systems (NIPS) papers from 1987 to 2017 4. As the statistics in Table 3 show, these corpora come from different fields, with varying document lengths and vocabulary sizes. Following previous work [28], we adopt the same train/validation/test splits: 48%/12%/40% for the 20NG dataset, 50%/25%/25% for the IMDB dataset, and 60%/20%/20% for the NIPS dataset. For preprocessing, we utilize the commonly used script published by Card et al. [4] 5, which tokenizes the text and removes special tokens such as stopwords, punctuation, tokens containing numbers, and tokens with fewer than three characters. The parameter settings are listed in Table 2.
Footnote 3: http://qwone.com/~jason/20Newsgroups/

Footnote 4: https://www.kaggle.com/datasets/benhammer/nips-papers

Footnote 5: https://github.com/dallascard/scholar

For the baseline methods, we use their official codes and default parameters. For CLNTM + BERT and our model, we search for the best parameters from {lr: 0.001, 0.002} and {epochs: 200, 300, 400, 500} on the 20NG and IMDB datasets, and from {lr: 0.002, 0.003, 0.004, 0.005, 0.006, 0.007} and {epochs: 300, 400, 500} on the NIPS dataset. We use batch size 200 for 20NG and IMDB, 500 for NIPS, and the Adam optimizer with momentum 0.99 across all datasets. The weight \(\alpha\) is set to \(e^{0.5}\) on the 20NG and IMDB datasets and to 1 on the NIPS dataset, which are the best values in our setting; one can simply set \(1\leq\alpha\leq 2\) if time is limited, which hardly hurts the performance. The weight of the contrastive loss \(\gamma\) is set to 1. We compare our method with the following baselines: 1. NTM [26] is a neural topic model based on the Gaussian prior. 2. ProdLDA [34] is a groundbreaking work that uses the AEVB inference algorithm for topic modelling. 3. SCHOLAR [4] extends NTM with various metadata. 4. SCHOLAR+BAT [14] fine-tunes a pre-trained BERT autoencoder as a "teacher" to guide a topic model with distilled knowledge. \begin{table} \begin{tabular}{c|c} \hline Hyperparameters & Values \\ \hline learning rate & \{0.001 \(\sim\) 0.007\} \\ epochs & \{200, 300, 400, 500\} \\ batch size & \{200, 500\} \\ weight \(\alpha\) & \{0.5, 1, \(e^{0.5}\), 2, 3, 5\} \\ weight \(\gamma\) & \{1\} \\ \#GCN layer \(L\) & \{1, 2, 3\} \\ \hline \end{tabular} \end{table} Table 2. Details of hyperparameters.
\begin{table} \begin{tabular}{c c c c} \hline **Dataset** & **\#Train** & **\#Test** & **Vocab** \\ \hline 20NG & 11314 & 7532 & 2000 \\ IMDB & 25000 & 25000 & 5000 \\ NIPS & 5792 & 1449 & 10000 \\ \hline \end{tabular} \end{table} Table 3. Statistics of the datasets. 5. W-LDA [27] enables a Dirichlet prior on latent topics via the Wasserstein autoencoder framework. 6. BATM [37] introduces generative adversarial learning into a neural topic model. 7. CombinedTM [3] extends ProdLDA by combining contextualized embeddings with BoW as model input. 8. CLNTM [28] introduces the contrastive learning loss for NTM; its positive and negative samples are generated by substituting some items of the BoW with the corresponding items of the reconstructed output. To evaluate the quality of the topics generated by our method and the baselines, we employ the normalized pointwise mutual information (NPMI) based on the top 10 words of each topic on all datasets. NPMI is an automatic evaluation metric for topic coherence that is highly correlated with human judgements of the generated topic quality [5; 21]. We report the mean and standard deviation of 5 runs with different random seeds.

### Results and Analysis

#### 4.2.1. Main Results

We first present the NPMI results of our method and the baselines on the three datasets. As shown in Table 4, our method yields the best performance on all datasets with two different topic number settings. This demonstrates the effectiveness of our method, which exploits in-depth semantic connections between documents and words via graph contrastive learning. Compared with CLNTM and its extension CLNTM+BERT with contextual document embeddings, which also introduce contrastive learning into topic modelling, our method presents improvements on all datasets.
In their methods, only direct semantic correlations are considered when selecting negative and positive samples for contrastive learning, resulting in noisy negatives that may be semantically related to the source document and may mislead the model. In contrast, our method exploits the multi-hop interactions between documents and words based on the document-word graph along with the positive and negative word correlation graphs. This allows our method to precisely generate samples for graph contrastive learning, leading to a better understanding of the source document. Our method also outperforms previous neural topic models enriched by pre-trained contextual representations, such as SCHOLAR+BAT. This demonstrates the importance of contrastive learning for neural topic modelling: it learns more discriminative document representations. It is also evidenced by CombinedTM, which shows poor performance on all datasets, since it directly concatenates contextualized representations with word features without contrastive learning.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**20NG**} & \multicolumn{2}{c}{**IMDB**} & \multicolumn{2}{c}{**NIPS**} \\ & \(k=50\) & \(k=200\) & \(k=50\) & \(k=200\) & \(k=50\) & \(k=200\) \\ \hline NTM & 0.283 \(\pm\) 0.004 & 0.277 \(\pm\) 0.003 & 0.170 \(\pm\) 0.008 & 0.169 \(\pm\) 0.003 & - & - \\ ProdLDA & 0.258 \(\pm\) 0.003 & 0.196 \(\pm\) 0.001 & 0.134 \(\pm\) 0.003 & 0.112 \(\pm\) 0.001 & 0.199 \(\pm\) 0.006 & 0.208 \(\pm\) 0.006 \\ W-LDA & 0.279 \(\pm\) 0.003 & 0.188 \(\pm\) 0.001 & 0.136 \(\pm\) 0.007 & 0.095 \(\pm\) 0.003 & - & - \\ BATM & 0.314 \(\pm\) 0.003 & 0.245 \(\pm\) 0.001 & 0.065 \(\pm\) 0.008 & 0.090 \(\pm\) 0.004 & - & - \\ SCHOLAR & 0.297 \(\pm\) 0.008 & 0.253 \(\pm\) 0.003 & 0.178 \(\pm\) 0.004 & 0.167 \(\pm\) 0.001 & 0.389 \(\pm\) 0.010 & 0.335 \(\pm\) 0.002 \\ SCHOLAR+BAT & 0.353 \(\pm\) 0.004 & 0.315 \(\pm\) 0.004 & 0.182 \(\pm\) 0.002 & 0.175 \(\pm\) 0.003 & 0.398 \(\pm\) 0.012 & 0.344 \(\pm\) 0.005 \\ CombinedTM & 0.263 \(\pm\) 0.004 & 0.209 \(\pm\) 0.004 & 0.125 \(\pm\) 0.006 & 0.101 \(\pm\) 0.004 & 0.201 \(\pm\) 0.004 & 0.212 \(\pm\) 0.004 \\ CLNTM & 0.327 \(\pm\) 0.005 & 0.276 \(\pm\) 0.002 & 0.183 \(\pm\) 0.005 & 0.175 \(\pm\) 0.002 & 0.391 \(\pm\) 0.009 & 0.337 \(\pm\) 0.005 \\ CLNTM+BERT & 0.329 \(\pm\) 0.003 & 0.292 \(\pm\) 0.005 & 0.196 \(\pm\) 0.003 & 0.183 \(\pm\) 0.003 & 0.396 \(\pm\) 0.007 & 0.345 \(\pm\) 0.002 \\ GCTM & **0.379 \(\pm\) 0.004** & **0.338 \(\pm\) 0.005** & **0.202 \(\pm\) 0.004** & **0.184 \(\pm\) 0.007** & **0.410 \(\pm\) 0.006** & **0.387 \(\pm\) 0.008** \\ \hline \hline \end{tabular} \end{table} Table 4. NPMI coherence of the models on three datasets. The best scores are marked in bold. Some results are referenced from Hoyle et al. [14], Nguyen and Luu [28].

#### 4.2.2. Parameter Sensitivity

_Topic number._ As shown in Table 4, we present the results of our method with two topic number settings, \(k=50\) and \(k=200\). Our method with 50 topics outperforms that with 200 topics on all datasets, similarly to the other baselines. We conjecture that redundant topics are generated when the predefined topic number is much larger than the real number of semantic topics.

_Weight of negative sampling._ We also perform a series of experiments with different \(\alpha\) in equation (3) to investigate the impact of the weight of negative samples. As shown in Figure 3, the NPMI first increases with the growth of \(\alpha\). The model achieves the highest scores when \(1\leq\alpha\leq 2\) and presents worse results with a larger \(\alpha\). This indicates that the weight of the negative samples should be neither too large nor too small, since it affects the gradient of the contrastive loss \(\mathcal{L}_{\text{CL}}\) (equation 3) with respect to the latent vector \(\theta_{d}\) [28].
_GCN layers._ We encode the word graphs with different numbers of GCN layers to evaluate their effect, as shown in Figure 4. On all three datasets, the model performs better with two GCN layers than with one, but the performance drops dramatically when \(L\) increases to 3. Similarly to Kipf and Welling [19], we argue that stacking too many GCN layers (e.g., more than 2) can introduce extra noise due to excessive message passing, while one GCN layer can exploit only limited structural information of the graphs.

#### 4.2.3. Ablation Study

To verify the contribution of each component in our proposed method, we train the model with different objectives and evaluate the performance: 1) w/o cl: with only the NTM loss; 2) w/o neg: with only positive samples; 3) w/o pos: with only negative samples; 4) full: with the contrastive loss and the NTM loss. The corresponding loss functions are as follows: 1) without contrastive loss: \[\mathcal{L}=\mathcal{L}_{\text{NTM}};\] 2) without negative sampling: \[\mathcal{L}=\mathcal{L}_{\text{NTM}}-\frac{\gamma}{N}\sum_{d\in\mathcal{D}}\theta_{d}\cdot\theta_{d}^{+};\]

Figure 3. Results of our model with different weights of negative samples.
3) without positive sampling: \[\mathcal{L}=\mathcal{L}_{\mathrm{NTM}}+\frac{\gamma}{N}\sum_{d\in\mathcal{D}}\theta_{d}\cdot\theta_{d}^{-};\] 4) with the full contrastive loss: \[\mathcal{L}=\mathcal{L}_{\mathrm{NTM}}+\gamma\mathcal{L}_{\mathrm{CL}}.\]

\begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **NPMI** & **Topics** \\ \hline \multirow{4}{*}{SCHOLAR+BAT} & 0.6460 & mead silicon chip fabrication neuromorphic & \multirow{2}{*}{chip} \\ & & photoreceptors fabricated cmos mosis chips & \\ & & bellman policy tsitsiklis mdps mdp & \\ & 0.5534 & parr discounted policies ghavamzadeh approximator & \\ & & vlsi chip cmos analog chips & \\ & 0.6575 & fabricated programmable transistors mosis circuit & \\ & 0.5491 & reinforcement bellman tsitsiklis mdp mdps & \\ & **0.6728** & chips voltages analog programmable capacitor & \\ & **0.6158** & mdps mdp policy bellman discounted & \\ & & policies horizon rewards reinforcement parr & \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of topics (top 10 words) from different models on NIPS dataset. The topic labels are annotated manually. The number of topics is 50.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Model** & **NPMI** & **Topics** & \\ \hline \multirow{4}{*}{SCHOLAR+BAT} & 0.5654 & encryption enforcement wiretap escrow clipper & \multirow{2}{*}{security} \\ & & secure encrypt cryptography security agency & \\ & & moon pat orbit flight earth & \\ & 0.3778 & lunar fly fuel nasa space & \\ & 0.5691 & clipper escrow enforcement secure keys & \\ & & wiretap agencies chip algorithm encryption & \\ & & orbit space shuttle mission nasa & \\ & 0.3552 & vehicle earth station flight ron & \\ & **0.5941** & encryption escrow clipper secure crypto & \\ & & nsa keys key wiretap privacy & \\ & **0.4340** & orbit space launch moon shuttle & \\ & & henry nasa flight spencer earth & \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of topics from different models on 20NG dataset.

Figure 4: Effect of the number of GCN layers. The results are the average of 5 runs.

The results are shown in Table 7. Among all ablations, w/o cl yields the lowest NPMI scores, which again demonstrates the importance of contrastive learning in topic modelling. It can also be observed that the performance of our method is compromised without either positive or negative samples, demonstrating that both kinds of samples improve the quality of the generated topics. The decrease in NPMI scores without positive samples is more significant than that without negative samples; similarly to previous work [28], the improvement of our method can be attributed more to the positive samples than to the negative ones. Moreover, positive and negative samples are complementary in generating coherent topics, resulting in the best scores with the full contrastive loss.

#### 4.2.4. Case Study

We randomly sample several topics from different models on the NIPS and 20NG datasets to investigate the quality of our generated topics, shown in Table 5 and Table 6, respectively. It is clear that our model yields more coherent and interpretable topics than the baselines. For example, in the two selected topics that can be described as "chip" and "reinforcement learning", the word "mead" extracted by SCHOLAR+BAT and "ghavamzadeh" extracted by CLNTM are not quite consistent with the topics. In contrast, almost all words extracted by our model are in line with the related topics.

#### 4.2.5. Text Classification

To evaluate the capability of the generated document representations, we resort to downstream application performance, i.e., text classification accuracy. Following previous methods, we utilize the document representations generated by our method with 50 topics to train a Random Forest classifier. As shown in Table 8, our method presents the best classification results, which demonstrates the benefit of our meaningful representations on predictive tasks.
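The four ablation objectives above can be written compactly. This sketch assumes the dot-product form of \(\mathcal{L}_{\text{CL}}\) implied by the ablation terms, with `l_ntm` standing in for the (scalar) NTM loss:

```python
import numpy as np

def ablation_objectives(l_ntm, theta, theta_pos, theta_neg, gamma=1.0, alpha=1.0):
    """The four training objectives compared in the ablation study (Sec. 4.2.3).

    theta, theta_pos, theta_neg: (N, k) topic vectors of the prototypes and
    their positive/negative samples. The alpha-weighted negative term in the
    full objective is an assumption inferred from the ablation formulas.
    """
    n = theta.shape[0]
    pos = np.sum(theta * theta_pos) / n   # mean positive dot product
    neg = np.sum(theta * theta_neg) / n   # mean negative dot product
    return {
        "w/o cl": l_ntm,
        "w/o neg": l_ntm - gamma * pos,
        "w/o pos": l_ntm + gamma * neg,
        "full": l_ntm + gamma * (alpha * neg - pos),
    }
```

With `alpha=1`, the full objective is exactly the sum of the two partial contrastive terms on top of the NTM loss, which makes the complementarity of the positive and negative samples easy to see.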
\begin{table} \begin{tabular}{c c c c} \hline \hline & **20NG** & **IMDB** & **NIPS** \\ \hline w/o cl & 0.362 \(\pm\) 0.008 & 0.193 \(\pm\) 0.006 & 0.401 \(\pm\) 0.004 \\ w/o neg & 0.373 \(\pm\) 0.008 & 0.198 \(\pm\) 0.003 & 0.406 \(\pm\) 0.008 \\ w/o pos & 0.369 \(\pm\) 0.005 & 0.195 \(\pm\) 0.005 & 0.404 \(\pm\) 0.007 \\ Full & **0.379 \(\pm\) 0.005** & **0.202 \(\pm\) 0.004** & **0.410 \(\pm\) 0.006** \\ \hline \hline \end{tabular} \end{table} Table 7. Ablation study (\(k=50\)).

\begin{table} \begin{tabular}{c c c} \hline \hline Model & 20NG & IMDB \\ \hline SCHOLAR & 0.452 & 0.855 \\ SCHOLAR + BAT & 0.421 & 0.833 \\ CLNTM & 0.486 & 0.827 \\ CLNTM + BERT & 0.525 & 0.871 \\ Ours & **0.543** & **0.874** \\ \hline \hline \end{tabular} \end{table} Table 8. Classification accuracy of different models on the 20NG and IMDB datasets (comprising 20 and 2 classes, respectively). The topic number \(k=50\).

## 5. Conclusion

In this paper, we propose a novel graph-based sampling strategy and a graph contrastive neural topic model that incorporates graph contrastive learning for document topic modelling. Experimental results demonstrate the superiority of our proposed method, owing to the better sampling strategy based on graph augmentation with multi-order semantic correlations/irrelevances among documents and words. We show that the contrastive learning in our model is actually a structured variational auto-encoder, and thus it can better model the data distribution of documents to learn better latent topics. Since the NTM loss is itself a variational auto-encoder objective with a single latent variable, we argue that the contrastive loss can in principle subsume the learning process of the NTM loss, though more effort is required to explore and verify this. In the future, we will explore removing the traditional NTM loss and further investigate the effectiveness of a pure contrastive loss for document topic modelling.
2306.12142
Synaptic metaplasticity with multi-level memristive devices
Deep learning has made remarkable progress in various tasks, surpassing human performance in some cases. However, one drawback of neural networks is catastrophic forgetting, where a network trained on one task forgets the solution when learning a new one. To address this issue, recent works have proposed solutions based on Binarized Neural Networks (BNNs) incorporating metaplasticity. In this work, we extend this solution to quantized neural networks (QNNs) and present a memristor-based hardware solution for implementing metaplasticity during both inference and training. We propose a hardware architecture that integrates quantized weights in memristor devices programmed in an analog multi-level fashion with a digital processing unit for high-precision metaplastic storage. We validated our approach using a combined software framework and memristor based crossbar array for in-memory computing fabricated in 130 nm CMOS technology. Our experimental results show that a two-layer perceptron achieves 97% and 86% accuracy on consecutive training of MNIST and Fashion-MNIST, equal to software baseline. This result demonstrates immunity to catastrophic forgetting and the resilience to analog device imperfections of the proposed solution. Moreover, our architecture is compatible with the memristor limited endurance and has a 15x reduction in memory
Simone D'Agostino, Filippo Moro, Tifenn Hirtzlin, Julien Arcamone, Niccolò Castellani, Damien Querlioz, Melika Payvand, Elisa Vianello
2023-06-21T09:40:25Z
http://arxiv.org/abs/2306.12142v1
# Synaptic metaplasticity with multi-level memristive devices ###### Abstract Deep learning has made remarkable progress in various tasks, surpassing human performance in some cases. However, one drawback of neural networks is catastrophic forgetting, where a network trained on one task forgets the solution when learning a new one. To address this issue, recent works have proposed solutions based on Binarized Neural Networks (BNNs) incorporating metaplasticity. In this work, we extend this solution to quantized neural networks (QNNs) and present a memristor-based hardware solution for implementing metaplasticity during both inference and training. We propose a hardware architecture that integrates quantized weights in memristor devices programmed in an analog multi-level fashion with a digital processing unit for high-precision metaplastic storage. We validated our approach using a combined software framework and memristor based crossbar array for in-memory computing fabricated in 130 nm CMOS technology. Our experimental results show that a two-layer perceptron achieves 97 % and 86 % accuracy on consecutive training of MNIST and Fashion-MNIST, equal to software baseline. This result demonstrates immunity to catastrophic forgetting and the resilience to analog device imperfections of the proposed solution. Moreover, our architecture is compatible with the memristor limited endurance and has a \(15\times\) reduction in memory footprint compared to the binarized neural network case. Memory, Metaplasticity, Quantized Neural Networks (QNNs), In-Memory-Computing, Memristor, On-Chip learning ## I Introduction Intelligence in mammals is characterized by the ability to learn, which encompasses both the acquisition of new knowledge and skills, as well as the retention of previously acquired information. In contrast, state-of-the-art deep neural networks suffer from "catastrophic forgetting" (Fig. 
1a), where the network forgets previously learned information when learning new information [1]. Recent advancements in the areas of class-incremental learning [2], meta-learning [3], and metaplasticity [4, 5] have shown promise in addressing this problem. When learning a new task, the synaptic weights optimized during previous tasks are protected from further updates, allowing the network to find a set of parameters that can solve both tasks simultaneously (Fig. 1b). These algorithms can enable continual learning from sensory signals, which has applications in tailoring edge devices to individual users: for example, customizing wearable medical devices to suit each patient's needs, or adjusting smart-home devices to reflect users' habits or preferences. Implementing such continual-learning algorithms on chip removes the need to send the user's or patient's data to the cloud, which not only saves significant power but also preserves privacy. On-chip learning can benefit greatly from on-chip, high-density, analog memory [6], and resistive memory technologies have emerged as a good candidate solution [7, 8, 9]. Despite its great potential, resistive memory suffers from variability, and in practice its bit resolution is usually limited to 3 bits [10]. In this work, we propose a device-algorithm co-design approach that takes advantage of recent metaplastic algorithms implemented for Binary Neural Networks (BNNs) [5] while being compatible with resistive memory characteristics. Specifically, we first demonstrate that the metaplasticity-inspired training method for BNNs can be extended to quantized neural networks (QNNs) with more than two states. By implementing quantization in hardware, we are able to capitalize on the multi-level capabilities of memristors in a crossbar array to perform analog matrix-vector multiplication operations with low latency and energy consumption.
Furthermore, we propose a mixed-precision architecture that combines a memristor crossbar array, for storing synaptic weights and performing matrix-vector multiplication operations, with a digital processing unit that stores metaplastic variables in high precision and computes low-precision weight updates based on these values [7]. To validate the effectiveness of our metaplasticity-inspired training method on the mixed-precision architecture, we conduct a combined hardware/software training experiment using a recently developed memristor crossbar array for in-memory computing [10]. Our key contributions include:

* a metaplasticity rule inspired by [5] for quantized neural networks that reduces catastrophic forgetting,
* experimental validation on a 16 kbit crossbar, showing that each 1T1R cell can store nine conductance levels per memristor device,
* an online implementation of the proposed training technique on memristor-based hardware,
* a study of this approach with respect to the limited compute precision and imperfections of the memristor devices.

## II Synaptic Metaplasticity in Quantized Neural Networks

QNNs are a generalization of BNNs that use multiple levels of quantization [11]. In QNNs, synapses consist of two types of weights: quantized weights, \(W_{S}\), and hidden weights, \(W_{H}\). The hidden weights are arbitrary-precision real values, while the quantized weights are discretized to a set of values, \(\mathcal{Q}=\{\mathcal{Q}_{i}\}\) with \(i=1,..,n\). The training process for QNNs involves updating the hidden weights using loss gradients computed with the quantized weights (as outlined in Algorithm 1, lines 2-3).
The quantized weights are chosen as the quantized values closest to the hidden weights: \[W_{S}=\mathcal{Q}_{i}:|W_{S}-W_{H}|=\min_{i=1}^{n}\left\{|\mathcal{Q}_{i}-W_{H}|\right\} \tag{1}\] In [5], it has been proposed that the hidden weights in BNNs may be interpreted as metaplastic states, by introducing an additional component into the hidden-weight update rule. This component provides a memory effect through a meta-function, \(\mathcal{M}(W_{H})\), which modulates the strength of the updates: when \(\mathcal{M}(W_{H})\) is low, the hidden weight is consolidated at its current level, while when it is high, the weight is updated according to the stochastic gradient descent learning algorithm (lines 7-10 of Algorithm 1). The meta-function \(\mathcal{M}(W_{H})\) can be extended to QNNs using the following assumption: \(\mathcal{M}(W_{H})\) decreases as the hidden weight approaches a quantized level and reaches a maximum at the midpoint between two adjacent quantized levels. In the interval between two adjacent quantized levels, of width \(\mathcal{I}_{i}^{D}=\mathcal{Q}_{i+1}-\mathcal{Q}_{i}\), the metaplastic function is defined as a function of the hidden weight value: \[\mathcal{M}(W_{H})=1-\tanh^{2}\left(\frac{2m^{*}}{\mathcal{I}_{i}^{D}}|W_{H}-W_{S}|-m^{*}\right) \tag{2}\] with \(W_{H}\in\mathcal{I}_{i}^{D}\) and \(W_{S}\) the quantized weight (Eq. 1). The \(m^{*}\) parameter controls the steepness of the decay and therefore the rate of consolidation of the hidden weight. An example of the \(\mathcal{M}(W_{H})\) function for four quantized levels and various values of the scalar \(m^{*}\) is shown in Fig. 1c. In this work, all simulations use adaptive moment estimation (Adam) [12]. The training procedure for the QNN with metaplasticity is outlined in Algorithm 1. In this algorithm, \(\mathbf{W_{H}}\) represents the vector of hidden weights, with \(W_{H}\) representing a single component. A similar notation is used for the other vectors.
The batch-norm parameters are represented by \(\theta^{\mathbf{BN}}\), while \(\mathbf{U_{W}}\) and \(\mathbf{U}_{\theta}\) represent the updates to the weights and the batch-norm parameters, respectively. The input and target vectors are represented by \((\mathbf{x},\mathbf{y})\), and the learning rate by \(\eta\). ``` 0:\(\mathbf{W_{H}},\theta^{\mathbf{BN}},\mathbf{U_{W}},\mathbf{U}_{\theta},(\mathbf{x},\mathbf{y}),m^{*},\eta,\mathcal{Q}\) 0:\(\mathbf{W_{H}},\theta^{\mathbf{BN}},\mathbf{U_{W}},\mathbf{U}_{\theta}\) 1:\(\mathbf{W_{S}}\leftarrow\texttt{Approx}(\mathbf{W_{H}},\mathcal{Q})\)\(\triangleright\) Eq. 1 2:\(\mathbf{\hat{y}}\leftarrow\texttt{Forward}(\mathbf{x},\mathbf{W_{S}},\theta^{\mathbf{BN}})\) 3:\(\mathbf{C}\leftarrow\texttt{Cost}(\mathbf{\hat{y}},\mathbf{y})\) 4:\(\partial_{\mathbf{W}}C,\partial_{\theta}C\leftarrow\texttt{Backward}(C,\mathbf{\hat{y}},W_{S},\theta^{\mathbf{BN}})\) 5:\(\mathbf{U_{W}},\mathbf{U}_{\theta}\leftarrow\texttt{Adam}(\partial_{W}C,\partial_{\theta}C,\mathbf{U_{W}},\mathbf{U}_{\theta})\) 6:for\(W_{H}\) in \(\mathbf{W_{H}}\)do 7:if\(U_{W}\cdot(W_{H}-W_{S})<0\)then 8:\(W_{H}\gets W_{H}-\eta U_{W}\mathcal{M}(m^{*},W_{H},W_{S})\)\(\triangleright\) Eq. 2 9:else 10:\(W_{H}\gets W_{H}-\eta U_{W}\) 11:endif 12:endfor ``` **Algorithm 1** Algorithm for metaplasticity using multiple quantized levels. The algorithm has been tested using a multi-layer perceptron (MLP) with two hidden layers of 512 neurons each, trained first on the MNIST dataset [13] (epochs 1-50) and then on Fashion-MNIST [14] (epochs 51-100). The set \(\mathcal{Q}\) contains \(17\) levels in the range \([-1.5,1.5]\). The training experiment was performed using \(m^{*}\) values ranging from 0 to 5. To ensure proper weight initialization, the first \(10\) epochs were performed without metaplasticity (\(m^{*}=0\)).
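A minimal NumPy sketch of the quantization (Eq. 1), the metaplastic function (Eq. 2), and the conditional update at the core of Algorithm 1 (lines 6-12). It assumes equally spaced levels, and replaces the Adam-produced update with a raw `u_w` for simplicity:

```python
import numpy as np

def quantize(w_h, levels):
    """Eq. (1): snap each hidden weight to the nearest quantized level."""
    levels = np.asarray(levels)
    idx = np.argmin(np.abs(levels[None, :] - w_h[:, None]), axis=1)
    return levels[idx]

def meta_factor(w_h, w_s, interval, m_star):
    """Eq. (2): close to 0 near a level (consolidated), close to 1 midway."""
    return 1.0 - np.tanh(2.0 * m_star / interval * np.abs(w_h - w_s) - m_star) ** 2

def metaplastic_update(w_h, u_w, levels, m_star, lr):
    """Core of Algorithm 1: attenuate only the updates that push a hidden
    weight further away from its current quantized level."""
    w_s = quantize(w_h, levels)
    interval = levels[1] - levels[0]        # assumes equally spaced levels
    away = u_w * (w_h - w_s) < 0            # -u_w points away from w_s
    m = meta_factor(w_h, w_s, interval, m_star)
    step = np.where(away, lr * u_w * m, lr * u_w)
    return w_h - step
```

An update pulling the hidden weight back toward its quantized level is applied in full; one pushing it away is scaled down by \(\mathcal{M}\), which is how consolidation emerges.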
For \(m^{*}<2\), the neural network experienced catastrophic forgetting, but in the interval \(m^{*}\in[2,4]\), the network was able to learn both tasks with near-independent task accuracy. However, when \(m^{*}>4\), the consolidation rate is too strong and the accuracy on Fashion-MNIST drops without any significant gain from the lower forgetting rate on MNIST (Fig. 2). The results show that using \(m^{*}=3\) leads to an accuracy of over \(97\) % for MNIST and \(86\) % for Fashion-MNIST. This value of \(m^{*}=3\) is used throughout the rest of the article. These results match those obtained using a two-layer BNN with \(4096\) neurons per layer in previous studies [5].

Fig. 1: **a** The “catastrophic forgetting” problem: a network is trained sequentially with two different training sets (here MNIST and Fashion-MNIST). When learning Fashion-MNIST, the MNIST test accuracy collapses almost to random guessing. **b** The black arrows depict the paths inside the parameter space when using a classic learning sequence of MNIST and Fashion-MNIST, while the green arrows show the paths traversed using the metaplasticity training. **c** Metaplastic function \(\mathcal{M}\) on a set of unequally spaced \(\mathcal{Q}\) levels; it is used to implement metaplasticity on QNNs by modulating the weight updates.

## III Memristor-based implementation

The proposed system (Fig. 3) for training QNNs has two components: a memristive crossbar array for analog in-memory computing (green box) and a high-precision digital computational unit (grey box). Memristors can store intermediate conductance levels, unlike CMOS-based memories (SRAM or DRAM) which store one bit per cell. This allows quantized weights in a QNN to be directly stored as conductance levels in a memristor crossbar array, resulting in compact weight storage.
Additionally, the multiply and accumulate operations necessary during the forward and backward data propagation stages of QNN training can be performed in-place using the fundamental laws of electric circuits: the multiply operation corresponds to Ohm's law, while the accumulate operation corresponds to Kirchhoff's current law. During forward propagation, the neuron activations (\(x_{i}\)) are transmitted to the Source Lines through voltages. The total current flowing through each column is the sum of the products of the weights (\(W_{S,ij}\)) and the activations (\(x_{i}\)) along each Bit Line. In backward propagation, the errors, \(\delta_{i}\), are applied to the Bit Lines and the resulting currents \(I=\sum_{j}W_{S,ji}\delta_{j}\) measured at the Source Lines. These current values are used to compute the gradient with respect to the quantized weights stored in the analog crossbar arrays. The desired hidden weight updates are then calculated based on the gradient computed on the quantized weights and the metaplasticity rule (Algorithm 1, lines 5-11). The new hidden weights \(W_{H}\) are updated in the high-precision digital memory. Finally, the hidden weights are approximated to the available levels \(\mathcal{Q}\), and the memristors in the crossbar are re-programmed by a dedicated programming circuit if the quantized level has changed. To validate the feasibility of the proposed architecture, we fully characterized an analog in-memory-computing circuit in a hybrid CMOS/memristor process [15]. Hafnium oxide (HfO\({}_{X}\)) memristors are fabricated on top of a CMOS foundry \(130\,\mathrm{nm}\) process with four levels of metals. Fig. 4a shows a Scanning Electron Microscope (SEM) image of a fabricated 1-transistor 1-resistor (1T1R) memory cell. The memristor device starts in an extremely low conductance state (pristine state) when manufactured. The device must undergo a one-time "forming" operation to create a first conductive filament.
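The two in-place operations described above reduce, in a software model of the crossbar, to plain matrix products. The conductance matrix below is a small hypothetical example, with signed weights assumed to be already realised in hardware:

```python
import numpy as np

# Hypothetical 4x3 crossbar: rows are Source Lines (inputs), columns Bit Lines.
# Entries are the signed quantized weights W_S realised as conductances.
W_S = np.array([[ 1.0, -2.0,  0.5],
                [ 0.0,  1.5, -1.0],
                [ 2.0,  0.0,  1.0],
                [-0.5,  1.0,  0.0]])

# Forward: activations applied as voltages on the Source Lines; Ohm's law
# gives per-device currents, Kirchhoff's current law sums them per Bit Line.
x = np.array([0.2, -0.1, 0.4, 0.3])
forward_currents = x @ W_S          # I_j = sum_i x_i * W_S[i, j]

# Backward: errors applied to the Bit Lines, currents read on the Source Lines.
delta = np.array([0.1, -0.2, 0.05])
backward_currents = W_S @ delta     # I_i = sum_j W_S[i, j] * delta_j
```

Each pass therefore costs a single analog read of the array, independent of the number of weights in it.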
Once formed, memristors can be programmed to multiple conductance levels. A set operation programs the device into the High-Conductance State (HCS), while a reset operation sets it into the Low-Conductance State (LCS). The conductance value is controlled by modulating the set programming current \(I_{CC}\), defined by the gate voltage applied on the selector transistor [16]. We characterized a \(16\,\mathrm{kb}\) array of 1T1R devices, programming them with eight different compliance currents (HCS modulated by \(I_{CC}\)) and the LCS state; the corresponding conductance distributions are shown in Fig. 4b. However, memristors are prone to large conductance variability, resulting in a broad statistical distribution of conductance values after programming. A second challenge for the proposed architecture is the limited endurance (i.e., number of Set-Reset operations) of the devices. We evaluated the efficacy of our proposed architecture through a series of experiments and simulations. We trained a two-layer perceptron on MNIST for \(50\) epochs, using the first \(10\) epochs as a pre-training phase (i.e., no metaplasticity, \(m^{*}=0\)) before switching to Fashion-MNIST for the next \(50\) epochs. We utilized the same network architecture discussed in the previous section. Quantized weights are encoded as the difference in conductance between two adjacent memory cells, enabling storage of both positive and negative weights [15]. Since each memristor can be programmed into nine levels (8 HCS levels and 1 LCS level), each weight can be represented by 17 levels, with the "zero" level obtained by matching the conductances of two memristors in two adjacent cells. The initial values for the hidden weights are determined by drawing samples from Gaussian distributions with a mean value set between two adjacent quantized levels. These hidden weights are then quantized according to Eq. (1) and programmed as conductance values.
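The differential weight encoding just described can be sketched as follows. The conductance values are hypothetical placeholders; only the structure (one LCS level plus eight \(I_{CC}\)-modulated HCS levels per device, giving 17 signed weight levels per pair) follows the text:

```python
import numpy as np

# Hypothetical per-device conductance levels (arbitrary units): one LCS level
# plus eight HCS levels set by increasing compliance currents I_CC.
DEVICE_LEVELS = np.array([0.0] + [10.0 + 5.0 * k for k in range(8)])  # 9 levels

def encode(level):
    """Map a signed weight level in {-8, ..., 8} to a (G+, G-) device pair."""
    if level >= 0:
        return DEVICE_LEVELS[level], DEVICE_LEVELS[0]   # zero: matched pair
    return DEVICE_LEVELS[0], DEVICE_LEVELS[-level]

def decode(g_plus, g_minus):
    """The stored weight is the conductance difference of the two cells."""
    return g_plus - g_minus
```

Two 9-level devices thus yield \(2\times 8+1=17\) distinct weight values, matching the 17 quantized levels used in training.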
Fig. 3: Schematic of the on-chip mixed analog/digital learning architecture. The analog in-memory computing block has several crossbar arrays programmed in a multi-level fashion to store the quantized weights. This analog block performs the _Forward_ (blue) and _Backward_ (red) propagations. The resulting values are used to compute the hidden weight updates stored in the digital memory. The memristor conductance values are updated accordingly.

Fig. 2: **a** Impact of the \(m^{*}\) factor on the MNIST and Fashion-MNIST sequential learning: accuracy plot after \(50\) epochs. **b** Sequential learning of MNIST and Fashion-MNIST with \(m^{*}=2.2\). Comparison with a BNN with two hidden layers of \(4096\) neurons each. **c** Sequential learning of MNIST and Fashion-MNIST with \(m^{*}=3\). Baseline: BNN with two hidden layers of \(4096\) neurons each.

These conductance values are read from the hardware and used in the simulation stage to compute the hidden weight updates through software simulations. The conductance values are then updated as necessary, with single-shot pulses applied without adjusting for the difference between the desired and observed conductance change. At the end of each training epoch, the conductance values of all memristors are read from the array and used to evaluate the classification performance. The results, shown in Fig. 5a, indicate that the memristor conductance variability does not significantly affect accuracy. This is comparable to other works showing that noise added during training does not degrade, and can in some cases even enhance, the performance of QNNs [17, 18, 19]. The network achieves a maximum accuracy of \(97.47\pm 0.33\) % for MNIST and \(86.09\pm 0.33\) % for Fashion-MNIST. This suggests that metaplasticity in QNNs is robust to memristor conductance variability during training, making on-chip learning a viable option.
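The effect of single-shot programming without a program-and-verify loop can be modelled by perturbing the target conductance with device-to-device variability. The Gaussian noise model and its magnitude below are our own simplifying assumptions, not measured device data:

```python
import numpy as np

rng = np.random.default_rng(0)

def program_single_shot(target_g, sigma=0.05):
    """Hypothetical model of single-shot programming: the achieved conductance
    deviates from the target by Gaussian variability, and no verify loop
    corrects the difference between desired and observed conductance change."""
    return target_g + rng.normal(0.0, sigma, size=np.shape(target_g))
```

Training with such perturbed conductances is what the combined hardware/software experiment effectively probes.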
In addition to evaluating the accuracy, we computed the number of programming operations required throughout the experiment while learning both tasks. We plot the percentage of devices in the network against the number of programming operations they have undergone in Fig. 5b. The figure shows that the majority of devices (\(76.12\) %) require fewer than \(25\) operations, with only a small number of memristors (\(10.16\) %) requiring more than \(50\) programming operations. This is orders of magnitude lower than the endurance of memristors, which has been measured to be around \(10^{5}\) programming cycles [20]. The low number of programming operations results from updating only the hidden weights at each iteration of the training algorithm, while re-programming memristors only if the associated quantized level changes. It is worth noting that our model has \(1.3\cdot 10^{6}\) devices for implementing quantized weights and reaches similar accuracy on MNIST and Fashion-MNIST compared to a BNN with \(20\cdot 10^{6}\) binary weights. In terms of memristor-based implementations, the proposed algorithm allows for a reduction in memory footprint by a factor of \(15\times\) and \(30\times\) compared to a binarized neural network implemented using a one-transistor one-memristor cell (1T1R) [21] and a two-transistor two-memristor cell (2T2R) [22], respectively (Table I).

## IV Conclusion

In this work, we extended the synaptic metaplasticity training algorithm for BNNs to work with QNNs, implementing it on a mixed analog/digital platform using hafnium oxide memristor crossbars. Our combined software/hardware experiment showed robustness to the main memristor limitations (computational precision, hardware imperfection, and endurance) and achieved equivalent performance to software implementations. We also utilized multi-level programming for compact weight memorization, resulting in a \(15\times\) reduction in memory compared to BNNs.
Our findings enable QNNs to perform online synaptic consolidation, avoiding catastrophic forgetting. Our work opens up possibilities for developing embedded hardware for continual learning.

## Acknowledgment

This work was supported by European Research Council consolidator grant DIVERSE (101043854) and by the H2020 MeM-Scales project (871371). It also benefits from a France 2030 government grant managed by the French National Research Agency (ANR-22-PEEL-0010) and the Horizon Europe METASPIN project (101098651).
2307.05717
Towards Mobility Data Science (Vision Paper)
Mobility data captures the locations of moving objects such as humans, animals, and cars. With the availability of GPS-equipped mobile devices and other inexpensive location-tracking technologies, mobility data is collected ubiquitously. In recent years, the use of mobility data has demonstrated significant impact in various domains including traffic management, urban planning, and health sciences. In this paper, we present the emerging domain of mobility data science. Towards a unified approach to mobility data science, we envision a pipeline having the following components: mobility data collection, cleaning, analysis, management, and privacy. For each of these components, we explain how mobility data science differs from general data science, we survey the current state of the art and describe open challenges for the research community in the coming years.
Mohamed Mokbel, Mahmoud Sakr, Li Xiong, Andreas Züfle, Jussara Almeida, Taylor Anderson, Walid Aref, Gennady Andrienko, Natalia Andrienko, Yang Cao, Sanjay Chawla, Reynold Cheng, Panos Chrysanthis, Xiqi Fei, Gabriel Ghinita, Anita Graser, Dimitrios Gunopulos, Christian Jensen, Joon-Seok Kim, Kyoung-Sook Kim, Peer Kröger, John Krumm, Johannes Lauer, Amr Magdy, Mario Nascimento, Siva Ravada, Matthias Renz, Dimitris Sacharidis, Cyrus Shahabi, Flora Salim, Mohamed Sarwat, Maxime Schoemans, Bettina Speckmann, Egemen Tanin, Xu Teng, Yannis Theodoridis, Kristian Torp, Goce Trajcevski, Marc van Kreveld, Carola Wenk, Martin Werner, Raymond Wong, Song Wu, Jianqiu Xu, Moustafa Youssef, Demetris Zeinalipour, Mengxuan Zhang, Esteban Zimányi
2023-06-21T17:17:18Z
http://arxiv.org/abs/2307.05717v4
# Towards Mobility Data Science (Vision Paper)

###### Abstract

Mobility data captures the locations of moving objects such as humans, animals, and cars. With the availability of GPS-equipped mobile devices and other inexpensive location-tracking technologies, mobility data is collected ubiquitously. In recent years, the use of mobility data has demonstrated significant impact in various domains including traffic management, urban planning, and health sciences. In this paper, we present the emerging domain of mobility data science. Towards a unified approach to mobility data science, we envision a pipeline having the following components: mobility data collection, cleaning, analysis, management, and privacy. For each of these components, we explain how mobility data science differs from general data science, survey the current state of the art, and describe open challenges for the research community in the coming years.

Mohamed Mokbel, Mahmoud Sakr, Li Xiong, Andreas Züfle, Jussara Almeida, Taylor Anderson, Walid G. Aref, Gennady Andrienko, Natalia Andrienko, Yang Cao, Sanjay Chawla, Reynold Cheng, Panos Chrysanthis, Xiqi Fei, Gabriel Ghinita, Anita Graser, Dimitrios Gunopulos, Christian S. Jensen, Joon-Seok Kim, Kyoung-Sook Kim, Peer Kröger, John Krumm, Johannes Lauer, Amr Magdy, Mario A. Nascimento, Siva Ravada, Matthias Renz, Dimitris Sacharidis, Flora Salim, Mohamed Sarwat, Maxime Schoemans, Cyrus Shahabi, Bettina Speckmann, Egemen Tanin, Xu Teng, Yannis Theodoridis, Kristian Torp, Goce Trajcevski, Marc van Kreveld, Carola Wenk, Martin Werner, Raymond Wong, Song Wu, Jianqiu Xu, Moustafa Youssef, Demetris Zeinalipour, Mengxuan Zhang, and Esteban Zimányi. 2023. Towards Mobility Data Science (Vision Paper). _ACM Trans. Spatial Algorithms Syst._ 1, 1 (August 2023), 34 pages. [https://doi.org/XXXXXXXXXXXXXX](https://doi.org/XXXXXXXXXXXXXX)

## 1. Introduction

The volume of mobility data being collected has been steadily increasing since the advent of affordable personal location-enabled mobile devices. Examples of mobility data continuously generated and collected in huge volumes include: (a) individual sporadic locations obtained from mobile app data and location-based social networks, (b) individual pedestrian, biking, or driving trajectories constrained by the underlying sidewalks, biking trails, and road network, respectively, (c) indoor individual or asset tracking data obtained from RFID and Bluetooth devices, (d) athletes' movement data in various sports obtained from wearable devices, (e) public transportation, taxi, ride sharing, and delivery logistics trajectories obtained by location-tracking devices and specially designed app services, (f) aircraft and vessel trajectories moving in an unconstrained environment (i.e., no underlying road network) obtained by air and sea traffic monitoring services, and (g) animal tracking data, moving freely in space, obtained from physically tagged and remotely sensed animals. Generally speaking, for each moving object, mobility data is typically available in the form of a sequence of (_location, timestamp_) pairs. The _location_ attribute could be as simple as a point, represented either by _latitude_ and _longitude_ coordinates or as relative coordinates with respect to the underlying space. The _location_ attribute could also be an area, which can represent the mobility of objects with spatial extents, e.g., flocks or group movement.

Figure 1. The Mobility Data Science Pipeline

The ability to understand and analyze mobility data is crucial for various important sectors and applications. In transportation and traffic management, analyzing traffic data through vehicle mobility helps in predicting accidents [214, 295] and traffic congestion [354], and in better route planning [68, 165].
In ride sharing and delivery logistics applications, analyzing trip mobility data helps in data-driven eco-route planning, which results in huge cost and energy savings [51, 132]. In location-based services, analyzing people's movements around the city significantly helps in trip planning activities [54, 298], finding popular tourist sites and restaurants [56, 156], and data-driven routing and querying [90, 299]. In indoor navigation, understanding how people move indoors helps in understanding the traffic for various stores inside a mall, which is needed in various market research studies [153, 358]. In urban planning, driving data can significantly help in building highly accurate, reliable, and annotated maps [2, 216] as well as in deciding on good locations for various facilities, e.g., restaurants, retail stores, and clinics [64, 279]. In social computing, analyzing how people move in cities and regions helps in understanding the demand for infrastructure and energy as a means of reducing inequalities [269]. In disaster response, analyzing crowd movement helps in preparing for natural disasters through rescue and evacuation efforts [94, 141]. In health informatics, connected wearables can monitor and analyze the movement of elderly people, allowing for timely, and potentially life-saving, interventions [185, 270]. In pandemic prevention, the ability to track individuals in a privacy-preserving manner allows for contact tracing, which was deemed a cornerstone in limiting pandemic spread [21, 211, 382]. The sheer volumes of mobility data, along with the immense need for mobility data analysis in various applications, call for employing a complete Data Science pipeline [106, 255, 352] over mobility data. This includes the whole pipeline of Data Science applications, starting from the data storage and management infrastructure and going through data collection, data cleaning and preprocessing, and data analysis.
Unfortunately, this is not straightforward, as current Data Science systems, tools, and algorithms are not directly applicable to mobility data. This is mainly due to the fact that these systems, tools, and algorithms are designed in a generic way to support any data type, and hence they do not lend themselves to the distinguishing characteristics of mobility data. Examples of such characteristics include the spatial and temporal dimensions of the data, the rate of updates, and the privacy requirements. In particular, mobility data is always spatial, where nearby objects are more related to each other. This is unlike traditional data, where the concept of nearness and locality is not taken into account. Also, similar to time series data, mobility data is temporal, where one object may have hundreds of updates to its location, and all updates are related to each other (e.g., one trajectory). This is again unlike traditional data, where temporal updates of a single object are not frequent and older updates would be of less importance. Similar to streaming data, mobility data has a high frequency of updates, which is not supported in typical data science applications. Finally, mobility data is more privacy-sensitive. While privacy preservation in traditional data can be achieved by removing (quasi-)identifier attributes, in mobility data, locations by themselves are considered private information that can reveal not only the users' identities, but also their behavior, lifestyle, medical conditions, and workplaces. Motivated by the ubiquity and sheer volume of mobility data, the importance of mobility applications, and the lack of support from current data science pipelines, this paper presents our vision towards having a complete Mobility Data Science Pipeline.
We present _Mobility Data Science_ as an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract or extrapolate knowledge and insights from potentially noisy, structured and unstructured mobility data, and applies knowledge from mobility data across a broad range of application domains. While, currently, the community of developers, practitioners, and researchers dealing with mobility data uses off-the-shelf data science techniques and systems to collect, clean, manage, and analyze their mobility data, we firmly believe that this results in subpar performance. We urge this community to build its own mobility data science pipeline to better serve its own purpose. Our vision makes the case for the need for a mobility data science pipeline along with the challenges that need to be addressed to realize the vision. Figure 1 depicts our vision for the Mobility Data Science pipeline, composed of three main components, _data collection and acquisition_, _data cleaning and preprocessing_, and _data analysis_, all geared to serve a myriad of practical mobility data applications. All components are supported by infrastructure composed of _data management and storage_ and _data privacy_ modules. Though the components of the mobility data science pipeline are broadly similar to those of a typical data science pipeline, each component is fundamentally different. For example, the _mobility data collection_ component (Section 2) is very different from a typical data collection component as it needs to address inherent challenges in mobility data, including location privacy, data sparseness, and the extreme lack of publicly available data compared to typical datasets for other domains.
The _mobility data analysis_ component (Section 4) needs to extend existing statistical analysis, visual analytics, and machine learning techniques to support the nature of mobility data with its spatial, temporal, and streaming properties. The _mobility data management_ infrastructure (Section 5) would need to be extended to support the high update rates and time-series nature of mobility data. Such a mobility data science pipeline requires _mobility data privacy_ (Section 6) for all the pipeline components to protect individuals and vulnerable groups. These five components combine into a full mobility data science cycle, as shown in Figure 1, that enables mobility data science to have broad impacts on everyone's lives through a plethora of _mobility data applications_ (Section 7).

## 2. Mobility Data Collection

The abundant availability of real data is a cornerstone of any data science application, and mobility data science applications are no exception. However, it is much easier to collect and find large amounts of data for general data science applications than it is for mobility data science. In particular, for data science applications, well-established research in anonymizing personal data allows wide data sharing. This is to the extent that governments have released various datasets to the public (e.g., Data.gov) (Sarkin et al., 2017). In addition, companies already collect their own inventory data that does not include any personal identifiers, and hence it is suitable to be fed to data science applications. On the other hand, data-driven mobility data science research has been in a constant struggle with the need for available mobility data. A main reason is that non-aggregated individual human location data is considered personally identifiable information, as it may lead to tracing an individual's identity.
For example, it has been shown that only a few spatial locations are sufficient to uniquely identify individuals even among a large population of people (Sarkin et al., 2017; Sarkin et al., 2017). As a result, most datasets are collected in aggregated form, which hinders the deployment of various mobility data science applications. This section discusses current efforts in and challenges of mobility data collection.

### Efforts in Mobility Data Collection

Before the wide availability of personal digital devices, human mobility data collection was expensive and therefore datasets were very sparse. With the advent of personal location-enabled devices, many people's movements have started leaving digital traces that are being collected either by industry as a means of providing location-based services (Sarkin et al., 2017; Sarkin et al., 2017) or by governmental entities as a means of data analysis, e.g., traffic-related studies (Sarkin et al., 2017; Sarkin et al., 2017). However, this did not result in a similar explosion of publicly available mobility data, mainly due to privacy and data sharing concerns. Current efforts in releasing public non-aggregated mobility data are mainly limited to small datasets covering small regions, with locations that can reveal one's whereabouts removed. These mostly include trips obtained from taxis, ride sharing services, or public transportation. Some of these datasets include detailed trajectory data for the following cities (ordered alphabetically): (1) _Athens_[38; 241]. 500K trajectories collected over 5 days in downtown Athens, Greece, (2) _Beijing 1_[121; 374]. 17+K trajectories with 26 Million GPS points over three years in Beijing, China, (3) _Beijing 2_[296; 355]. 10+K trajectories with 15 Million GPS points over one week in Beijing, China, (4) _Rio_[95]. 12+K Buses with detailed trajectories of 118+ Million GPS points over 30 days in Rio de Janeiro, Brazil, (5) _Rome_[52].
320 Taxis with detailed trajectories of 21+ Million GPS points over 30 days in Rome, Italy, (6) _San Francisco_ 1 [240]. 536 Taxis with detailed trajectories of 11+ Million GPS points over 30 days in San Francisco, CA, USA, (7) _San Francisco_ 2 [4]. 20+K detailed trajectories with 5+ Million GPS points in San Francisco, CA, USA, (8) _Shenzhen_[321; 360]. 664 Taxis with detailed trajectories of 1.1+ Million GPS points over one day in Shenzhen, China, (9) _Singapore_[146; 342]. 84K trajectories with 80+ Million GPS points over a month in Singapore. Other datasets only include the origin and destination of each trajectory. Examples include the following cities: (1) _Austin_[257]. 1.5 Million trips over a period of 10 months in Austin, TX, USA, (2) _Guangdong_[361; 349]. 2.5 Million trips over one day in Guangdong Province, China, (3) _NYC_[227]. 1.5 Million Taxi trips over a period of 6 months in New York City, NY, USA, (4) _Porto_[242]. 426K Taxi trips over three months in Porto, Portugal. Other than trip and trajectory road network data, there is a wealth of available biking data across the world, including tens of millions of trips in Bay Area [195], Boston [194], Chicago [96], Columbus [82], London [189], Los Angeles [48], Madrid [197], Minneapolis [222], New York City [80], Philadelphia [150], Portland [49], and Washington D.C. [58]. There are also publicly available marine traffic data that include detailed vessel trajectories (e.g., [236]), sports datasets for basketball and soccer that include a variety of events that took place in major leagues within one season [234], and indoor data about the behaviors of nearly 30 Year-10 students and their teachers collected over four weeks in Australia, with spatial reference (associations to rooms) and highly granular wearable data [114]. There are, however, some large-scale aggregated datasets at a coarse granularity, which can help in high-level analysis, but not in gaining detailed insights into mobility data.
Examples of such aggregate data include origin-destination employment statistics in the USA, which contain home-to-work commuting flows aggregated to the census tract level [125], cell phone trace datasets capturing the locations of individuals aggregated to their nearest cell tower [318], foot traffic data of check-ins of 35 million anonymized mobile devices in the USA aggregated to census block groups [265], and a global database about aggregate indoor occupant behavior, composed of 34 datasets from 15 countries and 39 institutions, collected by occupancy sensors that measure the occupancy count of each space being monitored [98]. An additional source of human mobility data is location-based social network (LBSN) data. LBSN data captures both 1) discrete check-ins between users and locations, and 2) a social network between users. This dimension of location bridges the gap between the physical world and online social networking services [371]. However, it has been shown that existing LBSN datasets [75; 346; 188] are too small to broadly understand, analyze, and predict human behavior [159; 175]. The lack of available mobility data, combined with the need to stress test various research ideas, has motivated various research groups to either develop their own data simulators or develop publicly available simulators that can also be used by other researchers for benchmark datasets. However, such simulators were mainly designed to test specific aspects of research, and are not meant to be representative of real mobility data. For example, various simulators were mainly designed to test new index structures for mobility data, query processing algorithms, and system infrastructure scalability for managing spatio-temporal data (e.g., [18; 55; 99; 164; 209; 237; 380; 384]). Within the transportation community, finer-granularity simulators (e.g., [44; 142]) were proposed to study traffic infrastructure, but none of them is meant to provide a comprehensive mobility study.
### Challenges in Mobility Data Collection

This section presents some of the challenges in mobility data collection that the community needs to address towards realizing the vision of mobility data science.

#### Challenge 1. Mobility Data Privacy

In most cases, (human) mobility data is sensitive and considered personally identifiable information. This raises major privacy concerns about data sharing. Hence, any attempt to collect fine-granularity detailed trajectory or human mobility data must first address the privacy challenge. Though the general topic of data privacy has been well studied in the literature with practical solutions, such solutions are not directly applicable to the case of mobility data. In particular, mobility data gives rise to the TUL (Trajectory-User Linking) problem [117; 202]. To protect users' actual locations, while preserving meaningful mobility information for various learning tasks, one may wish to generate realistic motions based on real-world mobility datasets [375]. Since privacy is a core problem in mobility data that not only impacts data collection, but also all other components of the mobility data science pipeline, we dedicate Section 6 to discussing mobility data privacy in detail.

#### Challenge 2. Mobility Data Bias

Mobility data collection procedures suffer from all kinds of bias. For example, mobile application data and mobile phone network data are biased against people who do not use smartphones or use prepaid plans. Most traffic counting sensors are installed to count cars but do not count pedestrians, cyclists, wheelchairs, or similar. Cells in mobile phone networks vary widely in size, and the data traces that are usually collected in cellular networks are themselves at cell granularity. This affects rural areas with larger cells more than urban areas. Volunteered tracking data is biased towards technically savvy people. Sports tracking data is biased towards the health-conscious middle and upper class.
It is important to understand, measure, and mitigate data bias in mobility datasets to ensure that actions and policies that are based on mobility data science results are equitable, fair, and include vulnerable populations [277].

#### Challenge 3. Incentives for Data Sharing

Users need to have good incentives to share their locations. To some degree, users implicitly agree to share their locations with commercial entities to get location-based services, ride sharing, cell phone coverage, delivery, and other services. However, it is understood that users would be reluctant to publicly share their mobility traces. Meanwhile, the biking community has shown a great example of sharing biking trails. A main reason is that, in many places of the world, most of these trails are not really home-to-work commuting, but rather outdoor activities. Hence, sharing biking trails helps fellow bikers in knowing the conditions of the trails, which is a great incentive for sharing. More incentives need to be given for drivers to share their mobility traces, even for sporadic trips that do not lead to identifiable locations. Sharing could be limited to part of a trajectory, with rewards given based on the length and resolution of the shared data. A gamification concept may be exploited to encourage more participants to share.

#### Challenge 4. Simulated Mobility Data

The dire need for mobility data, along with the difficulty of obtaining it, has made it apparent that simulated synthetic data is immensely needed to enrich and train mobility data science applications. However, the challenge is to go beyond earlier attempts at simulating data for testing very specific techniques, towards simulating data for the general purpose of having realistic life scenarios. Empowered by modern computational capabilities that make it possible to simulate large populations, the mobility community should work with social scientists to create realistic individual-level human mobility data.
Lessons have been learned from the experience of the deep learning community by applying generative adversarial networks (GANs) to trajectory generation [118; 362]. However, it is as yet unclear how to measure the realism of mobility data. If synthetic mobility data is too realistic, for example, due to training on real human trajectories, it may invade someone's privacy if, for instance, it shows where members of a given household actually visit. On the flip side, benchmark data that is too disconnected from the real world and does not represent realistic human behavior would not allow models to generalize to the real world.

## 3. Mobility Data Cleaning

Until the early 21st century, location data and mobility data available for geographic information science (GIS) was mainly collected, curated, standardized [107; 108], and published by authoritative sources such as the United States Geological Survey (USGS) [313]. Now, data used for mobility data science is often obtained from sources of volunteered geographic information (VGI) [229; 292]. Such data is contributed by millions of individual users (more than ten million contributors in the case of OpenStreetMap [229]) and is rarely curated. Mobility data collected from such sources is highly uncertain due to the physical limitations of sensing devices and the obsoleteness of observations, and in many cases is plainly incorrect due to deliberate misinformation [213]. Consequently, our ability to unearth valuable knowledge from large sets of mobility data is often impaired by the uncertainty of the data, which in geography has been named "the Achilles heel of GIS" [124]. Data cleaning and preprocessing is a cornerstone of all data science. In fact, it has been reported that data scientists spend more than 80% of their time on data cleaning and preparation [220; 335].
As a result, there are huge efforts in the data science community dedicated to developing various data cleaning algorithms [77; 113; 248] and full-fledged systems [88; 249]. Mobility data is no exception in terms of its need for data cleaning and preparation procedures, but for numerous reasons, its cleaning and preparation yield unique challenges. This section discusses current efforts and challenges of mobility data cleaning.

### Efforts in Mobility Data Cleaning

A recent survey [174] and data quality assessment tool [126] have discussed various sorts of errors that negatively impact data quality in spatial and mobile environments. Motivated by the inaccuracy of location tracking devices, several efforts were dedicated to addressing: (a) the inherent spatial inaccuracy of GPS devices and (b) the uncertainty of moving objects' whereabouts between each two known locations, which is a result of low sampling rates due to bandwidth and battery limitations. As the spatial inaccuracy indicates erroneous GPS coordinates, the efforts to identify and correct such coordinates have focused on either finding and eliminating outliers or map matching all coordinates to an underlying fixed and trusted infrastructure (e.g., a road network map). For the case of map matching, existing efforts aim to match/snap all GPS traces to an underlying road network [53; 61; 62; 144; 179; 332]. Proposed techniques vary from simply snapping each point to its nearest road to applying Markov chains to identify the most probable road segment that each point should be snapped to. In case there is no underlying road infrastructure (e.g., marine transportation or animal movement), outlier detection techniques are used to identify and remove erroneous points [305]. Irrespective of the collection method and device settings, there is also indispensable uncertainty in movement data caused by its discreteness. Since time is continuous, the data cannot refer to every possible instant.
For any two successive instants, there is a temporal gap where the whereabouts of the moving objects are unknown. To overcome such location uncertainty, several efforts were dedicated to modeling the uncertainty of mobility data, surveyed in [73; 383; 381].

### Challenges in Mobility Data Cleaning

This section presents some of the challenges in mobility data cleaning that the community needs to address towards realizing the vision of mobility data science.

#### Challenge 5. Inaccuracy in the Movement Space Infrastructure

A unique challenge in mobility data is that, in many cases, its reference points are the ones that are inaccurate. In particular, mobility data that represents movement on a road network may be more accurate than the road network itself. Road networks, like any other type of data, suffer from all sorts of inaccuracy, and may not even be available in many places [200; 217; 306]. In fact, Microsoft has recently announced that it has found more than one million kilometers of roads missing from current maps [203]. This is why there is a whole area of industrial and academic research on map inference, which aims to infer (all or missing parts of) the road network from either satellite images [1; 39; 72; 293] or trajectory data [47; 65; 261; 289]. However, almost all of these techniques focus on making maps that are accurate in terms of topology. More efforts are needed on map inference algorithms that go beyond inferring the map topology to inferring map metadata (e.g., road speed, traffic lights, number of lanes, and turns), without which mobility data cannot be accurate, as its road network reference itself is missing important data. A major step towards cleaning mobility data would be to first clean its reference map.

#### Challenge 6. Filling in Temporal Mobility Gaps

As mentioned earlier, many efforts are dedicated to modeling the uncertainty of a moving object's whereabouts between each two consecutive time instances.
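The simplest conceivable gap-filler is linear interpolation between two known fixes. The sketch below uses hypothetical timestamps and coordinates; it deliberately ignores the road network, which is exactly why it serves only as a baseline for more sophisticated imputation.

```python
# Minimal sketch: filling the temporal gap between two known GPS fixes
# by linear interpolation. Timestamps and coordinates are hypothetical.

def interpolate(fix_a, fix_b, t):
    """fix_a, fix_b: (timestamp, lat, lon); t: query time between them."""
    ta, lat_a, lon_a = fix_a
    tb, lat_b, lon_b = fix_b
    w = (t - ta) / (tb - ta)  # fraction of the gap elapsed at time t
    return (lat_a + w * (lat_b - lat_a), lon_a + w * (lon_b - lon_a))

a = (0, 40.0, -73.0)    # known fix at t = 0 seconds
b = (60, 40.6, -73.6)   # next known fix 60 seconds later
print(interpolate(a, b, 30))  # midpoint, approximately (40.3, -73.3)
```

A straight line through a river or a building is obviously wrong, which motivates the road-network-based and data-driven imputation efforts discussed in this challenge.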
However, uncertainty poses different challenges to downstream functions and applications, including the need to develop new techniques for indexing, query processing, and data analysis for various uncertainty models. One way to overcome this is to try to infer the actual whereabouts of a moving object between any two time instances with known locations. There are already several efforts to insert artificial points between each two consecutive trajectory points, with the promise that these points act as if the trajectory was collected at a very high sampling rate. Such a process has had various names, including trajectory interpolation [190; 370], trajectory completion [180], trajectory data cleaning [359], trajectory restoration [173], trajectory map matching [53; 191], trajectory recovery [327; 336], and trajectory imputation [103]. However, the large majority of such work relies on matching the trajectory points to the underlying road network, where imputation becomes finding the road network shortest path between each two consecutive trajectory points. Unfortunately, this is not applicable to cases where the road network is unknown, untrusted, or inaccurate. Hence, more recent attempts try to do data-driven trajectory imputation without relying on the underlying road network [103; 109; 158; 247; 327]. However, such techniques are either not scalable to city-scale trajectory datasets, or require too much dense historical data to drive their imputation process. There is an immense need to develop scalable, accurate, and fine-grained imputation that almost mimics a continuous data stream of trajectory locations.

## 4. Mobility Data Analytics

Spatial data is special (Krishnan et al., 2017). Unlike non-spatial features, location attributes (such as longitude and latitude) rarely exhibit linear or other simple functional relationships to variables of interest. It rarely makes sense to model a variable of interest directly in relation to spatial attributes.
Instead, it is distances that matter. According to Tobler's first law of geography, "everything is related to everything else, but closer things are more related than things that are far apart" (Tolley, 2018). For mobility data, proximity is further extended with time, i.e., objects that are close in space and time. In addition to this concept of spatiotemporal autocorrelation, what makes mobility data even more challenging to handle is that it is often observed from humans, whose behavior can be irrational and difficult to explain. As Nobel Prize laureate Murray Gell-Mann famously said, "Think how hard physics would be if particles could think" (Tolley, 2018). But unlike in physics, the "particles" of interest here are often humans who can think. Data collection sensors have the capability to capture the spatiotemporal locations of moving objects, but not their behavioral aspects. These difficulties require new paradigms, techniques, and algorithms that can analyze and learn from spatiotemporal data and that can explain and predict the associated behavior. This section discusses current efforts and challenges of mobility data analytics.

### Efforts in Mobility Data Analytics

Mobility data analytics has already gained momentum in research in recent years. Dedicated workshops exist in major conferences, including the ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data (BigSpatial) since 2011 (Tolley, 2018), the Big Mobility Data Analytics (BMDA) workshop at EDBT since 2018 (Tolley, 2018), and the ACM SIGSPATIAL International Workshop on Animal Movement Ecology and Human Mobility (HANIMOB) at SIGSPATIAL since 2021 (Tolley, 2018). Surveys on the status of research exist in (Tolley, 2018; Krishnan et al., 2017).
Mobility data analytics encompasses various application domains and involves analyzing data from different sources such as urban (Krishnan et al., 2017; Krishnan et al., 2017), maritime (Krishnan et al., 2017; Tolley, 2018), aviation (Krishnan et al., 2017; Tolley, 2018), animal movement (Tolley, 2018; Tolley, 2018), and indoor movement (Krishnan et al., 2017; Tolley, 2018). Among these different themes, urban mobility stands out with a fairly large body of research, including green routing (Krishnan et al., 2017), traffic anomaly detection (Tolley, 2018), hot spot and hot path analysis (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), road traffic prediction (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), and travel time estimation (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Trajectories of moving objects have been used as a means to create and continuously update the road network (Krishnan et al., 2017; Krishnan et al., 2017; Tolley, 2018). Public transport systems also collect ticketing data in the form of passenger check-ins, sometimes also associated with check-outs. This data has been shown to be very useful to transit planners in understanding passenger demand and movement patterns in daily operations as well as in the strategic long-term planning of the network (Krishnan et al., 2017; Tolley, 2018; Tolley, 2018). Personal mobility of individuals is also a subject of analysis, which includes activity recognition (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), personalized routing (Krishnan et al., 2017), matching with ride-sharing services (Krishnan et al., 2017; Krishnan et al., 2017), and crowd-sourcing (Krishnan et al., 2017; Krishnan et al., 2017).
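Several of these urban analyses, hot spot analysis in particular, boil down to aggregating point observations over spatial cells. A minimal, hedged sketch (the pickup coordinates and cell size are hypothetical):

```python
import math
from collections import Counter

# Sketch: grid-based hot-spot analysis. Points are binned into fixed-size
# cells; the densest cells are the "hot spots". Cell size is an assumption.

def hot_cells(points, cell_size=0.01, top_k=1):
    """Return the top_k densest cells as ((cell_x, cell_y), count) pairs."""
    counts = Counter(
        (math.floor(lat / cell_size), math.floor(lon / cell_size))
        for lat, lon in points
    )
    return counts.most_common(top_k)

# Hypothetical taxi pickup locations; three cluster in one downtown cell.
pickups = [(40.701, -74.012), (40.702, -74.011), (40.703, -74.013),
           (40.750, -73.990)]
print(hot_cells(pickups))
```

Production hot-spot methods replace this naive binning with statistically grounded techniques (e.g., spatial scan statistics or kernel density estimation), but the aggregation skeleton is the same.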
While a significant portion of research focuses on understanding and analyzing data through analytics, there are also important efforts dedicated to developing generic analysis tools for spatio-temporal data that are agnostic to the application domain. Efforts on generic methods for mobility data analysis include, among many others, trajectory clustering (Krishnan et al., 2017; Krishnan et al., 2017), trajectory similarity measures (Krishnan et al., 2017), outlier detection (Krishnan et al., 2017; Tolley, 2018), transportation mode classification (Krishnan et al., 2017; Krishnan et al., 2017), spatiotemporal pattern detection (Krishnan et al., 2017; Tolley, 2018), and trajectory completion (Krishnan et al., 2017; Tolley, 2018). However, despite these many research efforts towards analyzing mobility data, there is a lack of common data analysis tools and systems. The scientific software environment for mobility data analysis is rather fragmented. For example, (Krishnan et al., 2017) lists 58 packages in their review of R packages for movement, and (Krishnan et al., 2017) reviews seven Python libraries for movement data analysis and visualization. Recent years have seen a notable increase in research on deep learning for mobility data analysis [193; 339]. This has brought an increased adoption of various paradigms and (adapted versions of) architectures used in other areas where deep learning has brought improvements, in tasks such as clustering/classification [205], prediction [168], recommendation [40], and information propagation [377]. For example, Generative Adversarial Network (GAN) based architectures have recently been used to learn representations of trajectories and to generate synthetic trajectories [115; 362; 116]. Given the introduction of Transformers [315], transformer-based approaches have also been used for mobility modelling and trajectory prediction [344], given the sequential properties of mobility data.
Other deep learning approaches such as contrastive learning [376] have also been exploited in mobility data settings, along with investigations of the impact and benefits of representation learning [119].

### Challenges in Mobility Data Analytics

This section highlights open problems related to mobility data analytics that need consideration from the community towards realizing the vision of mobility data science.

#### Challenge 7. ML for Mobility Data

State-of-the-art deep learning (DL) models, such as Transformers [315], were not initially developed with mobility data in mind; they were derived from the NLP and computer vision domains. The community needs to provide best practices for doing ML (and DL) on mobility data. A major hurdle, and a research opportunity as well, is that existing ML and analytics tools, e.g., TensorFlow or PyTorch, do not support location and mobility as base data types to reason about. So even basic analyses, such as clustering, classification, and similarity, need to be extended when mobility data is involved. These tasks, as well as higher-level analyses, cannot be totally independent. Instead, common basic building blocks could have an impact on all or some of them. For example, exploring the effectiveness of embeddings for mobility data analysis is a basic block that could impact different ML-based analysis tasks. This raises the challenge of building analysis primitives and common building blocks for applications that could shape a framework of ML-based mobility data analysis. Another major hurdle is robustness in data-driven mobility models. It is widely known that data-driven models (as in the case of ML or DL) are only as good as the data they are trained on. However, given changes in mobility behavior caused by events such as the COVID-19 pandemic and the associated lockdowns, as well as environmental events and disasters, traditional ML-based, and even recent DL-based, methods are no longer robust.
The models' performance deteriorates on unseen events, especially as new behaviors emerge and then persist. Recent efforts include the incorporation of 'contextual-awareness' and 'memory' in an enhanced event-aware spatio-temporal network [330] for predicting mobility across multiple modes of transportation, including taxi, cycling, and subway, during unprecedented events like COVID lockdowns or snowstorms, from the time an event emerges and up to 30 days afterwards. However, more work needs to be done on modeling and understanding mobility behavior in ways that are robust to changes due to societal events.

#### Challenge 8. Progressing from Next Location Prediction to Movement Behavior Understanding

Due to the wide availability of aggregated check-in and foot-traffic data, many researchers focus on the problem of location prediction [11; 263; 343]. Leveraging predictions such as "User X will visit Coffee Shop A next" or "\(32\pm 4\) users will visit Coffee Shop A in the next hour" has some direct applications. It could be useful for providing information about parking ("parking at location X appears to be a problem today, so consider..."), for battery charging opportunities, or for providing information about collective transportation status ("Metro station X that you are expected to visit is closed for repairs, so instead..."). One could provide a new transportation schedule and departure time in response to problems at an anticipated future location of a user, just like airlines at times update your itinerary in case of issues. Beyond predicting locations, if we understand the underlying behavior, at the individual, group, or population scale, that leads to these predictions, we could understand _why_ one coffee shop chain has increasing visitor rates (e.g., due to a movement towards organic coffee sold by the coffee shop).
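For contrast, the purely predictive baseline that this challenge argues we should move beyond can be as simple as a first-order Markov model over visited places. The sketch below uses hypothetical place names and a hypothetical visit history; it predicts the next place but says nothing about why.

```python
from collections import Counter, defaultdict

# Sketch: first-order Markov next-location prediction.
# Place names and the visit history are hypothetical.

def train(visits):
    """Count transitions between consecutive visited places."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(visits, visits[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Predict the most frequent successor of the current place."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

history = ["home", "coffee_shop", "office", "coffee_shop", "office",
           "gym", "home", "coffee_shop", "office", "home"]
model = train(history)
print(predict_next(model, "coffee_shop"))  # most frequent successor: office
```

The model captures _where_ the user goes next but not _why_, which is precisely the gap between predictive and prescriptive analytics described here.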
Only by inferring such behaviors from the data can we take corresponding actions, not only to predict locations, but also to prescribe actions (e.g., offering more organic coffee) that improve visitor rates. This understanding of (human) behavior will broadly affect applications using mobility data. Traditional spatiotemporal data science allows for predictive analytics to predict the future. In contrast, mobility data science enables prescriptive analytics by understanding the underlying human behavior to devise actions and policies that aim to achieve desirable targets. An open problem for understanding mobility behavior data is the lack of labels or human annotation to provide insights on the actual observations. Several approaches have been proposed, including cross-domain data fusion as well as developing interpretability mechanisms for machine learning or deep learning models. When geographical information is fused together with contextual features and social behaviors, not only can location prediction be improved, but insights can also be provided about the underlying visitor behavior [343], even if no human-labelled data are provided about the mobility behaviors. Therefore, the explainability of the AI and machine learning models that have underpinned many such predictive behavior models remains an open challenge, especially since deep learning models are black boxes. One such approach for deep-learning-based models is disentangled representation learning, and a recent work [367] shows that the disentanglement of latent spatiotemporal factors can assist the explainability of how the underlying latent factors learned by deep learning models are correlated. It can also be used for dimensionality reduction, and can assist in few-shot learning cases.

#### Challenge 9. Visual Analytics

Visualization and exploratory analysis of mobility data has long been a hot topic in visual analytics [23].
More recently, the trend has turned to combining visualization with modeling and simulation to support decision making [169; 378]. This kind of research is by necessity application-oriented, while much less is done on developing more general ideas and approaches. One general research problem that has only been slightly touched on in visual analytics, but not systematically addressed, is human involvement in real-time analysis of big mobility data. Is it possible to define realistic scenarios for involving human intelligence in big data analytics, taking into account the cognitive limitations of human analysts with regard to the amount of information that can be perceived, the speed of processing, and the time required for analytical reasoning and contributing to the analysis process? Also, how can computational methods of analysis, such as ML, be combined with human expert knowledge and reasoning? Currently, the involvement of human intelligence is limited to thoughtful data preparation, feature selection, parameter setting, and so on. It would be great to find ways to make more direct and effective use of human-possessed concepts and, particularly, knowledge of causal relationships. Hence, a grand research challenge for visual mobility analytics is to develop approaches to understanding and modeling mobility behaviors from low-level movement data, such as trajectories of moving entities. A follow-on research problem is how to analyze behaviors after they have been extracted from elementary movement data and represented by appropriate data structures. A conceptual framework should be developed to enable defining the types of conceivable patterns of movement behavior. This will provide orientation for developing visualization techniques facilitating visual discovery of behavioral patterns, as well as algorithmic methods for detection of specified types of patterns.
These techniques and methods should be incorporated in systems and workflows for analyzing the contexts in which various patterns take place and for developing models that describe and predict mobility behaviors depending on the context.

## 5. Mobility Data Management Infrastructure

Classical data management systems were designed for generic data types, where spatial and temporal data can be supported as additional types. Yet the core functionality of the data management engine does not acknowledge the spatial and temporal properties of mobility data. For example, mobility data calls for storing and querying locations of objects that evolve over time. The evolution can be in the location, the extent, and/or the properties of the object, and can happen in discrete steps, e.g., check-ins, or in a continuous form. It is thus desired that the data management platform be able to represent the history, the current location, and possibly the near future of the moving object. Another example is classical index structures, which are built with the assumption that the read workload is significantly higher than the write workload, and hence that the index structure does not change often. Meanwhile, mobility data exhibits a different workload where the write workload (e.g., object location updates) is significantly higher than the read workload, which makes classical index structures simply not applicable to mobility data. A third example is that simple queries over mobility data, like a nearest neighbor query, can be supported by classical data management systems only by finding the distance between the user location and all other objects, sorting all objects based on that distance, and getting the closest one. Such a cumbersome approach is mainly due to the lack of a specialized nearest-neighbor operator.
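The cumbersome plan just described (compute the distance to every object, sort, take the first) can be sketched as follows, using hypothetical coordinates and the haversine great-circle distance:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# The "cumbersome" plan: distance to every object, full sort, take the first.
# A specialized nearest-neighbor operator would instead prune candidates
# via a spatial index rather than scanning and sorting everything.
def nearest(user, objects):
    return sorted(objects, key=lambda o: haversine_km(user, o[1]))[0]

user = (40.748, -73.986)  # hypothetical user location
restaurants = [("A", (40.741, -73.989)), ("B", (40.758, -73.979)),
               ("C", (40.730, -74.000))]
print(nearest(user, restaurants)[0])  # "A", the closest of the three
```

On millions of objects this O(n log n) scan-and-sort per query is exactly the overhead a native nearest-neighbor operator with index support would avoid.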
If such an operator existed, it could seamlessly integrate with the query executor and optimizer of a data management engine to efficiently support a very important query in most mobility data applications. A final example is that classical methods for scaling up data management in distributed environments rely on data distribution, mostly based on the data keys. This does not work well for scaling up mobility data, as it is always desired to distribute mobility data in a way that groups spatially and temporally nearby objects together in the same cluster or computing node. This section discusses current efforts and challenges of mobility data management.

### Efforts in Mobility Data Management

There has already been extensive research in all layers of the mobility data management infrastructure. On the data modeling side, early models based on the constraint database model aimed to support simple moving objects (i.e., points) (Kang et al., 2017). More complex data types (e.g., moving regions) were supported by later models based on abstract data types (Kang et al., 2017), which are still being used in recent systems (Kang et al., 2018; Wang et al., 2019). More recent efforts were introduced to capture the semantics of trajectories of moving objects. Other models were also proposed to capture specialized modes of movement, including indoor environments (Kang et al., 2019), network-constrained movement (Kang et al., 2019), fuzzy trajectories (Wang et al., 2019), and periodic moving objects (Kang et al., 2020). In terms of indexing, tens of index structures have been proposed to support efficient indexing, storage, and retrieval of spatio-temporal data as either historical data, current locations, or continuously updated locations (Kang et al., 2017; Wang et al., 2019; Wang et al., 2019).
This forms the infrastructure support for various spatio-temporal query processing techniques for various query operators over moving objects, including spatio-temporal range queries (Wang et al., 2019), spatio-temporal nearest-neighbor queries (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), reverse nearest neighbor queries (Wang et al., 2019), skyline queries (Wang et al., 2019), and spatio-temporal joins (Kang et al., 2019; Wang et al., 2019). In terms of academic full-fledged systems, the SECONDO system was introduced in the early 2000s as a comprehensive testbed for distributed moving object databases covering all aspects of data modeling, indexing, and querying (Kang et al., 2019). More recently, MobilityDB, implemented on PostGIS, was introduced as a scalable system with wider functionality for moving object databases (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). In terms of big data systems, the ST-Hadoop (Kang et al., 2019), SUMMIT (Kang et al., 2019), and HadoopTrajectory (Kang et al., 2019) systems extend the Hadoop system to support spatio-temporal data and trajectories. The TrajSpark (Tranj, 2019), Dita (Dita, 2020), and TrajMesa (Tranj, 2019) systems extend the Spark system to support various index structures and query operations over trajectory data. SharkDB (Shark et al., 2016) extends in-memory column-oriented storage engines to support trajectories. In the open-source community and industry, PostGIS (Sakir et al., 2016) supports very basic trajectory functions, Oracle Spatial supports streaming point data to capture real-time mobility (Krishnan et al., 2017), while Microsoft Azure (Mikkel et al., 2016) supports storing trajectory data in Azure tables and utilizing Azure Redis for indexing.
### Challenges in Mobility Data Management Infrastructure

Though there is already a lot of work on various components of the mobility data management infrastructure, there is an apparent lack of integrated systems that offer comprehensive functionality to end users, encapsulated in full-fledged systems that support mobility data science. Hence, the challenges in this section mainly focus on the system building aspect, as a data management infrastructure is immensely needed towards the vision of mobility data science.

#### Challenge 10. Building Systems with Mobility Data in Mind

Location data has almost always been supported in data systems as an afterthought. Many systems, e.g., Postgres, Storm, Spark, and Hadoop, were not originally designed with location data support in mind. What typically happens is that spatial data types get augmented into tuple-oriented systems to support the location data type. For example, a restaurant tuple that describes various attributes of a restaurant is augmented with the latitude and longitude of the restaurant's location to support location services. Spatial indexes are provided to speed up access to these attributes, and some accompanying spatial operators are provided to operate on the location attributes to provide location services, e.g., range or k-nearest-neighbor searches. While this approach works to some extent, systems built this way end up with subpar performance for spatial data, and hence for mobility data. Given the myriad of applications that rely on mobility data, it is important that systems are extended with native support for location and mobility data. Mobility data types and operations should be integrated into the core of these systems rather than being treated as an afterthought.
This applies to all kinds of systems, from database management systems, which need to be spatially and temporally aware to support mobility data, to scalable big data and NoSQL systems, where injecting spatial and temporal awareness into their core functionality will let mobility data science inherit their scalability.

#### Challenge 11. Location Data as a First-Class Citizen

Having locations at the core of mobility data calls for treating location data as a first-class citizen in a location data system that can at the same time be extended to support other data types (Sakir et al., 2016). Such location data systems can be presented as Location+X systems, where the data types "X" can be keywords (e.g., to support spatial keywords and tweets), graphs (e.g., to support road network data), relational data (e.g., to support descriptions of spatial data objects), click streams (e.g., to support check-in data), document data (e.g., to support points of interest and documents that describe them), annotated trajectories (e.g., location + time + textual annotations), among others. In many location services, more than one data type X may need to be supported, e.g., a graph data type combined with a document or keyword data type, which calls for a multi-model-like data system. This gives rise to an eco-system where location is at the core, with some form of an extensible multi-model data system that supports the multitude of data types "X". However, current multi-model data system technology is lacking in several aspects. First, such systems do not support data streaming, which is a cornerstone of mobility data due to the online streamed locations of moving objects. Second, we do not want to fall into the trap of adopting existing multi-model technologies that may compromise location being a first-class citizen. Nevertheless, the need for supporting multiple models in one seamlessly integrated Location+X system remains a necessity.
In addition to supporting location data via a native Location+X engine, an eco-system for mobility data would also include many important utilities to facilitate a broad spectrum of location service applications. On the input data side, to help navigate the vast amounts of available location datasets and discover the right datasets for a given task, a location dataset lake infrastructure and location dataset discovery, cleaning, and integration facilities are needed. On the presentation side, a comprehensive visualization suite is envisioned to support visualizations for combinations of spatial and temporal data analytics on top of location data.

#### Challenge 12. Streaming, Batch, and Hybrid Workloads

Motivated by application needs, mobility data management needs to support both batch and real-time data through all system layers, from digesting the data to analyzing and visualizing it. For example, a common requirement is to visualize the positions of a fleet of vehicles in real time, which only requires access to the most recent positions of the vehicles. Yet, at the same time, there is a need to perform batch analytics on the full trajectories of these vehicles (e.g., to assess whether the trajectories exhibit some unexpected behavior). Generally speaking, the need to have both real-time and historical data led to the development of the data warehouse domain, where operational databases cover real-time Online Transaction Processing (OLTP) while data warehouses cover historical Online Analytical Processing (OLAP). Since having two different systems for the two kinds of workloads is very costly, a new approach referred to as Hybrid Transactional and Analytical Processing (HTAP) has recently been proposed. However, mobility data exhibits quite different workloads from other data, in which streaming is dominant, with objects continuously streaming their new locations.
Meanwhile, historical data is of no less importance and is continuously appended. Efforts need to be spent to adapt the concepts behind HTAP systems to support the nature of mobility data. ## 6. Mobility Data Privacy As we discussed in Challenge 1, mobility data privacy is a core problem in the mobility data science pipeline. Studies have shown that location data can reveal sensitive personal information such as home and workplace, and religious and sexual inclinations (Zhou et al., 2017). As localization technology advances and extremely fine-grained location tracking is enabled, it may even reveal products of interest in the stores we have visited, doctors we saw at a hospital, book shelves of interest in a library, artifacts observed in a museum, and generally anything that might publicize our preferences, beliefs, and habits. A recent survey has shown that 78% of 180 participating smartphone users believe that apps accessing their location pose privacy threats (Zhou et al., 2018). While there are many privacy-preserving data collection and data analysis techniques developed for personal data, mobility data introduces unique challenges due to 1) spatiotemporal correlations in the mobility data, which often result in increased privacy cost due to privacy composition for correlated data or downgraded utility for downstream applications, 2) complex location semantics (e.g., corresponding POIs of locations) and mobility behaviors (e.g., regular vs. one-time visit of a location), which existing privacy definitions may not be able to capture, and 3) diverse and emerging application scenarios such as contact tracing using mobility data, for which existing privacy algorithms designed for aggregate data analytics are not suitable. In this section, we briefly review existing privacy notions and techniques developed for location and mobility data and discuss several open challenges. 
### Efforts in Mobility Data Privacy We categorize existing techniques in mobility data privacy into two main settings corresponding to our data pipeline: 1) the local setting (data collection stage), and 2) the central setting (data analysis stage). In the local setting, the mobility service provider that collects mobility data is assumed to be untrusted; hence each mobile user or entity can apply privacy-preserving mechanisms before the data is collected by the service provider. In the central or global setting, the mobility service provider is assumed to be trusted and collects the raw mobility data. The provider can apply privacy-preserving mechanisms for statistical analysis, and share aggregated data, machine learning models trained from the data, or synthetic data mimicking the original data with untrusted third parties. **Local Setting.** In recent years, local differential privacy (LDP) [83, 130], the local variant of differential privacy, has become the de facto standard for preserving privacy at the data collection stage. Each user can perturb her raw data using an LDP mechanism before uploading it to an untrusted server. Most existing mechanisms are designed to ensure utility for aggregate queries or analytics (e.g., frequency or density estimation) and require the aggregation of the perturbed values from a large group of users, while an individual perturbed value may not provide much utility. Several works applied existing LDP schemes to location data, but the utility is poor [162, 368]. Other works relaxed LDP to personalized LDP [69]. Recent works developed improved LDP mechanisms for location data with better utility [322]. In addition to supporting aggregate data analytics, location-based services (LBS), including range queries, spatial crowdsourcing, and the emerging contact tracing for pandemic control, require precision of the perturbed locations themselves. 
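To make the local setting concrete, the following sketch illustrates standard k-ary randomized response (a textbook LDP mechanism, not one of the cited schemes): each user perturbs her grid cell locally, and the server recovers unbiased frequency estimates only by aggregating many reports.

```python
# k-ary randomized response (k-RR) for LDP reporting of a location grid cell.
import math
import random

def krr_perturb(cell, cells, eps, rng):
    """Report the true cell with probability e^eps / (e^eps + k - 1),
    otherwise a uniformly random *other* cell. Satisfies eps-LDP."""
    k = len(cells)
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p_true:
        return cell
    return rng.choice([c for c in cells if c != cell])

def estimate_frequencies(reports, cells, eps):
    """Server-side unbiased frequency estimate from perturbed reports."""
    k, n = len(cells), len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)   # prob. of keeping the true cell
    q = 1.0 / (math.exp(eps) + k - 1)             # prob. of reporting a given other cell
    counts = {c: 0 for c in cells}
    for r in reports:
        counts[r] += 1
    return {c: (counts[c] / n - q) / (p - q) for c in cells}

rng = random.Random(0)
cells = ["A", "B", "C", "D"]                      # coarse grid over a city
true_data = ["A"] * 700 + ["B"] * 300             # 70% of users in A, 30% in B
reports = [krr_perturb(c, cells, eps=2.0, rng=rng) for c in true_data]
est = estimate_frequencies(reports, cells, eps=2.0)
```

Note that any single report reveals little, while the aggregate estimate `est` recovers the population distribution up to sampling noise; this is exactly the utility profile the text describes.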
Geo-Indistinguishability (GeoInd) [22] relaxes LDP for location data: it requires locations to be indistinguishable only within a radius, with the indistinguishability scaled by their distances, providing a better privacy-utility tradeoff for LBS. Later works extended GeoInd to account for temporal correlations between consecutive locations of mobile users [338] and for the protection of customizable spatiotemporal activities instead of raw locations or trajectories [57]. Other works applied GeoInd mechanisms and variants for privacy-enhanced spatial crowdsourcing and contact tracing [85, 301]. Besides statistical privacy techniques, Private Information Retrieval (PIR) and secure multiparty computation (MPC) techniques have also been developed to allow LBS queries such as range queries and contact tracing without revealing individual locations [13, 76, 123, 251], but they are generally more computationally expensive and need to be designed for each different query. **Global Setting.** Many works have applied differential privacy (DP) to computing and publishing aggregate mobility data. Compared to DP algorithms for tabular data, they typically exploit the hierarchical structure of locations and sequential patterns of trajectories to improve utility [5, 206, 246, 276]. Some works also utilized the DP aggregates for task assignment in spatial crowdsourcing [300]. In practice, mobility data providers have started sharing aggregated mobility datasets with DP, especially in response to the pandemic, such as Meta's population density maps and Movement Range maps, Google's COVID-19 Community Mobility Reports, and SafeGraph's Patterns [35]. Other works have applied DP to training machine learning models using mobility data, for example, for location prediction [11]. Another line of work attempts to generate synthetic trajectories or mobility data based on raw trajectories with formal DP guarantees [139, 294, 325]. 
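The canonical GeoInd mechanism, planar Laplace noise, admits a compact implementation: the noise direction is uniform and the radius follows a Gamma(2, 1/ε) distribution (density proportional to r·e^{-εr}). The sketch below is a minimal illustration assuming planar coordinates (e.g., meters), not the cited implementation:

```python
# Planar Laplace noise for geo-indistinguishability.
import math
import random

def planar_laplace(x, y, eps, rng):
    """Perturb (x, y) with 2D Laplacian noise: uniform angle,
    Gamma(shape=2, scale=1/eps) radius. eps is privacy per unit distance."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.gammavariate(2.0, 1.0 / eps)   # radial marginal of the planar Laplacian
    return x + r * math.cos(theta), y + r * math.sin(theta)

rng = random.Random(42)
eps = 0.01                                 # e.g., per-meter privacy level
samples = [planar_laplace(0.0, 0.0, eps, rng) for _ in range(20000)]
mean_radius = sum(math.hypot(px, py) for px, py in samples) / len(samples)
# The expected displacement is 2 / eps (mean of Gamma(2, 1/eps)),
# which makes the privacy-utility tradeoff of the text directly visible.
```

Smaller ε gives stronger indistinguishability within a larger radius but larger expected displacement, which is exactly the tradeoff that the LBS-oriented works above try to optimize.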
From the privacy attack side, recent works demonstrated the possibility of membership inference attacks on aggregate location data and linking attacks, as well as the defense power of DP against some of these attacks, reinforcing the need for ensuring rigorous privacy even for seemingly anonymous aggregate mobility data and machine learning models trained from mobility data [154, 244]. ### Challenges in Mobility Data Privacy This section highlights open problems related to mobility data privacy that need consideration from the community towards realizing the vision of mobility data science. _Challenge 13. Threat Models and Privacy Definitions._ The first challenge for mobility data privacy is the need to understand the threat models and adopt or define proper criteria by which to enforce privacy. We first need to define what needs to be protected (i.e., the sensitive information). This may vary for different mobile users and applications. It may be the exact location coordinates of a user at a given time (most existing efforts focus on this). It may also be the association of a user with a sensitive place, the co-location of two users (while it is acceptable for the users to reveal their exact location coordinates), or the spatiotemporal activities of a user (e.g., a stay at a place, or a trajectory). When defining privacy models and designing subsequent privacy mechanisms, there will (almost always) be attacks based on side-channel information exploitation. While privacy notions like DP typically assume the worst case, which also means sacrificed utility, relaxed versions may be needed given specific threat models to enhance the privacy-utility tradeoff. Besides developing rigorous privacy-enhancing mechanisms, it is equally important to understand the privacy risks and the empirical defense power of privacy-enhancing technologies (PETs). 
While there have been some works on privacy attacks on aggregate mobility data, more work is needed to understand what sensitive information may be revealed and reconstructed from models based on mobility data, e.g., whether membership inference attacks or feature reconstruction attacks [110, 286] can be carried out, and potentially to build benchmark attacks which can be used to audit the privacy risk of mobility data science systems and privacy mechanisms. _Challenge 14. Privacy and Utility Tradeoff and Other Factors._ When designing privacy mechanisms for mobility data collection and analysis, it is important to consider the utility of the privacy-protected data for the downstream applications. For LBS (typical in the local setting), the utility needs to be measured by the precision or accuracy of range queries for POI search, or of contact detection for contact tracing (instead of how far the perturbed location is from the original location, which is what most algorithms following GeoInd focus on). Hybrid methods that combine DP and cryptographic techniques may be needed, especially for critical applications like contact tracing and public health [76]. For aggregate data analytics and machine learning applications using mobility data (in both the local and global settings), the utility needs to be measured by the accuracy of the statistics (e.g., frequency or density estimation, on which most existing works focus), the trained model, or the fidelity of the synthetic data. As a result, the algorithms need to be designed to optimize the corresponding utility, which remains an open challenge. For example, existing methods for DP trajectory synthesis are mainly based on statistical models or low-order Markov models and perform well on some utility metrics. 
While there are more powerful generative adversarial network (GAN) based models for generating more realistic synthetic trajectories [115, 118], ensuring formal DP for these models would result in deteriorated utility due to the complexity of the models. Designing methods for an optimal privacy-utility tradeoff remains an open challenge. In addition to the privacy and utility tradeoff, privacy-enhancing technology may exacerbate bias in the data or learning algorithms. Mobility data may have inherent bias, as we discussed in Challenge 2. Data analysis algorithms may also have unfair performance for groups that are underrepresented in training data. It has been demonstrated that learning with DP can exacerbate such unfairness, i.e., underrepresented groups suffer from worse privacy/utility tradeoffs [32]. Research is needed to understand this impact on mobility data and to design privacy algorithms that optimize the privacy-utility tradeoff while ensuring fairness. _Challenge 15. Explainability and Societal Education._ Another important challenge of mobility data privacy is to improve the explainability of privacy definitions and mechanisms and communicate them to the stakeholders, including mobile users (data contributors), mobility service providers, and data analysts. This is a general challenge for privacy-enhancing technology, but more so for mobility data given the complex semantics of location information and the diverse applications we mentioned. DP-compliant algorithms and location privacy models (such as GeoInd), as described earlier, use privacy parameters to control the tradeoff between the privacy guarantee and the utility of the private outputs. However, there is a significant gap between the theory and practice of DP: we lack principles and guidelines for choosing privacy parameters when collecting or processing mobility data using DP techniques in the real world. 
While technology companies have employed DP in releasing mobility datasets, as we discussed earlier, the choice of the privacy parameter and the associated noise and uncertainty are often not precisely specified or uniform across companies. This makes it difficult for the downstream applications to quantify the uncertainty of the analysis results. The parameter \(\epsilon\) of DP is mathematically defined but not well aligned with the stakeholders' interests. Even for the same \(\epsilon\), the privacy guarantees can differ based on the different variants of DP and the algorithms at hand. In addition, \(\epsilon\) is not always linked to a specific privacy risk for the users (such as "the probability that an attacker can correctly infer my data") or a precise utility level for data analysts (such as "the accuracy of the DP-ML model"). To promote the adoption of mobility data privacy technology such as that based on DP, we should establish principles, design guidelines, and provide tools for explaining DP's protection and limitations from the stakeholders' practical interests. For example, we can help data contributors understand the privacy risk (such as membership inference attacks or reconstruction attacks) under different privacy parameters given a concrete DP algorithm; we can also design efficient methods to visualize how data analysts' utility metrics (such as MSE or model accuracy) change along with different privacy parameters for specific mobility applications. ## 7. Mobility Data Science Applications Mobility data science used to be limited to the domain of transportation, but recent technological inventions have created an abundance of mobility data, resulting in applications in many other domains of interest to society. Such applications leverage mobility data to understand, explain, and predict where moving entities such as humans, animals, or infectious diseases go, why they go where they go, and where they will go next. 
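As a small illustration of the kind of explainability tooling envisioned above, the snippet below translates \(\epsilon\) into a standard worst-case bound on a membership attacker's posterior success probability. This bound is a direct consequence of the DP definition, not a tool proposed in this paper, and it is an upper bound: no claim is made that an attack achieving it exists.

```python
# Translating the DP parameter eps into a worst-case attacker success bound.
import math

def max_inference_success(eps, prior=0.5):
    """Upper bound on the probability that a Bayes-optimal attacker correctly
    decides whether a user's record was used, after seeing the output of an
    eps-DP mechanism. The mechanism can shift the attacker's prior odds by at
    most a factor of e^eps."""
    odds = (prior / (1.0 - prior)) * math.exp(eps)
    return odds / (1.0 + odds)

for eps in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"eps={eps:>4}: attacker success <= {max_inference_success(eps):.3f}")
```

Tables like this (or their utility-side analogues plotting MSE against \(\epsilon\)) are one concrete way to communicate what a released parameter means to data contributors.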
This section outlines broad applications of mobility data science to illustrate the recent landscape of the field. ### Traffic Traffic is a problem of global scale, as recognized by transportation science over a decade ago. Drivers in the United States spend 6.9 billion driving hours stuck in traffic and waste more than 11 billion liters of fuel per year according to INRIX (Krishnan et al., 2018). Measured per capita, people in Russia and Thailand spend even more time in traffic, while Brazil, South Africa, the UK, and Germany are only slightly behind the United States. Leveraging mobility data science and understanding the underlying behavior of human participants, concomitantly with different transportation modes, can enable more effective solutions to multiple problems at the heart of improving traffic management. Two main lines of research focus on: (1) traffic monitoring at an aggregate level, e.g., to help city administrations, and (2) the services that road users receive. Existing work towards traffic monitoring includes monitoring congestion (Krishnan et al., 2018), assessing the safety of roads and intersections (Krishnan et al., 2018), traffic prediction (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018), evacuation routing (Krishnan et al., 2018; Krishnan et al., 2018), and optimizing public transportation schedules (Krishnan et al., 2018). Efforts on the services provided to road users include routing queries that balance traffic across roads [71; 92], helping drivers find the nearest facilities [163; 275], personalized routing [178], eco-routing for minimizing greenhouse emissions [19; 183], and enabling multi-modal trip planning [304]. But there are many open opportunities and challenges in using mobility data to improve traffic conditions. One example is devising accurate models for dynamic scheduling of public transportation. 
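As a toy illustration of the traffic-balancing routing queries mentioned above (not any specific cited system), the following sketch runs Dijkstra over edge costs of free-flow travel time scaled by a congestion factor derived from mobility data:

```python
# Congestion-aware shortest path: cost = free-flow time * congestion factor.
import heapq

def fastest_route(graph, source, target):
    """graph: {node: [(neighbor, free_flow_minutes, congestion_factor), ...]}.
    Returns (path, total_minutes) under current congestion."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, t, c in graph.get(u, []):
            nd = d + t * c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Toy network: the direct road A->B is congested (factor 3.0),
# so the detour via C is faster despite its longer free-flow time.
graph = {
    "A": [("B", 10, 3.0), ("C", 8, 1.0)],
    "C": [("B", 9, 1.0)],
}
path, minutes = fastest_route(graph, "A", "B")
print(path, minutes)  # ['A', 'C', 'B'] 17.0
```

Real systems refine this with time-dependent and predicted congestion factors, and eco-routing swaps travel time for an emissions cost model, but the skeleton is the same.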
Another example is the context-aware optimization of traffic signals, e.g., incorporating the impact of an additional flux of pedestrians in bus/train stations to minimize the stop-and-go impacts for vehicles. A further challenge of using mobility data science in the transportation domain is the monitoring and reduction of emissions. Being able to quantify emissions (e.g., from transportation) is essential to accountability and to reducing emissions. Using data on emissions collected from in-situ sensors, but also sensed remotely through earth observation (satellite) data, will allow us to better understand the effects of e-mobility, better collective transportation, and infrastructure improvements. ### Urban Areas In 2018, 55% of the world's population (4.2 billion people) resided in urban areas, and this proportion is projected to increase to 68% by 2050 [312]. Urban areas are a focal point for mobility applications as they introduce a variety of mobility modalities such as electric vehicles [157; 316] and bicycles and scooters with respective sharing programs [60; 182]. By understanding how, where, and why people move in cities, outer suburban, and regional areas, the demand for infrastructure and energy can be better understood [372; 31]. Improving this understanding helps reduce urban inequalities in cities [223], such as unequal access to high-quality food [319] and healthcare [131]. Mobility data also helps improve urban safety by improving crime prediction [111] and helping to recommend safe routes [112; 274]. A specific way mobility data science supports urban areas is through data-driven map construction [3; 7] and the updating of existing maps to account for blocked or new road segments [216; 65], which is paramount in autonomous driving applications [196; 357]. 
The real-time monitoring of urban mobility can result in _situational awareness_, a term initially coined in defense applications, involving _perception_ of the environmental states using the surrounding data, _comprehension_ of the ingested data to understand emerging situations, and _projection_ of future states and/or events that require predictive analytics. Mobility data provides critical components and insights for situational awareness in cities. When achieved, this is applicable not only to enabling robust critical infrastructures in cities, but also to protecting them from harm, e.g., forest fires, earthquakes, and terrorist attacks. Many researchers use mobility data as input to enable situational awareness in cities as well as in airports [281; 282; 283]. ### Health Informatics The spread of infectious diseases is a highly complex spatiotemporal process that is strongly tied to human mobility [102; 143]. Many recent works have used human mobility data for data-driven epidemic forecasting, as surveyed in [260]. A specific example of leveraging mobility data for public health is contact tracing, which refers to the process of tracking persons who may have come into spatial contact with an infected person, and subsequently collecting further information about these contacts [207]. The feature-rich interaction, processing, and localization/communication modalities of smartphone devices have brought these to battle on the technological forefront and have curbed the fast spread of pandemics like COVID-19. To date, the community has proposed a wide range of contact tracing approaches, including opportunistic [250] and participatory [86] approaches, as well as privacy-sensitive [356], de-centralized [309], proximity-based (e.g., BLE, sound) [252], and location-based (e.g., Wi-Fi, GPS) [86] approaches. 
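At its core, a location-based contact tracing approach reduces to a spatiotemporal proximity join between trajectories. The sketch below is a naive illustration of that join (not any cited protocol; the thresholds are arbitrary), flagging co-located fixes of two users:

```python
# Naive spatiotemporal proximity join for location-based contact detection.
import math

def detect_contacts(traj_a, traj_b, dist_thresh, time_thresh):
    """Each trajectory is a list of (ts, x, y) fixes. A 'contact' is a pair of
    fixes within time_thresh seconds and dist_thresh distance units of each
    other. Real systems replace this O(n*m) scan with spatial indexing and
    add privacy protection (cf. Section 6)."""
    contacts = []
    for ta, xa, ya in traj_a:
        for tb, xb, yb in traj_b:
            close_in_time = abs(ta - tb) <= time_thresh
            close_in_space = math.hypot(xa - xb, ya - yb) <= dist_thresh
            if close_in_time and close_in_space:
                contacts.append((ta, tb))
    return contacts

traj_a = [(0, 0.0, 0.0), (60, 5.0, 5.0)]
traj_b = [(55, 6.0, 5.0), (300, 100.0, 100.0)]
print(detect_contacts(traj_a, traj_b, dist_thresh=2.0, time_thresh=30))  # [(60, 55)]
```

The proximity-based approaches above (e.g., BLE) sidestep this join entirely by detecting co-location directly on-device, which is one reason they dominate privacy-sensitive deployments.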
However, a wide range of challenges remain unanswered, including methodologies to improve penetration and adoption rates, alleviate privacy or expectation skepticism (Krishnan et al., 2017), ensure ubiquitous availability on low-end terminals, overcome technological/psychological adoption barriers (Krishnan et al., 2017), achieve cross-country interoperability with standard formats beyond recommendations, verify the scalability/reliability and accuracy of the engaged spatial technologies, and draw lessons about effectiveness from real large-scale deployments. Another specific health application of mobility data is elderly health monitoring. GPS-enabled smart-watch technology can be used to monitor the movement of elderly users (Krishnan et al., 2017). In particular, if the monitored user is showing early signs of dementia, her/his trajectories could show an abrupt change from her/his movement history (Krishnan et al., 2017). For instance, a user who normally walks in a park and then goes to a restaurant is found to only stay in the park for a substantial amount of time. Indoor sensors installed in the room can also be used to track whether an elderly person or a patient falls from the bed. Trajectory outlier analysis methods, together with gerontology knowledge, can be very useful for this kind of application. ### Indoor Environments Indoor mobility data management has been described as a new frontier in data management (Han et al., 2017). But in addition to data management, large-scale indoor localization data also raises challenges in data collection, data analysis, and data privacy. Indoor data collection is an open research problem due to the non-existence of an indoor equivalent of GPS: a system that can provide the user location in any building worldwide. This is particularly important in applications related to emergency management and infectious disease contact tracing. 
Systems have been developed over the years to address this problem based on different data sources, including WiFi signal strength and time of arrival (Han et al., 2017; Wang et al., 2018), cellular signal (Beng et al., 2018; Wang et al., 2018), ultra-wideband (Han et al., 2018), ultrasonic (Han et al., 2018), magnetic tracking (Krishnan et al., 2018), and inertial sensors (Han et al., 2018), among others. These novel data sources enable new applications in indoor navigation, contact tracing, indoor analytics, and evacuation management. Indoor data analytics improves the understanding of indoor behavior, which has multiple benefits and applications, including crowd management (Beng et al., 2018), retail and POI recommendation systems (Han et al., 2017; Wang et al., 2018), and optimizing energy use and improving sustainability in the long term (Krishnan et al., 2018). For example, by utilizing WiFi logs, Ren et al. (Ren et al., 2019) find strong correlations between behaviors and user demography (e.g., age, gender, and visitor types), indicating that indoor mobility behavior, in conjunction with online behavior, can be used to predict the underlying demography of the visitors. Occupancy behaviors are also highly linked with building management systems and controls (Beng et al., 2018). By having a more accurate energy use estimation using indoor spatial and mobility data, in addition to historical energy consumption data, the performance of buildings can be better optimized towards achieving more sustainable operations (Beng et al., 2018). The responsible use of mobility behavior analytics, including indoor and outdoor mobility behaviors, strongly points towards an increased capacity for improving the sustainable operation of buildings (Krishnan et al., 2018), enabling net-zero goals to be achieved. 
### Marine Transportation According to UNCTAD, over 80% of the volume of international trade in goods is carried by sea, and the percentage is even higher for most developing countries (Krishnan et al., 2018). Estimates say that global shipping activity emitted 3% of global emissions worldwide in 2022 (Han et al., 2017). These significant numbers, as well as the availability of large-scale ship trajectory data obtained from the automatic identification system (AIS) (Krishnan et al., 2018), motivated many research efforts on mobility data analysis for the maritime domain. The stakeholders who benefit from such analyses include maritime authorities, environment officers, ship owners, port and canal managers, and the transport and logistics sector. One major challenge is to ensure safety at sea, which breaks down into the technical challenges of identifying positional anomalies (Levev and others, 2017), locating dark vessels (vessels that switch off their AIS devices) (Han and others, 2017; Han and others, 2018), and cleaning location and identity spoofing (Lev and others, 2017; Han and others, 2018). Additionally, an essential aspect is the detection of fishing activities to ensure sustainable fishing practices (Lev and others, 2017). Since vessels do not have fixed routes at sea, research has also investigated the density of ship routes (Lev and others, 2017; Lev and others, 2017). Multi-criteria routing using multiple optimization criteria, including estimated time of arrival, fuel consumption, safety, and comfort, has been increasingly recognised as an important path planning problem (Han and others, 2017; Lev and others, 2017). An optimization of ship routes could effectively lead to significant reductions of GHG emissions and contribute to the actions against anthropogenic global warming. The influence of ocean currents, waves, and wind on the course and speed of ships has been known for centuries. 
Used optimally, ocean currents lead to more efficient paths between two given ports. Ship route computation approaches that exploit the potential of wind, wave, and weather models, aiming to minimize fuel consumption, have been addressed by the marine science, maritime engineering, and transportation communities (Han and others, 2017; Lev and others, 2017; Lev and others, 2017). Since green mobility is currently gaining huge attention, CO2-emission-aware ship routing is expected to have an enormous impact on economy, politics, and society, and provides very promising opportunities for the spatial and spatiotemporal database and mobility community. Marine transportation becomes particularly important in the scope of climate change (e.g., the advent of hydrogen/battery/fossil/atom hybrid vessels) as well as digitization for new infrastructure-free localization technologies on board. ### Social Connections Location-based social networks (LBSNs) bridge the gap between the physical world and online social networking services (Lev and others, 2017). LBSN data capture both human mobility (in the form of check-ins to discrete points of interest) and a social network between individual humans. Combining mobility data and social networks, LBSN data finds many applications. A first application found in the literature was modeling and describing human mobility patterns (e.g., (Lev and others, 2017; Lev and others, 2017; Lev and others, 2017)), analyzing these patterns (e.g., (Lev and others, 2017; Lev and others, 2017)), and explaining why individual users choose locations and how social ties affect this choice (e.g., (Lev and others, 2017; Lev and others, 2017)). Another application is that of location recommendation, which leverages check-ins of users and their ratings in the user-location network to recommend new locations to users (Lev and others, 2017; Lev and others, 2017; Lev and others, 2017). 
A closely related application area is that of location prediction (e.g., (Lev and others, 2017; Lev and others, 2017; Lev and others, 2017)), which predicts the future check-ins of users. Another active research field in LBSN analysis is friend recommendation or social link prediction (e.g., (Lev and others, 2017; Lev and others, 2017; Lev and others, 2017)), which suggests new friends to users based on similar interests at similar locations, while also having similar social connections. Other research topics with respect to LBSNs include efficient query processing (e.g., (Han and others, 2017; Lev and others, 2017)), finding user communities (e.g., (Lev and others, 2017; Lev and others, 2017)), and estimating the social influence of users (e.g., (Lev and others, 2017)). This plethora of applications and research shows how mobility data in connection with social network data can be used to understand the social fabric that ties us together. A potential future application is using human mobility data to reinforce this social fabric by recommending social events and meetings to groups of people to help people find new friends, collaborators, sports mates, teachers, mentors, and family members. ## 8. Conclusions This paper presented a vision towards a mobility data science pipeline that addresses the specific challenges of mobility data. A main question that this paper answered is how mobility data science differs from data science. The space and time dimensions in mobility data call for different methods of data acquisition, management, analysis, and privacy preservation, which are not addressed by common data science tools. Accordingly, we surveyed the main problems that are currently being researched, identified major research questions for the coming years, and described applications that lead to broader impacts of mobility data science. 
Co-authored by a diverse group of academics and industry professionals, this paper also represents a community effort to sketch the boundary of mobility data science as an interdisciplinary field and bring together a dedicated research community around the identified research challenges. ## 9. Acknowledgements Walid G. Aref acknowledges the support of the National Science Foundation under Grant Number IIS-1910216.
2304.13425
Learnable Ophthalmology SAM
Segmentation is vital for ophthalmology image analysis. But its various modal images hinder most of the existing segmentation algorithms applications, as they rely on training based on a large number of labels or hold weak generalization ability. Based on Segment Anything (SAM), we propose a simple but effective learnable prompt layer suitable for multiple target segmentation in ophthalmology multi-modal images, named Learnable Ophthalmology Segment Anything (SAM). The learnable prompt layer learns medical prior knowledge from each transformer layer. During training, we only train the prompt layer and task head based on a one-shot mechanism. We demonstrate the effectiveness of our thought based on four medical segmentation tasks based on nine publicly available datasets. Moreover, we only provide a new improvement thought for applying the existing fundamental CV models in the medical field. Our codes are available at \href{https://github.com/Qsingle/LearnablePromptSAM}{website}.
Zhongxi Qiu, Yan Hu, Heng Li, Jiang Liu
2023-04-26T10:14:03Z
http://arxiv.org/abs/2304.13425v1
# Learnable Ophthalmology SAM ###### Abstract Segmentation is vital for ophthalmology image analysis, but the variety of imaging modalities hinders the application of most existing segmentation algorithms, as they rely on training with a large number of labels or hold weak generalization ability. Based on Segment Anything (SAM), we propose a simple but effective learnable prompt layer suitable for multiple-target segmentation in multimodal ophthalmology images, named Learnable Ophthalmology Segment Anything (SAM). The learnable prompt layer learns medical prior knowledge from each transformer layer. During training, we only train the prompt layer and the task head based on a one-shot mechanism. We demonstrate the effectiveness of our approach on four medical segmentation tasks over nine publicly available datasets. Moreover, we provide a new improvement direction for applying existing fundamental CV models in the medical field. Our codes are available at website. ## 1 Introduction Segmentation is vital for ophthalmology diagnosis and treatment. The Department of Ophthalmology holds more than 10 kinds of imaging [1]. The differences between multimodal images bring different segmentation targets, for example, blood vessels from the color fundus and retinal layers from optical coherence tomography (OCT), which hinders the application of a single model in ophthalmology, since most existing segmentation algorithms rely on labels from experts or hold weak generalization ability. Some fundamental CV models, such as Segment Anything (SAM) [2] and DINOv2 [3], were released this month; these are large ViT-based models trained on large visual corpora. Both of them have shown promising segmentation capabilities in various natural scenarios. But they cannot segment blood vessels or lesions from medical images, which are helpful for doctors' diagnoses or treatment plans. As shown in Fig. 
1, DINOv2 cannot provide blood vessels from the retinal color fundus or optical coherence tomography angiography (OCTA). SAM can find several blood vessels in an OCTA image, but it cannot segment vessels or lesions from the color fundus. The possible reason is that the edge differences between vessels or lesions and the retina are indistinct. For retinal Optical Coherence Tomography (OCT) images, SAM does not produce layer information either, as it treats the retina as a whole. Thus, SAM cannot be directly applied to our medical image segmentation tasks. Since these fundamental CV models hold the great potential of foundation models for computer vision, we believe they are also helpful for the medical field [4, 5]. However, their mask segmentation predictions have not been fully explored in the medical field. We analyzed many segmentation results and found that SAM and DINOv2 provide acceptable segmentation results under the condition of distinct edge differences. However, blood vessels or lesions may not be distinctive enough to be recognized. As we know, the feature extraction abilities of the above two models are confirmed by other computer vision tasks [6, 7]. What is the best way to adapt such a foundation model to medical image segmentation in terms of effectiveness and efficiency? Model fine-tuning is often applied to use such foundation models on large-scale benchmarks. Full-tuning the parameters of the entire network, or head-tuning by only optimizing the model's head, cannot provide acceptable results in the medical field. Recently, prompt tuning has achieved considerable results in Natural Language Processing [8] and natural image processing [9]; it is defined as treating the prompts as task-specific continuous vectors and directly optimizing them via gradients during fine-tuning [10]. 
However, the two kinds of prompts used in SAM, sparse (points, boxes, text) and dense (masks), cannot provide accurate segmentation results in the medical field, especially for blood vessels. Therefore, in this paper, we propose a new learnable prompt for SAM, which accurately specifies what to segment in a medical image. The contributions of this paper are as follows: 1. We propose a new learnable prompt layer for SAM, named Learnable Ophthalmology SAM, which accurately segments blood vessels, lesions, or retinal layers in multimodal ophthalmology images after one-shot fine-tuning. 2. The proposed learnable prompt automatically learns its target of interest in different modal images and generalizes across datasets. 3. We demonstrate the effectiveness of the proposed prompt on four segmentation tasks over nine publicly available datasets.

## 2 Learnable Ophthalmology SAM

Inspired by [9, 11], we freeze the main parameters of the backbone model, such as the Vision Transformer (ViT), and insert task-specific learnable parameters during training, which makes the model applicable to downstream tasks without full fine-tuning. In this paper, we propose a simple but effective way to learn the prompt from the features, applied to medical image segmentation. ### Preliminaries For a vision transformer (ViT), an input \(x\) is first processed by patch embedding, which extracts embedding features. Then \(N\) transformer layers encode the extracted features into one feature representation. Finally, a task head, such as a segmentation head, generates the task-specific output based on this representation. We formulate the above process as: \[\hat{y}=H_{t}(T_{N}(T_{N-1}(...T_{1}(P(x))))) \tag{1}\] where \(\hat{y}\) is the task-specific output, \(H_{t}\) is the head for task \(t\), \(T\) is the transformer block, and \(P\) is the patch embedding layer. ### Learnable Prompt Layer As Fig.
2 (a) shows, we freeze the weights of all the transformer layers of the image encoder in SAM. Then a task-specific head, in place of SAM's prompt generator and mask decoder, is adapted to produce results. SAM is trained mainly on natural images. Although several researchers have shown that it has some segmentation ability on medical images [4, 5], it cannot produce usable results for some medical modalities, such as OCT and color fundus. We assume that SAM lacks prior knowledge of these scenarios. In this paper, we propose a learnable prompt layer between each pair of transformer layers to learn this knowledge, which is adopted as the task-specific prompt. During training, we train only the prompt layer and the task-specific head, colored red and marked with fire in Fig. 2 (a); the transformer layers and patch embedding are frozen, colored blue and marked with snowflakes. The structure of our learnable prompt layer is shown in Fig. 2 (b); it is simple but effective. We adopt two \(1\times 1\) convolutions (Conv 1x1), with layer normalization (LayerNorm) and GELU as the nonlinear activation function, as the main part of the layer. Inspired by [12], a \(3\times 3\) depthwise convolution (DW-Conv 3x3) is introduced to capture the local pattern of the features.

Figure 1: Examples of blood vessel and lesion extraction by fundamental CV models and our algorithm.

With \(f_{i}\) as the input features of the \(i\)-th transformer layer, the \(i\)-th prompt layer is formulated as: \[Prompt_{i}=\delta(LN(W_{1}(\delta(LN(DW_{3}(\delta(LN(W_{1}(f_{i}))))))))) \tag{2}\] where \(\delta\) is the GELU, \(W_{1}\) denotes the weights of the \(1\times 1\) convolutions, and \(DW_{3}\) denotes the weights of the depthwise \(3\times 3\) convolution. For the task head, we first upsample the features of the last ViT layer twice using two transpose convolutional layers with a kernel size of \(2\times 2\).
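The prompt layer of Eq. (2) can be sketched in PyTorch as follows. This is a minimal illustration rather than the released implementation; in particular, the channel width and the use of `nn.GroupNorm(1, dim)` as a channel-wise LayerNorm over feature maps are our assumptions:

```python
import torch
import torch.nn as nn

class LearnablePromptLayer(nn.Module):
    """Sketch of Eq. (2): Conv1x1 -> LN -> GELU -> DW-Conv3x3 -> LN -> GELU
    -> Conv1x1 -> LN -> GELU, applied to the features f_i of the i-th
    (frozen) transformer layer."""
    def __init__(self, dim: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=1),              # W1
            nn.GroupNorm(1, dim),                            # LN (assumption)
            nn.GELU(),                                       # delta
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),   # DW3 (depthwise)
            nn.GroupNorm(1, dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=1),              # second 1x1 conv
            nn.GroupNorm(1, dim),
            nn.GELU(),
        )

    def forward(self, f_i: torch.Tensor) -> torch.Tensor:
        return self.block(f_i)  # Prompt_i, same spatial shape as f_i

prompt = LearnablePromptLayer(dim=8)
out = prompt(torch.randn(1, 8, 16, 16))
print(out.shape)  # torch.Size([1, 8, 16, 16])
```

In this scheme only the prompt parameters and the task head receive gradients; the backbone stays frozen.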
Then we adopt a multi-scale convolutional layer with \(group=4\) [13] and a linear layer to generate the segmentation results.

## 3 Experiment Results

### Datasets

We evaluate our algorithm on three medical segmentation tasks, namely blood vessel segmentation, lesion segmentation, and retinal layer segmentation, using nine publicly available datasets: three color fundus datasets, FIVES [14], HRF [15], and CHASEDB [16], for vessel segmentation; four OCTA datasets, ROSE-1 [17], ROSE-2 [17], OCTA-6M [18], and OCTA-3M [18], for vessel segmentation; one lesion segmentation dataset, iDRiD [19]; and one retinal layer segmentation dataset, ARoI [20].

### Evaluation Metrics

The metrics used to evaluate segmentation performance are Precision (P), Recall (REC), Dice, Bookmaker Informedness (BM), and Intersection over Union (IoU), defined as follows: **Precision (P):** \[P=\frac{TP}{TP+FP} \tag{3}\] **Recall (REC):** \[REC=\frac{TP}{TP+FN} \tag{4}\] **Dice:** \[Dice=\frac{2*TP}{2*TP+FN+FP} \tag{5}\] **Bookmaker Informedness (BM):** \[BM=\frac{TP}{TP+FN}+\frac{TN}{TN+FP}-1 \tag{6}\] **IoU:** \[IoU=\frac{TP}{TP+FN+FP} \tag{7}\] where \(TP,FP,TN\) and \(FN\) are "True Positive", "False Positive", "True Negative" and "False Negative", respectively.

### Implementation Details

We implement the models with the PyTorch [21] framework, and all experiments are run on a machine with one NVIDIA Tesla A100 graphics card. Mini-batch stochastic gradient descent (SGD) with a momentum of 0.9 and a weight decay of 0.0005 is applied to optimize the model. A poly learning-rate schedule [22] is adopted to set the learning rate dynamically during training, according to \(lr=init\_lr\times(1-\frac{iter}{max\_iter})^{power}\), with \(init\_lr=0.05\) and \(power=0.9\). Mixed-precision training is adopted to save memory.
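The metrics in Eqs. (3)–(7) and the poly learning-rate schedule can be sketched directly (illustrative helper functions written for this summary, not taken from the paper's code):

```python
def seg_metrics(tp, fp, tn, fn):
    """Precision, Recall, Dice, Bookmaker Informedness, and IoU
    computed from confusion-matrix counts, as in Eqs. (3)-(7)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fn + fp)
    bm = tp / (tp + fn) + tn / (tn + fp) - 1
    iou = tp / (tp + fn + fp)
    return precision, recall, dice, bm, iou

def poly_lr(init_lr, step, max_step, power=0.9):
    """Poly schedule: lr = init_lr * (1 - step/max_step) ** power."""
    return init_lr * (1 - step / max_step) ** power

print(seg_metrics(tp=80, fp=20, tn=880, fn=20))  # (0.8, 0.8, 0.8, ~0.778, ~0.667)
print(poly_lr(0.05, step=0, max_step=1000))      # 0.05 at the first step
```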
### Experimental Results

We first conduct one-shot learning experiments on three different segmentation tasks: vessel segmentation on two-modal images, lesion segmentation, and OCT layer segmentation. The results are shown in Table 1 and Figure 3.

Figure 2: Structure of the framework and our proposed prompt layer.

For the **vessel segmentation task**, we use images from two modalities, color fundus and OCTA, to evaluate performance. The results are shown in Table 1, from which we can see that SAM with our proposed prompt achieves performance on the ROSE-2 OCTA vessel segmentation task comparable to supervised models trained on the whole dataset, including U-Net [23] and OCTA-Net [17]. From the qualitative results in Figure 3, SAM with our proposed prompt segments the large vessels in OCTA correctly, but cannot distinguish the small or tiny vessels. For vessel segmentation on color fundus, our SAM with the learnable prompt achieves strong results on the FIVES dataset. Our model attempts to segment and distinguish some small vessels, but the classification of small or tiny vessels is still an obstacle to applying SAM to vessel segmentation; we discuss this later. We conduct **OCT layer segmentation** on the ARoI dataset. As shown in Table 1, the model's Dice metric rises by about 25%. We visualize the segmentation results in Figure 3, which illustrates that the proposed model achieves good layer results compared to the original model. As OCT images are often acquired as volume data, we enlarged the training data to one volume to fine-tune the model. The performance improves further, suggesting that more training data increases the robustness of the model. Finally, we conduct one-shot **lesion segmentation** on the iDRiD dataset.
As shown in Table 1, although our model fails on the segmentation of microaneurysms (MA) and soft exudates (SE), it works well for hemorrhages (HE) and hard exudates (EX). The visualization in Figure 3 confirms this. We believe small medical objects are hard for SAM to classify, because some small objects may be lost during patch embedding, or their edge differences may not be obvious.

### Generalization

To verify the generalization of the model, we conduct a series of zero-shot experiments. For fairness, we excluded dataset tasks with excessive diversity when generalizing; therefore, this paper uses blood vessels in two modalities for the generalization experiments. We conduct zero-shot experiments for vessel segmentation on two color fundus datasets. As shown in Table 2, the model trained on FIVES performs well on the target datasets, HRF and CHASEDB. Interestingly, the model is not disturbed by the lesions in the images of the HRF dataset, as shown in Figure 4, which demonstrates the strong generalization ability of our Learnable Ophthalmology SAM. We then use ROSE-2, OCTA-3M, and OCTA-6M for vessel segmentation on OCTA images, as shown in Table 2. The model trained on ROSE-1 achieves good performance on datasets captured by both the same and different manufacturers. The visualization results are shown in Figure 4. OCTA-3M and OCTA-6M are captured by an OCTA device from the Optovue manufacturer, whose images contain more background, while the ROSE datasets come from a Zeiss device, whose background appears clean. The results show that the model produces good vessel segmentation on different OCTA datasets without training. Therefore, our Learnable Ophthalmology SAM performs well across various segmentation datasets.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Model & Dice & REC & BM & P \\ \hline \multirow{2}{*}{FIVES} & SAM & 15.72 & 82.17 & 15.91 & 8.69 \\ & SAM(Ours) & 80.90 & 79.53 & 78.22 & 82.31 \\ \hline \multirow{4}{*}{ROSE-1} & U-Net & 66.05 & - & - & - \\ & OCTA-Net & 75.76 & - & - & - \\ & SAM & 7.50 & 4.59 & 0.51 & 20.59 \\ & SAM (Ours) & 59.01 & 58.49 & 49.34 & 59.53 \\ \hline \multirow{4}{*}{ARoI} & SAM & 10.42 & 32.59 & 25.46 & 23.02 \\ & SAM(Ours) & 35.68 & 40.55 & 33.83 & 40.24 \\ & SAM(Ours)\({}^{*}\) & 44.59 & 48.82 & 45.98 & 43.51 \\ \hline \hline \multirow{4}{*}{IDRiD} & Model & MA & HE & EX & SE \\ \cline{1-1} & SAM & 0.14 & 2.75 & 3.71 & 1.42 \\ \cline{1-1} & SAM(Ours) & 0.00 & 39.54 & 62.96 & 0.00 \\ \hline \end{tabular} \end{table} Table 1: Segmentation results of the one-shot Prompt Learning SAM. \({}^{*}\) denotes results trained with one volume of the OCT dataset.

Figure 3: Visualization of the segmentation results on four tasks.

## 4 Discussion

The experiments above show the results of the learnable prompt inserted into SAM, which prove the effectiveness of our approach. During the experiments, we found that image quality seriously affects the segmentation results, as shown in Fig. 5. The FIVES dataset contains several images with blurred blood vessels or an obscured macular area, where our algorithm segments only parts of the blood vessels. Since OCTA is produced by combining a variety of B-scans, its image quality is often affected by eye movement, as in the example in the second row of Fig. 5, where our algorithm mistakes some background for blood vessels. Low OCT image quality is often caused by patients' limited cooperation; our algorithm fails in such low-quality conditions, which still hinders intelligent medical image analysis. There are also various tiny targets in medical images, such as tiny blood vessels or lesions, and the segmentation accuracy is low when the targets are very tiny, as shown in Fig. 6.
In the future, we may need to improve the prompt so that it intensifies tiny targets to deal with such conditions. The above experiments further confirm the feature extraction ability of SAM. Still, the uneven quality of medical images, large modal differences, diverse analysis targets, data privacy protection, and other issues make it difficult for SAM to be fully applicable in the medical field. Therefore, in order to assist doctors in diagnosis and treatment, it is still necessary to improve the accuracy of target segmentation, and we need to consider the specifics of medical scenarios, such as medical prior knowledge, before designing the model algorithm.

\begin{table} \begin{tabular}{c c c c c c} \hline Dataset & Model & Dice & REC & BM & P \\ \hline \multirow{2}{*}{HRF} & SAM & 16.10 & 79.21 & 11.98 & 8.96 \\ & SAM(Ours) & 69.16 & 76.47 & 72.74 & 63.13 \\ \hline \multirow{2}{*}{CHASEDB} & SAM & 8.16 & 29.74 & -14.90 & 4.73 \\ & SAM(Ours) & 76.37 & 82.98 & 80.42 & 70.74 \\ \hline \multirow{2}{*}{ROSE-2} & SAM & 6.36 & 5.30 & 1.14 & 7.96 \\ & SAM(Ours) & 23.33 & 41.65 & 27.01 & 16.20 \\ \hline \multirow{2}{*}{OCTA-3M} & SAM & 8.47 & 9.52 & 1.38 & 7.63 \\ & SAM(Ours) & 54.10 & 84.99 & 75.86 & 39.68 \\ \hline \multirow{2}{*}{OCTA-6M} & SAM & 4.31 & 2.84 & -0.10 & 8.94 \\ & SAM(Ours) & 55.28 & 77.63 & 67.14 & 42.93 \\ \hline \end{tabular} \end{table} Table 2: Zero-shot results for vessel segmentation.

Figure 4: Generalization results of two-modal images based on five datasets.

Figure 5: Some failed examples caused by low image quality.

## 5 Conclusions

In this paper, we proposed Learnable Ophthalmology SAM to solve the problem that SAM cannot effectively segment blood vessels or lesions in ophthalmology images. The experiments demonstrated the ability, effectiveness, and generalization of our proposed prompt layer on three segmentation tasks over nine publicly available datasets.
We also analyzed failure examples caused by image quality and tiny targets in the discussion. We believe that our proposed prompt can also be applied in other medical fields. ## 6 Fund This work was supported in part by The National Natural Science Foundation of China (8210072776), Guangdong Provincial Department of Education (2020ZDZX3043), Guangdong Basic and Applied Basic Research Foundation (2021A1515012195), Guangdong Provincial Key Laboratory (2020B121201001), and Shenzhen Stable Support Plan Program (20220815111736001).
2305.12412
EM Pre-training for Multi-party Dialogue Response Generation
Dialogue response generation requires an agent to generate a response according to the current dialogue history, in terms of which two-party dialogues have been well studied, but leaving a great gap for multi-party dialogues at the same time. Different from two-party dialogues where each response is a direct reply to its previous utterance, the addressee of a response utterance should be specified before it is generated in the multi-party scenario. Thanks to the huge amount of two-party conversational data, various pre-trained language models for two-party dialogue response generation have been proposed. However, due to the lack of annotated addressee labels in multi-party dialogue datasets, it is hard to use them to pre-train a response generation model for multi-party dialogues. To tackle this obstacle, we propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. Theoretical analyses and extensive experiments have justified the feasibility and effectiveness of our proposed method.
Yiyang Li, Hai Zhao
2023-05-21T09:22:41Z
http://arxiv.org/abs/2305.12412v1
# EM Pre-training for Multi-party Dialogue Response Generation ###### Abstract Dialogue response generation requires an agent to generate a response according to the current dialogue history, in terms of which two-party dialogues have been well studied, but leaving a great gap for multi-party dialogues at the same time. Different from two-party dialogues where each response is a direct reply to its previous utterance, the addressee of a response utterance should be specified before it is generated in the multi-party scenario. Thanks to the huge amount of two-party conversational data, various pre-trained language models for two-party dialogue response generation have been proposed. However, due to the lack of annotated addressee labels in multi-party dialogue datasets, it is hard to use them to pre-train a response generation model for multi-party dialogues. To tackle this obstacle, we propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. Theoretical analyses and extensive experiments have justified the feasibility and effectiveness of our proposed method. The official implementation of this paper is available at [https://github.com/EricLee8/MPDRG](https://github.com/EricLee8/MPDRG). ## 1 Introduction Inspired by the tremendous success in pre-training large language models (PLMs) in general domains Devlin et al. (2019); Clark et al. (2020); Radford et al. (2018), efforts have been made to train PLMs for dialogue response generation Zhang et al. (2020); Bao et al. (2020); Chen et al. (2022). However, they constrain the dialogues to be either two-party, or sequential structured (i.e. each utterance replies directly to its previous utterance). 
Different from them, a multi-party dialogue can involve multiple interlocutors, where each interlocutor can reply to any preceding utterances, making the response relations of the dialogue tree-structured and much more complicated Zhang et al. (2018); Le et al. (2019); Shi and Huang (2019); Wang et al. (2020). Besides, the speaker and addressee of a response utterance should be specified before it is generated in multi-party scenario, making the annotated data for multi-party dialogue response generation (MPDRG) less available. Figure 1 illustrates an example of MPDRG task taken from the Ubuntu IRC benchmark Hu et al. (2019). The upper part shows the tree-structured addressee relations of the dialogue, where the arrows point from addressees to speakers, and different colors represent different interlocutors. The middle part displays the content of the dialogue history, where U\({}_{7}\) is the response to be generated. The addressee (U\({}_{6}\)) and the speaker (#4) of it are given, and the content of this response is the target of our model. The lower part gives the human response, which is also called the ground truth reference. Previous works on MPDRG fine-tune generative PLMs on small multi-party dialogue datasets with explicit addressee annotations. They utilize the response annotations to form a tree-structured response graph, then encode the dialogue history using either homogeneous or heterogeneous Graph Neural Networks (GNNs) Hu et al. (2019); Gu et al. (2022). Nevertheless, none of them make attempts to pre-train a response generation model for multi-party dialogues due to the lack of large-scale corpora with annotated addressee labels. To solve the aforementioned problem of data scarcity, we propose an EM approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. 
Specifically, we treat the addressee of each utterance in the dialogue history as a discrete latent variable \(z\). During the E-steps, given the current dialogue history \(c_{t}\) and the response utterance \(r_{t}\), we model the distribution of the current addressee \(z_{t}\) as \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\), where \(\mathbf{\theta}\) denotes the current model parameters. During the M-steps, we sample \((c_{t},r_{t},z_{t})\) triplets from the distribution \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\) and optimize the generative model \(p(r_{t}|c_{t},z_{t};\mathbf{\theta})\) on these samples. As the iterations proceed, the accuracy of latent variable prediction and the quality of the generated responses grow together. It is worth noting that during these iterations, annotated addressee labels are not required, which makes it possible to leverage the huge amount of multi-party dialogue corpora without addressee labels. We provide theoretical analyses to prove the feasibility of our EM method, and conduct experiments on the Ubuntu IRC benchmark, which is used in previous works (Hu et al., 2019; Gu et al., 2022). The contributions of our work can be summarized in the following three folds: * To the best of our knowledge, we are the first to study the pre-training of multi-party dialogue response generation, which is much more challenging and complicated than that of two-party dialogues. * We put forward an EM approach to alleviate the scarcity of multi-party dialogue data with addressee labels, making it possible to pre-train a model with a huge amount of unlabeled corpora. * We provide theoretical analyses to prove the feasibility of our EM pre-training method, and experimental results on the Ubuntu IRC benchmark show that our pre-trained model achieves state-of-the-art performance compared with previous works.
## 2 Related Works ### Pre-training for Response Generation In recent years, researchers have gradually drawn their attention from retrieval-based dialogue systems to generation-based ones. Thanks to the huge amount of two-party dialogue corpora, various PLMs for two-party dialogue response generation have been proposed. Zhang et al. (2020) propose DialoGPT, which utilizes the sequential response chains in the Reddit Corpus to pre-train an auto-regressive response generation model based on the architecture of GPT (Radford et al., 2018). Different from their work, which focuses on sequential dialogue history, our work aims to solve the case where the agent can respond to any previous utterance in a tree-structured dialogue history. Bao et al. (2020) propose PLATO, which models the conversational intents as \(K\) discrete latent variables, then utilizes response selection, bag-of-words prediction, and language modeling objectives to train the model. DialogVED (Chen et al., 2022) further extends the discrete latent variables to continuous ones, and models them with a multi-variable Gaussian distribution. It utilizes KL divergence reduction to optimize the parameters of the latent distribution and applies masked language modeling, response generation, and bag-of-words prediction to train the whole model. PLATO and DialogVED focus on two-party conversations, and the conversational intents they put forward have no corresponding concepts of actual entities (e.g., intent to argue, intent to end a conversation, and so on). Distinct from their works, we lay emphasis on multi-party dialogues, and the latent variables of our method have actual meanings: variable \(z_{t}=j\) indicates that the addressee of the response at the \(t_{th}\) turn is the \(j_{th}\) utterance. ### Multi-party Dialog Response Generation Several previous works have studied the MPDRG task. Hu et al. 
(2019) extract a subset of the Ubuntu Dialogue Corpus (Lowe et al., 2015) with explicit addressee labels to construct the Ubuntu IRC benchmark, where they propose a Graph Structured Neural Network (GSN) for dialogue modeling. Specifically, they first treat each utterance of a dialogue as a node and the addressee relations as edges to construct a dialogue graph, then make use of GNNs to encode the dialogue history. Finally, they adopt a Gated Recurrent Unit (GRU) with cross attention as the decoder to generate responses. Gu et al. (2022) put forward HeterMPC, which models the dialogue history as a heterogeneous graph. In detail, they first design six types of edges (reply and replied-by, address and addressed-by, speak and spoken-by) among two kinds of nodes (interlocutor nodes and utterance nodes), and then encode the dialogue history using Transformers Vaswani et al. (2017) together with heterogeneous GNNs. Finally, they utilize a Transformer Decoder to generate responses. Instead of fine-tuning models on a small dataset with annotated addressee labels as these existing works did, our work focuses on the utilization of large unlabeled corpora to pre-train a response generation model for multi-party dialogues.

Figure 1: An example of the multi-party dialogue response generation task, better viewed in color.

## 3 Methodology

To design a model for multi-party dialogue response generation and make it compatible with the EM training algorithm, there are two important things to consider: how to model \(p(r_{t}|c_{t},z_{t};\mathbf{\theta})\) in the maximization step, and how to compute \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\) in the expectation step. In this section, we will first address these two problems, then mathematically derive the feasibility of our EM pre-training algorithm.
### Task Formulation Given an input sequence of the dialogue history and the speaker of the response at time step \(t\), \(\mathbb{X}=\{\mathrm{S}_{1}\colon\mathrm{U}_{1}\,[\mathrm{SEP}]\,\mathrm{S}_{2}\colon\mathrm{U}_{2}\,[\mathrm{SEP}]\dots\mathrm{S}_{t-1}\colon\mathrm{U}_{t-1}\,[\mathrm{SEP}]\,\mathrm{S}_{t}\colon\}\), together with the addressee of the response \(z_{t}=j\), our goal is to train a model that can generate a response \(\mathbb{Y}=\mathrm{U}_{t}\). Here each \(\mathrm{S}_{i}\) is the name of the speaker at time step \(i\), represented as _Speaker_#\(S_{i}\) like those in Figure 1. \(\mathrm{U}_{i}=\{w_{i1},w_{i2},\dots,w_{in_{i}}\}\) is the content of the \(i_{th}\) utterance with \(n_{i}\) words. \(z_{t}=j\) means that \(\mathrm{S}_{t}\) speaks to \(\mathrm{S}_{j}\), who uttered \(\mathrm{U}_{j}\), and \([\mathrm{SEP}]\) is a special token that indicates the end of a dialogue turn. ### Addressee Modeling In this section, we answer the first question: how to model \(p(r_{t}|c_{t},z_{t};\mathbf{\theta})\), or in other words, how to incorporate the addressee information \(z_{t}=j\) into the process of generating a response \(r_{t}\). We design a straightforward method that adds addressee embeddings to the positional encodings and word embeddings before they are further encoded by a PLM. The left part of Figure 2 illustrates this method, where we use an embedding look-up table with \(2\) entries to indicate whether a word belongs to the addressee utterance or not. Specifically, if a word is in the addressee utterance, it gets its addressee embedding from entry \(1\), otherwise from entry \(0\). Since addressee modeling is not the key contribution of this work, we just adopt the most straightforward and effective way. In our experiments, we use BART Lewis et al. (2020) as the backbone PLM, following previous works Gu et al. (2022).
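The two-entry addressee embedding can be sketched as follows (an illustrative fragment with hypothetical sizes, not the authors' code): a 0/1 mask marks the tokens of the addressee utterance \(\mathrm{U}_{j}\), indexes a two-entry embedding table, and the result is added to the word and positional embeddings before encoding.

```python
import torch
import torch.nn as nn

vocab_size, max_len, dim = 100, 32, 16   # hypothetical sizes
word_emb = nn.Embedding(vocab_size, dim)
pos_emb = nn.Embedding(max_len, dim)
addr_emb = nn.Embedding(2, dim)  # entry 1: token inside the addressee utterance

input_ids = torch.randint(0, vocab_size, (1, 10))
addressee_mask = torch.tensor([[0, 0, 1, 1, 1, 0, 0, 0, 0, 0]])  # tokens of U_j
positions = torch.arange(10).unsqueeze(0)

# Summed embeddings are what the PLM encoder would consume.
h = word_emb(input_ids) + pos_emb(positions) + addr_emb(addressee_mask)
print(h.shape)  # torch.Size([1, 10, 16])
```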
Due to the page limit, the well-known architectures of Transformer and BART are omitted here.

Figure 2: The overview of our model architecture. The left part shows how we incorporate the addressee information into response generation by adding addressee embeddings. The right part illustrates a Bayesian Network of how a response is generated given the current dialogue history \(c_{t}\) and the addressee \(z_{t}\).

### Latent Variable Prediction

In this section, we answer the second question: how to compute \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\) in the expectation step, or in other words, how to predict the distribution of the unlabeled addressee \(z_{t}\), given the current dialogue context \(c_{t}\), response \(r_{t}\), under parameters \(\mathbf{\theta}\). The solution to this question is essentially the most important part of our method since it delicately solves the problem of data scarcity in MPDRG. Let's consider what humans do to participate in a multi-party conversation. First, we read the dialogue history \(c_{t}\), then choose an addressee \(z_{t}\) to reply to. Once \(c_{t}\) and \(z_{t}\) are determined, we utter a response according to the content of the whole dialogue and the addressee utterance. The right part of Figure 2 gives the Bayesian Network of the above process, where the joint distribution of \((c_{t},z_{t},r_{t})\) can be factorized as: \[p(c,z,r)=p(c)\cdot p(z|c)\cdot p(r|c,z) \tag{1}\] Here we omit the subscript \(t\) and model parameters \(\mathbf{\theta}\) for simplicity. Given Eq. (1), \(p(z|c,r;\mathbf{\theta})\) can be derived as: \[p(z|c,r) =\frac{p(c,z,r)}{p(c,r)} \tag{2}\] \[=\frac{p(c)\cdot p(z|c)\cdot p(r|c,z)}{p(c)\cdot p(r|c)}\] \[=\frac{p(z|c)\cdot p(r|c,z)}{p(r|c)}\] We assume that the probability of choosing any previous utterance as the addressee is the same given the current dialogue history, which means \(p(z|c)\) obeys a uniform distribution.
Meanwhile, the denominator \(p(r|c)\) is independent of \(z\), leaving only the term \(p(r|c,z)\). Now, we can deduce that: \[p(z|c,r)\propto p(r|c,z) \tag{3}\] Therefore, for each \(z^{i},i=1,2,\ldots,t-1\), we have: \[p(z^{i}|c,r)=\frac{p(r|c,z^{i})}{\sum_{j=1}^{t-1}p(r|c,z^{j})} \tag{4}\] In practice, we can use the generative model \(p(r_{t}|c_{t},z_{t};\mathbf{\theta})\) to compute the probability distribution \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\) by Eq. (4). ### Expectation-Maximization Process Figure 3 illustrates the overview of our EM training process. During the E-steps, we compute the probability distribution of the latent variable (the addressee \(z\)). During the M-steps, we sample \((c,r,z)\) triplets from this distribution and optimize the generative model with standard training algorithms. **The Expectation Step** computes the conditional distribution of the latent variable \(z_{t}\), given the observed data \((c_{t},r_{t})\) and the current model parameters \(\mathbf{\theta}\), where Eq. (4) gives a reasonable approximation of this value. Specifically, for a sample \((c_{t},r_{t})\), with the model parameters \(\mathbf{\theta}\) fixed, we first calculate the un-normalized probability of each \(i_{th}\) (\(i<t\)) utterance being the addressee, \(p(r_{t}|c_{t},z_{t}^{i};\mathbf{\theta})\), using Eq. (3), then normalize these values to get the conditional distribution of \(z_{t}\) using Eq. (4). Once \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\) is obtained, we sample \((c_{t},r_{t},z_{t})\) triplets from this distribution, which are further used in the maximization step. **The Maximization Step** is analogous to the normal training process.
Given the sampled \(\{(c_{t}^{k},r_{t}^{k},z_{t}^{k})\}_{k=1}^{N}\) triplets, where \(N\) is the total number of samples, our goal is to minimize the auto-regressive language modeling loss: \[\mathcal{L}_{G}=-\sum_{k=1}^{N}\sum_{i=1}^{n_{k}}\log p\left(w_{i}^{k}\mid w_{<i}^{k},c_{t}^{k},z_{t}^{k};\mathbf{\theta}\right) \tag{5}\] where \(w_{i}^{k}\) is the \(i_{th}\) word in the response of the \(k_{th}\) sample, \(r_{t}^{k}=\{w_{i}^{k}\}_{i=1}^{n_{k}}\), and \(n_{k}\) is the length of this response. **Compared with the vanilla EM algorithm**, there are several differences in our implementation. First, we do not use the initial model to generate the training data for the first round of the maximization step. Instead, we utilize the discourse parser provided by Shi and Huang (2019) to predict the addressee of each utterance in the unlabeled corpus to get a coarse initial training dataset. The reason for this initialization method is that the initialization of training data (or model parameters) is vital to the EM method and helps it converge to a better point. Second, rather than sampling \(z_{t}\) from its conditional distribution, we adopt a hard EM approach which takes the value \(z_{t}^{i}\) with the highest probability as the predicted label, where \(i=\operatorname*{arg\,max}_{i}p(z_{t}^{i}|c_{t},r_{t};\mathbf{\theta})\). This hard EM approach has been shown to be more effective in boosting performance [10]. Finally, to ensure the quality of the generated training data in the maximization step, we set a hyper-parameter \(\alpha\in[0,1]\) to control the proportion of training data that is actually used.

Figure 3: The overview of the EM process, where the expectation steps and maximization steps are performed alternately and iteratively.
Specifically, we first rank the prediction confidence of each \(z_{t}^{k}\) according to the value of \(p(z_{t}^{k}|c_{t}^{k},r_{t}^{k};\mathbf{\theta})\), then pick the top \(\alpha\times N\) samples with the highest confidence scores. In our experiments, \(\alpha\) is dynamically set to ensure that the addressee prediction accuracy of the selected samples is over \(80\%\) on an annotated validation set.

### Proof of Feasibility

In a multi-party dialogue corpus without annotated addressee labels, the usual way to train a response generation model is to maximize the marginal log-likelihood (or incomplete log-likelihood) over all possible addressees: \[\ell(c,r;\mathbf{\theta})=\log p(r|c;\mathbf{\theta})=\log\sum_{i}p(r,z_{i}|c;\mathbf{\theta}) \tag{6}\] However, this objective is hard to optimize since the distribution of \(z\) is hard to obtain. Here, we define an expected complete log-likelihood, where our estimation of \(p(z_{t}|c_{t},r_{t};\mathbf{\theta})\) comes to the rescue: \[\hat{\ell}(c,r;\mathbf{\theta})=\sum_{i}q(z_{i})\log p(r,z_{i}|c;\mathbf{\theta}),\qquad q(z)=p(z_{t}|c_{t},r_{t};\mathbf{\theta}) \tag{7}\] Our new objective now becomes maximizing the expected complete log-likelihood.
The relation between \(\ell\) and \(\hat{\ell}\) can be derived as follows: \[\begin{split}\ell(c,r;\mathbf{\theta})&=\log\sum_{i}p(r,z_{i}|c;\mathbf{\theta})\\ &=\log\sum_{i}q(z_{i})\cdot\frac{p(r,z_{i}|c;\mathbf{\theta})}{q(z_{i})}\\ &\geq\sum_{i}q(z_{i})\cdot\log\frac{p(r,z_{i}|c;\mathbf{\theta})}{q(z_{i})}\\ &=\sum_{i}q(z_{i})\cdot\log p(r,z_{i}|c;\mathbf{\theta})-\sum_{i}q(z_{i})\cdot\log q(z_{i})\\ &=\hat{\ell}(c,r;\mathbf{\theta})+\mathcal{H}_{q(z)}\end{split} \tag{8}\] where the third line follows from _Jensen's inequality_, and \(\mathcal{H}_{q(z)}\) is the entropy of the distribution of \(z\). Since \(\mathcal{H}_{q(z)}\geq 0\), we can derive that \(\hat{\ell}(c,r;\mathbf{\theta})\leq\ell(c,r;\mathbf{\theta})\), which means \(\hat{\ell}\) is a lower bound of \(\ell\). By maximizing the lower bound \(\hat{\ell}\), we can indirectly maximize \(\ell\), which is originally hard to optimize. Another important observation is that \(\hat{\ell}=\ell\) if and only if \(q(z)=p(z_{t}|c_{t},r_{t};\mathbf{\theta})\), which is exactly what we calculate during the E-steps in Eq. (7). Though the derivation of the posterior distribution of \(z\) is not exact, since we assume a uniform prior in Eq. (2), it is still much closer to the real distribution than a random \(q(z)\). It is worth noting that this algorithm is not guaranteed to reach the global optimum; the outcome depends heavily on the initialization of the model parameters or the training data for the first round of the maximization step. This explains why we utilize a discourse parser to obtain a coarse initial training dataset instead of using the expectation step at the first iteration, as described in Section 3.4.
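The bound above is easy to verify numerically on a toy distribution. The following sketch (illustrative numbers, not model outputs) checks that \(\hat{\ell}+\mathcal{H}_{q}\leq\ell\) for an arbitrary \(q\), with equality when \(q\) equals the posterior:

```python
import math

def log_marginal(joint):
    # l = log sum_i p(r, z_i | c)  -- Eq. (6)
    return math.log(sum(joint))

def elbo(joint, q):
    # expected complete log-likelihood plus entropy: l_hat + H_q  -- Eq. (8)
    l_hat = sum(qi * math.log(pi) for qi, pi in zip(q, joint) if qi > 0)
    entropy = -sum(qi * math.log(qi) for qi in q if qi > 0)
    return l_hat + entropy

joint = [0.02, 0.06, 0.12]                   # toy values of p(r, z_i | c)
posterior = [p / sum(joint) for p in joint]  # q(z) = p(z | c, r)
uniform = [1 / 3] * 3                        # an arbitrary (worse) q(z)
```

Plugging in the posterior closes the Jensen gap exactly, while any other \(q\) leaves the ELBO strictly below the marginal log-likelihood, mirroring the equality condition noted above.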
## 4 Experiments

In this section, we first introduce the datasets used to pre-train and evaluate our model, then present the experimental results and comparisons with previous methods.

### Datasets and Experimental Setups

For pre-training, we adopt the second version of the Ubuntu Dialogue Corpus [14], which contains no annotated addressee labels. The original dataset contains \(1\)M dialogues for training, and \(0.5\)M dialogues for validation and testing, respectively. Dialogues that contain fewer than \(4\) turns, or overlap with the dataset for the downstream task (the Ubuntu IRC benchmark; Hu et al., 2019), are excluded from the pre-training data. After filtering, we eventually obtain a pre-training dataset that contains 764,373 dialogues. For fine-tuning, we follow previous works [10, 11] in adopting the Ubuntu IRC benchmark, which is constructed by extracting all utterances with response addressees indicated by the "@" symbol in the Ubuntu Dialogue Corpus. In total, this dataset consists of 311,725 dialogues for training, and 5,000 dialogues for validation and testing, respectively. It is worth noting that this dataset contains addressee labels for every single utterance in the dialogue history, which are utilized by previous methods but not by ours. For both pre-training and fine-tuning, BART [14] is used as the backbone model. Before pre-training, we initialize the model with the pre-trained weights of BART-base. During pre-training, we evaluate our model on the validation set of the Ubuntu IRC benchmark, and the best checkpoint is saved for the fine-tuning process.

### Baseline Models and Evaluation Metrics

Table 1 shows the results of our method and previous models, where GPT-2, GSN, and HeterMPC (Radford et al., 2018; Hu et al., 2019; Gu et al., 2022) are introduced in Sections 2.1 and 2.2, respectively. BART is a sequence-to-sequence model with an encoder-decoder Transformer architecture, trained using denoising objectives. Following Hu et al.
(2019), we also adopt BLEU-1 to BLEU-4, METEOR, and ROUGE-L as the automatic evaluation metrics, which can be calculated using the _pycocoevalcap_ package. Besides automatic evaluation, human evaluation is also conducted and will be introduced in Section 4.4.

### Automatic Evaluation Results

Let us first focus on the upper and middle parts of Table 1, where we present the results of previous models and our methods. We experiment with three settings of our method based on BART: pre-training only (PO), fine-tuning only (FO), and pre-training-fine-tuning (PF). Results of PO are obtained by directly using the pre-trained model to generate the response for each dialogue. FO means the checkpoint of BART is directly fine-tuned on the Ubuntu IRC benchmark without pre-training. PF follows a pre-training-fine-tuning paradigm, where the best checkpoint of the pre-training process is further fine-tuned on the downstream dataset. Three observations can be made from the table. First of all, pre-training with our proposed EM method on the unlabeled corpus alone already achieves results comparable to the previous state-of-the-art (SOTA) models. This is surprising, since the pre-training requires no annotated addressee labels, while previous models not only utilize the addressee information of the response utterance but also make use of the addressee labels of the dialogue history to form a response graph. Second, fine-tuning our model on the downstream dataset with the ground truth addressee labels yields better results than pre-training only. Since it uses the ground truth addressee labels of responses, its results can be regarded as an upper bound on what the EM training can achieve. Besides, FO outperforms the previous SOTA model by large margins with an even simpler architecture and fewer annotations (without addressee labels in the dialogue history), demonstrating the effectiveness of our proposed addressee embeddings.
Finally, by further fine-tuning the pre-trained checkpoint with the ground truth addressee labels, we achieve the best performance on all metrics, which shows the transferability of our pre-trained model. \begin{table} \begin{tabular}{l r r r r r r} \hline \hline Model & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & METEOR & ROUGE-L \\ \hline \hline GPT-2 (Radford et al., 2018) & 10.37 & 3.60 & 1.66 & 0.93 & 4.01 & 9.53 \\ GSN (Hu et al., 2019) & 10.23 & 3.57 & 1.70 & 0.97 & 4.10 & 9.91 \\ HeterMPC\({}_{\text{BART}}\)(Gu et al., 2022) & 12.26 & 4.80 & 2.42 & 1.49 & 4.94 & 11.20 \\ \hline BART (Lewis et al., 2020) & 11.25 & 4.02 & 1.78 & 0.95 & 4.46 & 9.90 \\ Pre-training Only (PO) & 11.78 & 4.67 & 2.38 & 1.41 & 4.98 & 11.19 \\ Fine-tuning Only (FO) & 11.47 & 5.11 & 2.98 & 2.11 & 5.23 & 11.31 \\ Pre-training + Fine-tuning (PF) & **12.31** & **5.39** & **3.34** & **2.45** & **5.52** & **11.71** \\ \hline FO + Reply-Chain & \(9.11\) & \(3.52\) & \(1.99\) & \(1.35\) & \(4.32\) & 9.36 \\ PO w/o EM & \(10.03\) & \(3.90\) & \(2.03\) & \(1.18\) & \(4.56\) & 9.66 \\ PF w/o EM & \(11.39\) & \(5.04\) & \(3.02\) & \(2.15\) & \(5.27\) & 11.20 \\ Denoising + Fine-tuning & \(11.49\) & \(5.08\) & \(3.02\) & \(2.13\) & \(5.25\) & 11.28 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the Ubuntu IRC benchmark, where the upper part presents models of previous works, the middle part shows our backbone model BART together with our method under different settings, and the lower part shows the ablation studies.
\begin{table} \begin{tabular}{l r r r} \hline \hline Model & Score & Kappa & Best (\%) \\ \hline \hline Human References & \(2.20\) & \(0.56\) & \(28.00\) \\ \hline BART & \(1.68\) & \(0.45\) & \(8.00\) \\ HeterMPC\({}_{\text{BART}}\) & \(1.88\) & \(0.48\) & \(8.00\) \\ Ours (PF) & **1.92** & \(0.47\) & **28.00** \\ \hline \hline \end{tabular} \end{table} Table 2: Human evaluation results, where _Score_ is the average score and _Best_ means the ratio of each system being the best response.

### Human Evaluation Results

For human evaluation, we recruit a team of 8 members who have at least a Bachelor's degree in Computer Science and are familiar with Ubuntu and Linux. We randomly sample \(100\) examples from the testing set, then ask the team members to score each prediction and select the best one. The quality scores cover three independent aspects: 1) relevance, 2) fluency, and 3) informativeness. Each is scored from 0 to 3, and the average values are reported. The evaluation results are shown in Table 2, where our model (Pre-training + Fine-tuning) consistently outperforms vanilla BART and the previous SOTA model \(\text{HeterMPC}_{\text{BART}}\). We also report Fleiss's kappa to indicate the agreement between annotators. Besides, the ratio of our predictions being the best response is the same as that of human responses, demonstrating the high quality of the responses generated by our model.

## 5 Analysis

In order to get more insights into the proposed EM pre-training method, we dive deeper into it by conducting extensive analyses.

### Ablation Study

We conduct ablation studies to investigate the contribution of our different designs, whose results are tabulated in the lower part of Table 1. First, let us focus on the first line of the lower part.
To study whether utterances that are not in the reply chain of the current addressee can help to generate a better response, we extract the reply chain by traversing from the current leaf utterance (the response) up to the root node (the first utterance), then train a model on this chain only. We see a large performance drop on all metrics in this setting, demonstrating the significance of the side information provided by the whole context. Second, let us turn to the second and third lines of the lower part. In order to study the effect of the EM pre-training process, which is the key contribution of our work, we remove this process and pre-train a model using only the addressee labels obtained from the discourse parser (i.e., the initial training data used in the first iteration of our EM approach). A sharp performance drop is observed compared with PO and PF under our proposed EM pre-training strategy, demonstrating the significance of our design. Without the iterative EM procedure, the noisy addressee labels obtained from the discourse parser can cause error propagation, which makes the model learn noisy features for predicting a response and hurts performance. Finally, to investigate whether the performance gains come from seeing more in-domain data during pre-training, we use the same pre-training data to train another model with the denoising objectives proposed in BART (Lewis et al., 2020), then also fine-tune it on the Ubuntu IRC benchmark. The last line of the lower part presents the results, where we observe nearly the same performance as FO. This observation indicates that simply performing domain adaptation with general pre-training objectives is insufficient to benefit the MPDRG task.

### Response Generation vs. Addressee Prediction

In Section 3.3, we prove that \(p(z|c,r)\propto p(r|c,z)\).
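This proportionality is what lets the generation model double as an addressee scorer. A minimal sketch of the idea (the `gen_logprob` scorer is a stand-in for the generation model of Eq. (4); in the paper the candidates are scored with the actual decoder):

```python
import math

def predict_addressee(gen_logprob, context, response, candidates):
    """Pick z_hat = argmax_z p(r | c, z), which equals argmax_z p(z | c, r)
    under the uniform-prior assumption of Eq. (2)."""
    scores = {z: gen_logprob(context, response, z) for z in candidates}
    z_hat = max(scores, key=scores.get)
    # normalized confidence, usable for the top-alpha filtering of Section 3.4
    total = sum(math.exp(s) for s in scores.values())
    return z_hat, math.exp(scores[z_hat]) / total
```

The returned confidence is exactly the quantity used to rank pseudo-labels when selecting the top \(\alpha\times N\) samples during the E-steps.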
To verify the correctness of this equation and to investigate the training process of our EM strategy, we plot the BLEU-4 score and the addressee prediction accuracy of the top-30% confidence samples on the validation set as the number of pre-training iterations increases. The addressees are predicted using Eq. (4), where we take the \(z^{i}\) with the highest conditional probability as the predicted addressee. Figure 4 illustrates the trends of the BLEU-4 score and addressee prediction accuracy. On the one hand, we see that the trends of both metrics are consistent, which means that with a more powerful response generation model comes a higher addressee prediction accuracy. This observation verifies the correctness of Eq. (3). On the other hand, with increasing iterations both metrics grow together, reaching their peaks at around the 6th iteration, demonstrating the effectiveness of the EM process.

Figure 4: Line chart of the BLEU-4 score and addressee prediction accuracy with the increase of EM iterations.

### Case Studies

To understand the effect of our method intuitively, we sample two cases from the testing set and present them in this section. Figure 5 illustrates an example whose addressee relations and dialogue history are shown in Figure 1. This conversation is about how to run _compiz_ or _beryl_ on a _comp_ with \(256\)MB RAM. _Speaker #2_ points out that _it's the graphic card that is important_, but _Speaker #4_ seems unsatisfied, saying _that didn't tell me much_. After that, _Speaker #5_ suggests using _rdesktop_ and _Speaker #4_ replies to him/her. Our model is able to capture the key information _rdesktop_ and _terminal_ in the addressee utterance \(\mathbf{U_{6}}\), and generates a proper response _Well, how do I install dr desktop from the terminal_, which is very close to the human answer and even better with the additional information _from the terminal_.
On the contrary, the baseline model (BART) fails to capture the addressee information and simply replies with a safe response _I tried but it didn't work_. This case shows the great significance of modeling the addressee information, and also demonstrates the effectiveness of our model design. Figure 6 presents another example sampled from the testing set, where we investigate how different addressee labels affect the generated responses. In the figure, different colors represent different utterances in the _Dialogue History_ part, and the different responses generated by giving the corresponding utterances as the addressee in the _Generated Responses_ part. This conversation discusses which file system in Ubuntu can be shared on a network with Windows machines. When the addressee is given as \(\mathbf{U_{1}}\), our model suggests using _samba_, which is a solution to the question of \(\mathbf{U_{1}}\). Responses to \(\mathbf{U_{2}}\) and \(\mathbf{U_{3}}\) resemble safe responses, but they make sense in their contexts: the former expresses confusion about a confusing utterance (\(\mathbf{U_{2}}\)), and the latter expresses gratitude for the suggestion in \(\mathbf{U_{3}}\). The response to \(\mathbf{U_{4}}\) states the model's understanding of \(\mathbf{U_{4}}\) and asks whether this understanding is right. The response to \(\mathbf{U_{5}}\) acknowledges the solution _gentoo_ in \(\mathbf{U_{5}}\) by saying _using gentoo on my computer too_. In general, this case demonstrates the ability of our model to generate diverse responses according to the specified addressee and the contexts of the dialogue history.

### Response Parser: A Byproduct for Free

Another contribution of our EM pre-training is that a response parser can be obtained for free. This byproduct comes from Eq. (4): given a response generation model with addressee modeling, we can predict the addressee of each utterance in the dialogue.
Previous literature has studied and shown that explicitly modeling structural information is beneficial for understanding structured data (Li et al., 2020, 2022a, 2022b). In this context, the response parser can be used to infer discourse structures, which contributes to boosting the performance of multi-party dialogue comprehension tasks such as response selection and question answering (Jia et al., 2020; Li and Zhao, 2021; Ma et al., 2022).

Figure 5: The first example of Case Studies, which shows the generated responses of our model and the baseline model.

Figure 6: The second example of Case Studies, which illustrates the generated response of our model given different addressee labels. Better viewed in color.

## 6 Conclusion

Most multi-party dialogue corpora are not annotated with addressee labels, making them unable to support the pre-training of response generation models. To solve this problem, we design a simple yet effective way to model the addressee of a response as a latent variable and propose an EM pre-training approach that iteratively performs expectation steps to generate addressee labels and maximization steps to optimize a response generation model. Mathematical derivation, experimental results on the Ubuntu IRC benchmark, and extensive analyses have justified the theoretical feasibility and actual effectiveness of our method.

## Limitations

First, due to the lack of datasets to evaluate the MPDRG task, we perform our experiments only on the Ubuntu IRC benchmark and pre-train our model only on the domain of Ubuntu chats. However, the potential of our approach goes far beyond that, since it is applicable to any open-domain multi-party dialogue dataset. In future work, we will consider applying our method to more open-domain conversational datasets, such as the transcripts of TV series or movies.
Additionally, the pre-training process solely relies on the addressee information of individual turns, disregarding the reply-to relations within the dialogue history. This oversight prevents the model from benefiting from valuable contextual cues necessary for a comprehensive understanding of the multi-party dialogue. In our future work, we will explore the integration of discourse-level reply-to relations into the pre-training process to further enrich the capabilities of the model.
2307.04409
Violation of a Leggett-Garg inequality using ideal negative measurements in neutron interferometry
We report on an experiment that demonstrates the violation of a Leggett-Garg inequality (LGI) with neutrons. LGIs have been proposed in order to assess how far the predictions of quantum mechanics defy macroscopic realism. With LGIs, correlations of measurements performed on a single system at different times are described. The measured value of K = 1.120 +/- 0.007, obtained in a neutron interferometric experiment, is clearly above the limit K = 1 predicted by macro-realistic theories.
Elisabeth Kreuzgruber, Richard Wagner, Niels Geerits, Hartmut Lemmel, Stephan Sponar
2023-07-10T08:17:49Z
http://arxiv.org/abs/2307.04409v1
# Violation of a Leggett-Garg inequality using ideal negative measurements

###### Abstract

We report on an experiment that demonstrates the violation of a Leggett-Garg inequality (LGI) with neutrons. LGIs have been proposed in order to assess how far the predictions of quantum mechanics defy 'macroscopic realism'. With LGIs, correlations of measurements performed on a single system at different times are described. The measured value of \(K=1.120\pm 0.007\), obtained in a neutron interferometric experiment, is clearly above the limit \(K=1\) predicted by macro-realistic theories.

_Introduction._--The question whether measurable quantities of a quantum object have definite values prior to the actual measurement has been a fundamental issue ever since quantum theory was introduced more than a century ago. Examples include Bell's inequality [1; 2], which sets bounds on correlations between measurement results of space-like separated components of a composite (entangled) system. A violation of Bell's inequality thus demonstrates that certain predictions of quantum mechanics cannot be reproduced by realistic theories, more precisely, by local hidden variable theories (LHVT). Another prime example is found in the Kochen-Specker theorem [3], which stresses the incompatibility of quantum mechanics with a larger class of hidden-variable theories, known as noncontextual hidden-variable theories (NCHVTs). Here it is assumed that the result of a measurement of an observable is predetermined and independent of a suitable (previous or simultaneous) measurement of any other compatible (co-measurable or commuting) observable, i.e., the measurement context. While both Bell's inequality and tests of the Kochen-Specker theorem require composite or multiple spatially separated systems, Leggett-Garg inequalities (LGIs) [4] study temporal correlations of a single system; they are therefore often referred to as Bell inequalities _in time_.
Violation of a Bell inequality is a direct witness of entanglement - a very specific feature of quantum mechanics. By contrast, in the case of LGIs the violation occurs due to the _coherent superposition_ of system states, which is essentially the most fundamental property of quantum mechanics. In other words, LGIs quantify coherence in quantum systems and can consequently be seen as a measure or test of _quantumness_. Leggett-Garg inequalities were proposed in 1985 [4] in order to assess whether sets of pairs of sequential measurements on a single quantum system can be consistent with an underlying macro-realistic theory [5]. Within the framework of a macro-realistic theory, a single macroscopic system measured at successive times fulfills the following two assumptions of macrorealism: (A1) at any given time the system is always in only one of its macroscopically distinguishable states, and (A2) the state of the system can be determined in a non-invasive way, meaning without disturbing the subsequent dynamics of the system. Quantum mechanics predicts the violation of the inequalities since it contradicts both assumptions (A1) and (A2). The (quantum) system under observation has to be measured at different times. Correlations derived from sequences of these measurements allow us to formulate the LGI. The result of these correlation measurements confirms either the absence of a realistic description of the system or the impossibility of measuring the system without disturbing it [5]. This also rules out a well-defined pre-existing value of a measurement. Recent violations of LGIs have been observed in various systems, including photonic qubits [6; 7; 8; 9], nuclear spins in a diamond defect center [10], superconducting qubits in terms of transmons [11] and flux qubits [12], nuclear magnetic resonance [13; 14], and spin-bearing phosphorus impurities in silicon [15].
Proposed schemes for increasing violations of Leggett-Garg inequalities range from the action of an environment on a single qubit in terms of generic quantum channels [16] to open many-body systems in the presence of nonequilibrium [17]. In a recent paper [18] the authors propose to test a violation of the Leggett-Garg inequality due to the gravitational interaction in a hybrid system consisting of a harmonic oscillator and a spatially localized superposed particle [19], aiming to probe the _quantumness_ of gravity [20; 21]. The violation of an LGI in an interferometric setup has been proposed theoretically for electrons in [22]. The requirement of non-invasive measurements from (A2) is realized in most experiments by utilizing the concept of weak measurements, or by introducing an ancilla system, as implemented in [15]. Note that even a weak measurement can in practice never be completely non-invasive (due to a non-vanishing measurement strength), and the preparation of the ancilla system will also always be imperfect. However, the experimental procedure from [22] realizes _ideal negative measurements_ in an interferometer experiment in order to fulfill the requirement of non-invasive measurements from (A2) without the need for an ancilla. In this Letter, we present a neutron interferometric experiment demonstrating a violation of the LGI. In our measurement scheme the single system is represented by the neutron's path in an interferometer. A respective observable is defined and measured non-invasively according to the LGI protocol.
_Leggett-Garg inequality._--For dichotomous variables \(Q_{i}\), accounting for two _macroscopically distinguishable_ states, having outcomes \(q_{i}=\pm 1\), the correlation function for measurements at times \(t_{i}\), \(t_{j}\) is given by \[C_{ij}=\langle Q_{i}Q_{j}\rangle=\sum_{q_{i}q_{j}=\pm}q_{i}q_{j}P\big{(}q_{i}( t_{i}),q_{j}(t_{j})\big{)}, \tag{1}\] where \(P(q_{i}(t_{i}),q_{j}(t_{j}))\) denotes the _joint_ probability of obtaining the measurement results \(q_{i}\) at time \(t_{i}\) and \(q_{j}\) at time \(t_{j}\). Considering Eq.(1) for three experimental sets with \(i,j\in\{1,2,3\}\) yields the LGI \[K\equiv C_{21}+C_{32}-C_{31}, \tag{2}\] where \(K\) denotes the _Leggett-Garg correlator_, with limits \(-3\leq K\leq 1\). Since the three correlators are derived from probabilities with \(|C_{ij}|\leq 1\), the lower limit cannot be violated. However, quantum mechanics allows for a violation of the upper bound. In a two-level system, the maximum obtainable violation is \(K=1.5\) [5]. The basic idea behind the experimental procedure proposed by Emary et al. in [22] is to map the temporal structure (or measurement time \(t_{i}\)) of the LGI onto real-space coordinates, more precisely onto three distinct regions of the interferometer, indicated by the index \(\alpha\in\{1,2,3\}\), cf. Fig. 1. Within each region the two paths of the interferometer constitute a qubit. The measurement of the qubit's state, denoted as \(q_{i}=\pm 1\), therefore amounts to a "which-way" measurement [23] in the particular region of interest. While a click of a detector in, e.g., the \(+\) arm of region 2 (\(q_{2}=+1\)) is a strongly _invasive measurement_, the absence of a detector response implies \(q_{2}=-1\) and does not disturb the system at all. It accounts for the required non-invasive measurement (A2) in terms of an _ideal negative measurement_. In our neutron interferometric realization of [22], neutrons enter the IFM via the \(+\) port of region 1.
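The macro-realist bound \(K\leq 1\) in Eq. (2) can be made concrete by brute force: any realist model with noninvasive measurability is a statistical mixture of deterministic assignments of \((q_{1},q_{2},q_{3})\), so enumerating the eight assignments suffices. A small illustrative script (not part of the experiment):

```python
from itertools import product

def lg_correlator(q1, q2, q3):
    # K = C21 + C32 - C31 evaluated on one deterministic assignment;
    # since K is linear in the probability distribution, mixtures of
    # such assignments can only interpolate between these values
    return q2 * q1 + q3 * q2 - q3 * q1

# all eight macro-realist value assignments q_i = +/-1
k_values = [lg_correlator(*q) for q in product([-1, 1], repeat=3)]
```

The enumeration reproduces both limits quoted above: the maximum over assignments is \(1\) and the minimum is \(-3\).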
Hence, it is not necessary to measure in region 1 and noninvasive measurability is granted. The first plate of the IFM consists of a tunable beamsplitter characterized by the parameter \(\vartheta_{A}\), which is schematically illustrated in Fig. 1. The theoretical maximum of \(K=1.5\) is obtained for \(\vartheta_{A}=\vartheta_{B}=\pi/3\) and phase shift \(\chi=0\). However, in our setup with fixed \(\vartheta_{B}=\pi/2\) (usual 50:50 beamsplitter), the maximal possible violation is \(K=\sqrt{2}\) (for \(\vartheta_{A}=\pi/4\)). We define \(P_{\alpha\pm,\beta\pm}(n_{\alpha},n_{\beta})\) as the joint probability that two detectors placed at positions \(\alpha\pm\) and \(\beta\pm\) respectively detect (\(n=1\)) or do not detect a neutron (\(n=0\)), where \(\alpha\) and \(\beta\) specify the region and \(\pm\) the path. The correlator between regions \(\alpha\) and \(\beta\), as defined in Eq.(1), is then given by \[C_{\alpha\beta}=\sum_{q_{\alpha},q_{\beta}=\pm}q_{\alpha}q_{\beta}P_{\alpha q_ {\alpha},\beta q_{\beta}}(1,1). \tag{3}\] Hence the correlation function for regions 1 and 3, denoted as \(C_{31}\), can simply be expressed as \(C_{31}=P_{3+,1+}(1,1)-P_{3-,1+}(1,1)\), since the neutrons always enter from \(1+\). Therefore, the correlation function \(C_{31}\) can also be expressed in terms of marginal probabilities as \(C_{31}=P_{3+}(1)-P_{3-}(1)\). Although not particularly necessary here, it is instructive to express \(C_{31}\) in terms of _ideal negative_ measurements as \[\begin{split}C_{31}=&\sum_{q_{1},q_{3}=\pm}q_{1}q_{3}P_{3q_{3}}\big{(}1\big{)}\big{(}1-P_{1q_{1}}(0)\big{)}\\ =&-\sum_{q_{1},q_{3}=\pm}q_{1}q_{3}P_{3q_{3},1q_{1}}(1,0),\end{split} \tag{4}\] since \(P_{1q_{1}}(0)=1-P_{1q_{1}}(1)\). A similar expression gives the correlator \(C_{21}=P_{1+,2+}(1)-P_{1+,2-}(1)\), which is measured with detectors placed directly in region 2, shown in Fig. 2(a).

Figure 1: Regions in the Mach-Zehnder interferometer and setup for determination of correlator \(C_{31}\).

Figure 2: Setups to determine probabilities \(P_{2\pm 1+}\) for correlator \(C_{21}\) in (a) and \(P_{3\pm 2\pm}\) for \(C_{32}\) in (b),(c).

For \(C_{32}\) all four terms of the sum from Eq.(3) contribute, taking both paths of region 2 into account: \[C_{32}=\sum_{q_{2},q_{3}=\pm}q_{2}q_{3}P_{3q_{3},2q_{2}}(1,1) \tag{5}\] Using again \(P_{2q_{2}}(0)=1-P_{2q_{2}}(1)\), we write the sum as \[C_{32}=-\sum_{q_{2},q_{3}=\pm}q_{2}q_{3}P_{3q_{3},2q_{2}}(1,0) \tag{6}\] in order to account for the non-invasive or ideal negative measurement in region 2. The two probabilities \(P_{3\pm,2-}(1,0)\) are determined by counting the neutrons in paths \(3+\) and \(3-\), respectively, under the condition that they have not been counted in path \(2-\). The latter is ensured by placing a beam blocker in path \(2-\), cf. Fig. 2(b). The other two probabilities are measured similarly, as shown in Fig. 2(c). The correlators according to [22] for the regions in our setup are calculated as follows: \[\begin{split}C_{21}=&\cos\vartheta_{A}\\ C_{32}=&\cos\vartheta_{B}\\ C_{31}=&\cos\vartheta_{A}\cos\vartheta_{B}-\cos \chi\sin\vartheta_{A}\sin\vartheta_{B}\\ K=&\cos\vartheta_{A}+\cos\vartheta_{B}-\cos \vartheta_{A}\cos\vartheta_{B}\\ &+\cos\chi\sin\vartheta_{A}\sin\vartheta_{B},\end{split} \tag{7}\] which in our setup, with fixed \(\vartheta_{B}=\pi/2\), reduces to \[K=\cos\vartheta_{A}+\cos\chi\sin\vartheta_{A}. \tag{8}\] Figure 3 shows the regions in the parameter space (\(\vartheta_{A}\),\(\chi\)) of our experimental LGI test (with fixed value \(\vartheta_{B}=\pi/2\)) where it is in theory possible to violate the LGI, with values up to \(K=\sqrt{2}\). \(\vartheta_{A}\) represents the mixing angle of the first interferometer plate, and \(\chi\) the phase shifter angle. The resulting \(K\) values are shown in green for areas where no violation is possible, and in orange for a possible violation of the LGI.
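The quantum-mechanical predictions in Eqs. (7) and (8) are easy to check numerically; the following sketch reproduces the \(K=1.5\) maximum at \(\vartheta_{A}=\vartheta_{B}=\pi/3\), \(\chi=0\) and the \(K=\sqrt{2}\) ceiling of the fixed-\(\vartheta_{B}\) setup:

```python
import math

def lg_k(theta_a, theta_b, chi):
    # quantum-mechanical Leggett-Garg correlator K of Eq. (7);
    # setting theta_b = pi/2 recovers Eq. (8)
    return (math.cos(theta_a) + math.cos(theta_b)
            - math.cos(theta_a) * math.cos(theta_b)
            + math.cos(chi) * math.sin(theta_a) * math.sin(theta_b))
```

Scanning \(\chi\) at fixed \(\vartheta_{A}\) traces exactly the curve whose maximum the experiment probes at \(\chi=0\).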
The dashed red line indicates our measurement result in an ideal interferometer. _Neutron interferometer setup._--Neutron interferometry [24; 25] provides a powerful tool for the investigation of fundamental quantum mechanical phenomena. Entanglement between different degrees of freedom (DOF), e.g., the neutron's spin, path, and energy DOF, has been confirmed, and the contextual nature of quantum mechanics has been demonstrated successfully [26]. In more recent experiments the concept of weak measurements and weak values has been utilized for direct state reconstruction [27], demonstration of the canonical commutator relation [28], and studies of which-way information [29; 30]. The experiment was carried out at the neutron interferometer instrument S18 at the high-flux reactor of the Institut Laue-Langevin (ILL) in Grenoble, France (the experimental data can be found on the ILL data server under [31]). A monochromatic unpolarized neutron beam with mean wavelength \(\lambda=1.91\) Å (\(\delta\lambda/\lambda\sim 0.02\)) and \(3\times 3\,\)mm\({}^{2}\) beam cross section was used to illuminate the interferometer. In order to observe a violation of an LGI in an interferometric experiment, it is necessary to implement a non-50:50 beam splitter at the first plate of the interferometer. This is achieved by placing a partial absorber behind the first interferometer plate in one of the neutron paths.

Figure 3: Regions in the parameter space where the LGI can be violated, with fixed \(\vartheta_{B}=\pi/2\). The dashed red line indicates our experimental parameter settings.

Figure 4: Unpolarized monochromatic neutrons enter the interferometer and are split into paths I and II. Green indicates the neutron beam, blue the phase shifter and purple the Indium absorber. Detectors O (front) and H (back) as well as the (re)movable detector for \(C_{21}\) measurement and Cadmium blocker (red) are shown.
The absorber is an Indium slab, about 3 mm thick, placed in path I, resulting in an intensity ratio between paths I and II of about 10:90. The interferometer itself is a symmetric three-plate silicon perfect crystal (triple Laue type), with a plate thickness of 3 mm and a length of 140 mm. A schematic illustration of the interferometric setup is given in Fig. 4. To obtain interference fringes, a 5 mm Aluminium phase shifter was used. Additional beam blockers for the detection of single path intensities were made of Cadmium. Both the 'O' and 'H' detectors outside the interferometer and the additional detector for \(C_{21}\) measurements were \({}^{3}\)He proportional counting tubes. Determination of the correlators \(C_{31}\) and \(C_{21}\) is straightforward. In both cases it is not necessary to measure non-invasively, since no subsequent measurement on the same state is performed. For \(C_{31}\), the measurement is that of a standard interferogram (Fig. 5), with a measurement time of 180 seconds per phase shifter position. The correlator \(C_{31}\) is calculated via \[C_{31}=\frac{N_{3+1+}(\chi)-N_{3-1+}(\chi)}{N_{3+1+}(\chi)+N_{3-1+}(\chi)}, \tag{9}\] where \(N_{3+1+}(\chi)\) denotes the counts in the H detector and \(N_{3-1+}(\chi)\) the counts in the O detector. Due to the cosine behaviour of the recorded interferogram, this correlator depends on the position \(\chi\) of the phase shifter. For the largest possible violation, the maximum counts in O and minimum in H are used, which corresponds to the position \(\chi=2n\pi\) (where \(n\in\mathbb{N}_{0}\)) in Fig. 5. Similarly, the correlator \(C_{21}\) is calculated as \[C_{21}=\frac{N_{2+1+}-N_{2-1+}}{N_{2+1+}+N_{2-1+}} \tag{10}\] and is performed as a transversal scan with a pencil-size \({}^{3}\)He detector mounted on a translation stage in region 2 of the interferometer, with a measurement time of 300 seconds per detector position.
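Both Eq. (9) and Eq. (10) reduce to a simple count contrast, which can be sketched with a minimal helper (the counts below are made up for illustration, not the measured values):

```python
def contrast(n_plus, n_minus):
    # Eqs. (9) and (10): C = (N+ - N-) / (N+ + N-),
    # bounded between -1 (all counts in the minus channel) and +1
    return (n_plus - n_minus) / (n_plus + n_minus)
```

For instance, 300 counts in one detector against 100 in the other gives a correlator of 0.5, while equal counts give zero.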
The detector is moved first through path I and then through path II; the resulting neutron counts are shown in Fig. 6, where the separation between both paths is also clearly visible. The \(N_{2i1+}\) are the neutron counts in the peak of the respective Gaussian fit to the intensity profiles. For the correlator \(C_{32}\), however, it is crucial to measure non-invasively. This is done by measuring the _absence_ of a neutron in a given path due to the Cd blocker, meaning that the neutron has to take the path without the Cd blocker. This is represented by the minus sign in Eq. (6). Four measurements are performed, with each of the paths blocked in turn and the resulting intensity in detectors O and H recorded for a measurement time of 600 seconds. These results are shown in Fig. 7. \(C_{32}\) becomes \[C_{32}=\frac{N_{3+2-}+N_{3-2+}-N_{3+2+}-N_{3-2-}}{N_{3+2-}+N_{3-2+}+N_{3+2+}+N_{3-2-}}, \tag{11}\] with \(N_{3+2-}\) and \(N_{3+2+}\) the neutron counts in the H detector with blocked path II and path I, respectively, and likewise for the O detector in \(N_{3-2\pm}\).

Figure 5: Measurement results for the \(C_{31}\) correlator in terms of interferograms.

Figure 6: Measurement results for the \(C_{21}\) correlator obtained by a transversal scan of the movable detector.

Figure 7: Measurement results for the \(C_{32}\) correlator: neutron counts in detectors O (blue) and H (orange).

_Results._--In order to demonstrate the experimental violation of the Leggett-Garg inequality, we calculate the correlator \(K\), Eq. (2). The resulting curve is shown in Fig. 8, with the maximum at a phase shift of \(\chi=0\). With the Indium absorber in path I of the interferometer, a violation of the limit \(K=1\) is clearly visible (Fig. 8(a)). Our results show a significant violation of the LGI by 18 standard deviations \(\sigma\) (denoted as \(n_{\sigma}=18\)) at the maximum, \(K=1.120\pm 0.007\). The violation is visible over a wide range of phase shifter values \(\chi\).
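Assuming the standard three-time Leggett-Garg combination \(K=C_{21}+C_{32}-C_{31}\) for Eq. (2) (the equation itself is stated earlier in the paper, so its exact form here is an assumption), the macrorealist bound \(K\leq 1\) and the significance of its violation can be checked numerically; the correlator values below are illustrative, not the measured data:

```python
def lgi_K(c21, c32, c31):
    """Three-time Leggett-Garg combination (assumed form of Eq. (2))."""
    return c21 + c32 - c31

def n_sigma(k, sigma_k, bound=1.0):
    """Number of standard deviations by which K exceeds the macrorealist bound."""
    return (k - bound) / sigma_k

# Illustrative correlators, chosen so K comes out near the reported 1.120:
k = lgi_K(c21=0.80, c32=0.90, c31=0.58)
assert k > 1.0  # macrorealist bound violated
```

With \(\sigma_{K}=0.007\), a maximum of \(K=1.120\) corresponds to \(n_{\sigma}\approx 17\), of the same order as the \(n_{\sigma}=18\) quoted from the full dataset.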
Numeric values of the individual correlators \(C_{ij}\) and the final value of \(K\) in the case of the maximal violation of the LGI are presented in Tab. 1. For comparison, Fig. 8(b) shows the same measurement procedure for a symmetric beam splitter (\(\vartheta_{A}=\pi/2\)), i.e., without absorber, where no violation is possible, resulting in \(K=0.540\pm 0.023\). _Concluding remarks and discussion._--Our measurement results demonstrate a violation of an LGI by \(n_{\sigma}=18.0\), while the absorberless measurements show no violation. Hence we conclude that neutrons in an interferometer must be understood quantum mechanically. An even higher violation can be achieved when the signs in region 3 are switched, so that detector O becomes \(3+\) and detector H becomes \(3-\). The correlators \(C_{31}\) and \(C_{32}\) then have to be recalculated accordingly, resulting in \(K=1.162\pm 0.006\) with \(n_{\sigma}=28\). This 'additional' violation is due to the asymmetric nature of the perfect crystal interferometer: since successive reflections on the crystal lamellas enhance the reflectivity [32], the H detector always receives some phase-independent intensity offset. The detection loophole is closed due to the high efficiency of our neutron detectors, which is close to unity. The fair-sampling assumption is needed, especially for the correlator \(C_{21}\); this is the case for a wide range of experiments of this kind, since simultaneous detection of everything is impossible. Finally, we want to emphasize that the interferometric scheme applied in the present work is not limited to neutrons, but is in fact completely general and can be used for any quantum particle with nonzero or even zero mass. This work was supported by the Austrian Science Fund (FWF) Projects No. P 30677 and No. P 34239.
2307.13175
Wedge product theorem in compensated compactness theory with critical exponents on Riemannian manifolds
We formulate and prove compensated compactness theorems concerning the limiting behaviour of wedge products of weakly convergent differential forms on closed Riemannian manifolds. The case of critical regularity exponents is considered, which goes beyond the regularity regime entailed by Hölder's inequality. Implications on the weak continuity of $L^p$-extrinsic geometry of isometric immersions of Riemannian manifolds are discussed.
Siran Li
2023-07-24T23:48:40Z
http://arxiv.org/abs/2307.13175v1
Wedge product theorem in compensated compactness theory with critical exponents on Riemannian manifolds

###### Abstract.

We formulate and prove compensated compactness theorems concerning the limiting behaviour of wedge products of weakly convergent differential forms on closed Riemannian manifolds. The case of critical regularity exponents is considered, which goes beyond the regularity regime entailed by Hölder's inequality. Implications on the weak continuity of \(L^{p}\)-extrinsic geometry of isometric immersions of Riemannian manifolds are discussed.

Key words and phrases: Compensated compactness; weak continuity; differential form; div-curl lemma; critical exponents; isometric immersion

2020 Mathematics Subject Classification: 58C07; 53C42

## 0. Introduction

We establish compensated compactness theorems with critical exponents for wedge products of differential forms on closed manifolds. Our work generalises and is, in fact, primarily motivated by the works of Robbin-Rogers-Temple [52] and Briane-Casado-Diaz-Murat [10]. Throughout, by closed manifold we mean a compact Riemannian manifold with no boundary. Compensated compactness has played an important role in nonlinear analysis, especially in nonlinear PDEs arising from fluid mechanics (_cf._ DiPerna [25], C. Dafermos [23]), nonlinear elasticity (_cf._ Rešetnjak [50], Ball [2], Murat [46], Müller [45]), and geometric analysis (_cf._ Hélein [32]), as well as calculus of variations (_cf._ Tartar [55, 56], Ball-Currie-Olver [3], Fonseca-Leoni-Malý [28]). The above list of references is by no means exhaustive; we refer to the monograph by Evans [27] and the recent survey [13] by Chen for further details. The _div-curl lemma_ introduced by Murat [46, 47, 48] and Tartar [55, 56] marks a cornerstone of the compensated compactness theory.
It maintains that a nonlinear functional of weakly convergent sequences will converge in coarse topologies when specific derivatives of these sequences are compact in natural function spaces. The nonlinear functional and differential constraints in consideration need to satisfy some conditions that are essentially algebraic in nature. In the simplest and original form, the div-curl lemma reads as follows: **Lemma 0.1**.: _Let \(\{u^{n}\}\), \(\{v^{n}\}\) be sequences of vectorfields in \(L^{2}_{\rm loc}(\mathbb{R}^{3},\mathbb{R}^{3})\) such that \(u^{n}\rightharpoonup\bar{u}\) and \(v^{n}\rightharpoonup\bar{v}\) in \(L^{2}_{\rm loc}(\mathbb{R}^{3},\mathbb{R}^{3})\). Assume \(\{{\rm div}(u^{n})\}\) and \(\{{\rm curl}(v^{n})\}\) are precompact in \(W^{-1,2}_{\rm loc}(\mathbb{R}^{3},\mathbb{R})\) and \(W^{-1,2}_{\rm loc}(\mathbb{R}^{3},\mathbb{R}^{3})\), respectively. Then \(u^{n}\cdot v^{n}\) converge to \(\bar{u}\cdot\bar{v}\) in the sense of distributions._ This result has been proved, reproved, and generalised since its first appearance. Murat and Tartar's arguments ([46, 47, 48, 55, 56]) utilised Fourier transforms. Later, two harmonic analytic proofs were provided in the seminal paper by Coifman-Lions-Meyer-Semmes [21], which furthermore established that \(\{u^{n}\cdot v^{n}\}\) in Lemma 0.1 is bounded in the Hardy space \(\mathcal{H}^{1}_{\rm loc}\). From another perspective, Robbin-Rogers-Temple [52] first observed that, by writing \[u^{n}=\Delta\Delta^{-1}u^{n}=\left(\mathrm{grad}\circ\mathrm{div}-\mathrm{ curl}\circ\mathrm{curl}\right)\Delta^{-1}u^{n},\] one may decompose \(u^{n}\), and similarly for \(v^{n}\), into a weakly convergent part and a strongly convergent part. The pairing of the two weakly convergent parts is shown to converge in the sense of distributions by employing the ellipticity of \(\Delta\), the commutativity of \(\Delta^{-1}\) with divergence and curl, as well as the first-order differential constraints in the assumption\({}^{1}\).
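The identity used in this display can be verified componentwise (a standard check; note that here \(\Delta\) denotes the analysts' Laplacian \(\sum_{j}\partial_{jj}\), opposite in sign to the convention of Notation 0.3 below):

```latex
(\operatorname{curl}\operatorname{curl}u)_i
  = \varepsilon_{ijk}\,\partial_j\!\left(\varepsilon_{klm}\,\partial_l u_m\right)
  = \left(\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}\right)\partial_j\partial_l u_m
  = \partial_i(\operatorname{div}u) - \sum_{j=1}^{3}\partial_{jj} u_i ,
```

so that \(\mathrm{grad}\circ\mathrm{div}-\mathrm{curl}\circ\mathrm{curl}\) acts on vectorfields as the componentwise Laplacian, which is what makes the displayed decomposition of \(u^{n}\) possible.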
This idea has been exploited to extend the div-curl lemma to more general domains (_e.g._, Riemannian manifolds) and differential operators (_e.g._, exterior differential d and codifferential d\({}^{*}\), or elliptic complexes in greater generality), based on which a functional analytic framework for the div-curl lemma has been developed. See Kozono-Yanagisawa [34], Chen-Li [14, 15], Waurick [58], and Pauly [49]. Footnote 1: For simplicity of presentation, we do not carefully state the boundary conditions here. In fact, Robbin-Rogers-Temple established a more general result [52, Theorem 1.1] than Lemma 0.1, formulated in terms of wedge products of weakly convergent differential forms. **Lemma 0.2** ((Multilinear) wedge product theorem; Theorem 1.1 in [52]).: _Let \(\{\alpha_{1}^{n},\ldots,\alpha_{L}^{n}\}_{n\in\mathbb{N}}\) be sequences of differential forms on \(\Omega\subset\mathbb{R}^{N}\) of degrees \(s_{1},\ldots,s_{L}\), respectively; \(\sum_{i=1}^{L}s_{i}\leq N\). Assume for each \(i\in\{1,\ldots,L\}\) that_ \[\alpha_{i}^{n}\rightharpoonup\overline{\alpha_{i}}\text{ in }L^{p_{i}} \text{ with }\sum_{i=1}^{L}\frac{1}{p_{i}}=1\] _and that_ \[\{\mathrm{d}\alpha_{i}^{n}\}\text{ lies in a compact subset of }W_{\mathrm{loc}}^{-1,p_{i}}.\] _Then we have_ \[\alpha_{1}^{n}\wedge\cdots\wedge\alpha_{L}^{n}\longrightarrow\overline{\alpha }_{1}\wedge\cdots\wedge\overline{\alpha}_{L}\qquad\text{ in }\mathcal{D}^{\prime}.\] In terms of applications, we find that such a formulation is particularly convenient for weak continuity considerations of isometric immersions of (semi)-Riemannian manifolds. See [14, 15], in which some generalisations of Lemma 0.2 are proved and exploited. This observation has been further extended to study weak continuity of gauge equations by Chen-Giron [18]. The theory of compensated compactness as well as the descendants/relatives of the div-curl lemma remains an important topic of research in nonlinear analysis today.
Some recent endeavours in this field are devoted to the development of nonhomogeneous, endpoint, and fractional versions of div-curl lemmas ([7, 9, 10, 24, 30, 37, 39]), detailed characterisations of the structures -- especially, algebraic structures -- of compensated compactness quantities ([33, 38, 40, 48, 51, 59]), and connections with broader contexts in geometric measure theory ([54]). _Trente ans après_ the introduction of the div-curl lemma by Murat-Tartar [46, 55], Briane-Casado-Diaz-Murat further proved in [10] several endpoint div-curl-type theorems and applied them to study the \(G\)-convergence for unbounded monotone operators. The setting is as follows: Let \(\{u^{n}\}\subset L^{p}\) and \(\{v^{n}\}\subset L^{q}\) be two sequences of vectorfields on \(\Omega\subset\mathbb{R}^{N}\) converging weakly to limits \(\bar{u}\) and \(\bar{v}\), respectively, such that \(|u^{n}-\bar{u}|^{p}\) and \(|v^{n}-\bar{v}|^{q}\) both converge weakly-\(\star\) as Radon measures. In addition, assume the strong convergence of \(\mathrm{div}(u^{n})\) and \(\mathrm{curl}(v^{n})\) in \(W^{-1,q^{\prime}}\) and \(W^{-1,p^{\prime}}\), respectively. (Notice the "twist" of indices \(p\), \(q\) here!) Then the dot products \(u^{n}\cdot v^{n}\) converge to \(\bar{u}\cdot\bar{v}+\varpi\) in the sense of distributions, where \(\varpi\) is a defect Radon measure that can be characterised in detail. A crucial point here is that the range of indices \((p,q)\) is \(\frac{1}{p}+\frac{1}{q}\leq 1+\frac{1}{N}\), which goes beyond the "subcritical" case of \((p,q)\) entailed by Hölder's inequality as in Lemma 0.2. This motivates us to generalise the wedge product theorem to "critical" indices. Our main Theorem 0.4 below extends Lemma 0.2, the wedge product theorem of compensated compactness _à la_ Robbin-Rogers-Temple [52], in the spirit of Briane-Casado-Diaz-Murat [10]. Here we focus on the bilinear case \(L=2\).
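The threshold \(1+\frac{1}{N}\) can be anticipated from Sobolev-embedding arithmetic (a heuristic; the rigorous argument, via Hodge decomposition, occupies §1): if the exact part \(\mathrm{d}\gamma\) of an \(L^{p}\)-form has potential \(\gamma\in W^{1,p}\subset L^{p^{*}}\), then pairing \(\gamma\) against an \(L^{q}\)-form is integrable precisely when

```latex
\frac{1}{p^{*}}+\frac{1}{q}\leq 1,
\qquad \frac{1}{p^{*}}=\frac{1}{p}-\frac{1}{N},
\qquad\text{i.e.}\qquad
\frac{1}{p}+\frac{1}{q}\leq 1+\frac{1}{N}.
```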
See Theorem 2.1 below for an extension to general \(L\), namely the multilinear case. Meanwhile, Theorem 0.4 also generalises [10], by way of extending the results therein from vectorfields on Euclidean domains to differential forms of arbitrary degree on closed manifolds. Before stating the theorem, we first introduce some notations used throughout this work. **Notation 0.3**.: _For a closed (i.e., compact and boundaryless) smooth Riemannian manifold \(X\), denote by \(\operatorname{dvol}_{g}^{X}\) the Riemannian volume measure on \(X\), and by \(\mathcal{M}(X)\) the space of Radon measures on \(X\). In view of the Riesz representation theorem, \(\mathcal{M}(X)\) equipped with the total variation norm is isometrically isomorphic to \(\left[\mathcal{C}^{0}(X)\right]^{*}\) as a Banach space, topologised with the weak-\(\star\) topology on \(\left[\mathcal{C}^{0}(X)\right]^{*}\). Write the limit of a sequence of Radon measures as \(\mathcal{M}-\lim_{n\to\infty}\)._ _We write \(\mathrm{d}\) (\(\wedge\), resp.) for the exterior differential (wedge product, resp.) on the space of differential forms, and \(\mathrm{d}^{*}\) for the codifferential, namely the formal \(L^{2}\)-adjoint of \(\mathrm{d}\) taken with respect to the Riemannian metric. We reserve the symbol \(\delta\) for Dirac delta measures. Also denote \(\Delta:=\mathrm{d}\mathrm{d}^{*}+\mathrm{d}^{*}\mathrm{d}\), which is the Laplace-Beltrami operator. It differs from the Hodge Laplacian by a sign; for example, \(\Delta=-\sum_{j=1}^{N}\partial_{jj}\) on \(\mathbb{R}^{N}\)._ _For a regularity scale \(\mathcal{R}=W^{k,p},\mathcal{C}^{0,\alpha},\mathcal{C}^{\infty},\mathcal{M}, \mathcal{D}^{\prime}\cdots\) (as usual, \(\mathcal{D}^{\prime}\) denotes distributions), we shall write \(\mathcal{R}\left(X,\bigwedge^{\ell}T^{*}X\right)\) for the space of differential \(\ell\)-forms of regularity \(\mathcal{R}\) on \(X\).
Lengths/moduli and inner products on \(\bigwedge^{\bullet}T^{*}X\), unless specified otherwise, are always taken with respect to the Riemannian metric on \(X\). A differential form \(\alpha\in\mathcal{R}\left(X,\bigwedge^{\bullet}T^{*}X\right)\) is said to be exact (coexact, resp.) if and only if \(\mathrm{d}\alpha=0\) (\(\mathrm{d}^{*}\alpha=0\), resp.) in the sense of distributions._ _We write \(A\lesssim_{c_{1},\ldots,c_{n}}B\) to mean the inequality \(A\leq CB\), where the constant \(C\) depends only on the parameters \(c_{1},\ldots,c_{n}\). Einstein's summation convention is adopted throughout._ Now we may state our generalised bilinear wedge product theorem: **Theorem 0.4**.: _Let \((X,g)\) be an \(N\)-dimensional closed Riemannian manifold; \(N\geq 2\). Let \(1<p,q<\infty\) be such that_ \[1\leq\frac{1}{p}+\frac{1}{q}\leq 1+\frac{1}{N}. \tag{0.1}\] _Consider sequences of differential forms \(\{\alpha^{n}\}\subset L^{p}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\) and \(\{\beta^{n}\}\subset L^{q}\left(X,\bigwedge^{\ell_{2}}T^{*}X\right)\) satisfying the following conditions:_ 1. \(\alpha^{n}\rightharpoonup\overline{\alpha}\) _weakly in_ \(L^{p}\)_;_ 2. \(\beta^{n}\rightharpoonup\overline{\beta}\) _weakly in_ \(L^{q}\)_;_ 3. \(|\alpha^{n}-\overline{\alpha}|^{p}\,\operatorname{dvol}_{g}^{X}\rightharpoonup\mu\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 4. \(\left|\beta^{n}-\overline{\beta}\right|^{q}\,\operatorname{dvol}_{g}^{X} \rightharpoonup\nu\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 5. \(\mathrm{d}\alpha^{n}\to\mathrm{d}\overline{\alpha}\) _strongly in_ \(W^{-1,q^{\prime}}\)_;_ 6.
\(\mathrm{d}\beta^{n}\to\mathrm{d}\overline{\beta}\) _strongly in_ \(W^{-1,p^{\prime}}\)_._ _Then we have the following convergence (modulo subsequences) in the sense of distributions:_ \[\alpha^{n}\wedge\beta^{n}\longrightarrow\overline{\alpha}\wedge\overline{ \beta}+\sum_{k=1}^{\infty}\mathrm{d}\left(v^{k}\delta_{x^{k}}\right)\qquad\text {in }\mathcal{D}^{\prime}, \tag{0.2}\] _for some sequences of points \(\left\{x^{k}\right\}\subset X\) and \(\left(\ell_{1}+\ell_{2}-1\right)\)-covectors \(\left\{v^{k}\right\}\). Moreover,_ \[\left|v^{k}\right|\lesssim_{p,N,X}\left[\mu\left(\left\{x^{k}\right\}\right) \right]^{\frac{1}{p}}\left[\nu\left(\left\{x^{k}\right\}\right)\right]^{\frac{ 1}{q}}.\] _If, in addition, \(\frac{1}{p}+\frac{1}{q}<1+\frac{1}{N}\) in (0.1), then all \(v^{k}\) are zero._ We shall refer to \(\sum_{k=1}^{\infty}\mathrm{d}\left(v^{k}\delta_{x^{k}}\right)\) in (0.2) as the _concentration distribution_ in the sequel. The indices \((p,q)\) satisfying (0.1) are said to be in the _critical_ range. **Remark 0.5**.: _More precisely, (0.2) means the following:_ \[\left\langle\alpha^{n}\wedge\beta^{n},\Xi\right\rangle\longrightarrow\left\langle \overline{\alpha}\wedge\overline{\beta},\Xi\right\rangle+\sum_{k=1}^{\infty} \left\langle v^{k},\mathrm{d}^{*}\Xi\right\rangle\delta_{x^{k}}\qquad\text{ for any }\Xi\in\mathcal{C}^{\infty}\left(X;\bigwedge^{\ell_{1}+\ell_{2}}T^{*}X \right).\] _The pairings \(\left\langle\bullet,\bullet\right\rangle\), as before, are taken with respect to the metric \(g\). In geometric measure theoretic terminologies, the convergence in (0.2) should be understood as weak convergence of \((\ell_{1}+\ell_{2})\)-dimensional currents on \((X,g)\). The same convention is adopted throughout this paper._ If \(\frac{1}{p}+\frac{1}{q}<1\) is assumed instead of (0.1), Theorem 0.4 is then reduced to the classical wedge product theorem of Robbin-Rogers-Temple (Lemma 0.2 above with \(L=2\)). 
Thus, throughout this paper we focus solely on the range of indices in (0.1). In this case, however, note the following: **Remark 0.6**.: _The definition of the wedge products \(\alpha^{n}\wedge\beta^{n}\) and \(\overline{\alpha}\wedge\overline{\beta}\) requires further clarification: a priori it is unclear if they are well defined distributions for \(p^{-1}+q^{-1}>1\). We resort to Hodge decomposition for this purpose. See §1.2 below for details._ The proof of Theorem 0.4 occupies §1. A crucial assumption in this theorem is that \(1<p,q<\infty\), which shall be referred to as the non-endpoint case -- in contrast to the _endpoint_ case tackled in §3, in which either \(p=1\) or \(q=1\). Our strategies of the proof follow Briane-Casado-Diaz-Murat [10], with the key tool for tackling critical exponents being P.-L. Lions' theory of concentration-compactness [43]. Next, in §2, we discuss several corollaries of the wedge product Theorem 0.4. It will be explained that Theorem 0.4 encompasses the classical div-curl lemma of Murat-Tartar [46, 47, 55, 56] (_cf._ Coifman-Lions-Meyer-Semmes [21] too), as well as the generalisation by Briane-Casado-Diaz-Murat [10]. We also present a multilinear version of the wedge product theorem in Theorem 2.1, which addresses the distributional convergence of \(\alpha_{1}^{n}\wedge\cdots\wedge\alpha_{L}^{n}\) for sequences of differential forms \(\left\{\alpha_{i}^{n}\right\}_{i\in\left\{1,2,\ldots,L\right\};\,n\in\mathbb{N}}\) of \(L^{p_{i}}\)-regularity, respectively, provided that \(1\leq\sum_{i=1}^{L}\frac{1}{p_{i}}\leq 1+\frac{1}{\dim X}\). Consequences of the exactness of the concentration measure will be discussed too. The following section §3 deals with the "endpoint critical case" of Theorem 0.4. By "critical" we mean \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{N}\), and by "endpoint" we further mean that either \(p\) or \(q\) equals \(1\).
The main result in this case, which generalises [10, §§3&4], is as follows: **Theorem 0.7**.: _Let \(\left(X^{N},g\right)\) be an \(N\)-dimensional closed Riemannian manifold; \(N\geq 2\). Consider two sequences of differential forms \(\left\{\alpha^{n}\right\}\subset\mathcal{M}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\) and \(\left\{\beta^{n}\right\}\subset L^{N}\left(X,\bigwedge^{\ell_{2}}T^{*}X\right)\) satisfying the following conditions:_ 1. \(\alpha^{n}\rightharpoonup\overline{\alpha}\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 2. \(\beta^{n}\rightharpoonup\overline{\beta}\) _weakly in_ \(L^{N}\)_;_ 3. \(\left|\alpha^{n}-\overline{\alpha}\right|\rightharpoonup\mu\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 4. \(\left|\beta^{n}-\overline{\beta}\right|^{N}\rightharpoonup\nu\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 5. \(\mathrm{d}\alpha^{n}\to\mathrm{d}\overline{\alpha}\) _strongly in_ \(W^{-1,N^{\prime}}\)_;_ 6. \(\mathrm{d}\beta^{n}\to\mathrm{d}\overline{\beta}\) _strongly in_ \(L^{N}\)_._ _Then we have the following convergence (modulo subsequences) in the sense of distributions:_ \[\alpha^{n}\wedge\beta^{n}\longrightarrow\overline{\alpha}\wedge\overline{\beta }+\sum_{k=1}^{\infty}\mathrm{d}\left(v^{k}\delta_{x^{k}}\right)\qquad\text{in $ \mathcal{D}^{\prime}$} \tag{0.3}\] _for some sequences of points \(\left\{x^{k}\right\}\subset X\) and \((\ell_{1}+\ell_{2}-1)\)-covectors \(\left\{v^{k}\right\}\). Moreover,_ \[\left|v^{k}\right|\lesssim_{N,X}\left[\mu\left(\left\{x^{k}\right\}\right) \right]\left[\nu\left(\left\{x^{k}\right\}\right)\right]^{\frac{1}{N}}.\] **Remark 0.8**.: _In this work, any theorem concerning \(L^{1}\) will always be formulated for \(\mathcal{M}\), with the \(L^{1}\)-norm of integrable functions replaced by the total variation of Radon measures._ **Remark 0.9**.: _For the endpoint critical wedge product Theorem 0.7, one may be tempted to complain that Condition (6) appears too strong.
But even replacing this by "\(\mathrm{d}\beta^{n}\rightharpoonup\mathrm{d}\overline{\beta}\) weakly in \(L^{N}\)" will lead to counterexamples. See [10, Example 3.4]._ Our proof of Theorem 0.7 follows the strategies in [10, §3], for which more delicate analysis of endpoint Sobolev embeddings is needed. Key tools include the endpoint estimates for elliptic systems _à la_ Bourgain-Brezis [6] and Brezis-Van Schaftingen [8]. The two endpoint critical div-curl lemmas by Briane-Casado-Diaz-Murat -- [10, Theorems 3.1 and 4.1] -- formulated and proved in rather different ways in [10], are generalised to the wedge product Theorem 0.7 in a unified manner. This is because in our formulation, as in Robbin-Rogers-Temple [52], the asymmetry between divergence and curl is circumvented by the exploitation of the wedge product, which is instead (super)symmetric. In the final section §4 of this paper, we apply the theoretical results established in previous sections to study the weak continuity of the compatibility equations for curvatures -- named after Gauss, Codazzi(-Mainardi), and Ricci -- of isometric immersions of Riemannian manifolds with extrinsic geometry of \(L^{p}\)-regularity. More precisely, consider a family of (approximate, resp.) isometric immersions of a fixed Riemannian surface \((\Sigma,g)\) with \(W^{2,p}\)-regularity into \(\mathbb{R}^{3}\). Then the associated second fundamental forms \(\{\mathrm{II}^{\varepsilon}\}\), which are \(2\)-tensorfields on \(\Sigma\) with \(L^{p}\)-regularity, are (approximate, resp.) weak solutions to the Gauss-Codazzi equations. One hopes to find the smallest possible index \(p\) which ensures that any \(L^{p}\)-weak limit of \(\{\mathrm{II}^{\varepsilon}\}\) remains a weak solution to the Gauss-Codazzi equations. The weak continuity property has been used to prove the existence of isometric immersions with weak regularity of certain negatively curved surfaces. See S.
Mardare [44], Chen-Slemrod-Wang [16], and the subsequent works [11, 12, 17, 18, 41]. The aforementioned investigations have potential applications to nonlinear elasticity. See [19, 35, 36] and the many references therein. We apply the generalised wedge product Theorem 0.4 to establish the following: **Theorem 0.10**.: _Let \((\Sigma,g)\) be an immersed Riemannian surface in \(\mathbb{R}^{3}\). Consider a family \(\{\mathrm{II}^{\varepsilon}\}\subset L^{p}_{\mathrm{loc}}\left(X;[TX\otimes TX ]^{*}\right)\) of weak solutions to the Gauss-Codazzi equations which converges to \(\overline{\Pi}\) in the weak-\(L^{p}_{\mathrm{loc}}\)-topology, where \(p\in\left]\frac{4}{3},\infty\right[\). Suppose that the coexact parts (i.e., projection onto the \(\mathrm{d}^{*}\)-image via Hodge decomposition) of \(\{\mathrm{II}^{\varepsilon}\}\) are precompact in the \(L^{p^{\prime}}_{\mathrm{loc}}\)-topology. Then \(\overline{\Pi}\) is still a weak solution to the Gauss-Codazzi equations._ Here we are able to go beyond the critical exponent \(p_{\mathbf{CS}}=2\) as in [16, 17] (which makes quadratic functions of \(\mathrm{II}^{\varepsilon}\) well defined via the Cauchy-Schwarz inequality; see Remark 0.6). The weak continuity of the Gauss-Codazzi equations is established in the regime \(p>4/3\), with compactness in higher regularity classes (_i.e._, \(L^{p^{\prime}}_{\rm loc}\)) assumed only for some components of \(\mathrm{II}^{\varepsilon}\). For \(p>2\), Theorem 0.10 agrees with the weak compactness results established in [14, 15, 17]. In fact, we shall prove in §4 a more general result (namely, Theorem 4.1) than Theorem 0.10, which applies to isometric immersions of arbitrary dimensions and codimensions. Two technical lemmas pertaining to concentration measures and endpoint elliptic regularity, which are variants of well-known theorems on Euclidean domains, are collected in the Appendices.
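The threshold \(p=4/3\) in Theorem 0.10 is exactly the critical range (0.1) specialised to this setting: the Gauss-Codazzi equations are quadratic in \(\mathrm{II}^{\varepsilon}\), so both wedge factors lie in \(L^{p}\) over a surface (\(N=2\)), and (0.1) with \(p=q\) reads

```latex
\frac{1}{p}+\frac{1}{p}\;\leq\;1+\frac{1}{2}
\qquad\Longleftrightarrow\qquad
p\;\geq\;\frac{4}{3};
```

the strict inequality \(p>4/3\) in the theorem places us in the regime where, by the last assertion of Theorem 0.4, all concentration terms \(v^{k}\) vanish.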
To conclude the introduction, we remark that the entire programme of the present paper is expected to carry over to more general metric measure spaces, especially RCD spaces. This is left for future investigations.

## 1. Proof of the wedge product Theorem 0.4

This section is devoted to the proof of the generalised wedge product Theorem 0.4. For this purpose, we combine (with nontrivial modifications) ideas from Robbin-Rogers-Temple [52] and Briane-Casado-Diaz-Murat [10]. We first exploit a nice "substructure" of \(\alpha^{n}\) and \(\beta^{n}\) based on the Hodge decomposition, which explains why \(\alpha^{n}\wedge\beta^{n}\) is well defined in the sense of distributions. The non-critical case \(\frac{1}{p}+\frac{1}{q}<1+\frac{1}{N}\) follows from a direct Sobolev embedding argument, while the critical case encompasses additional difficulties arising from Sobolev embeddings with critical exponents, which will be treated via P.-L. Lions' theory of concentration-compactness [43].

### Hodge-type decomposition of differential forms

We say that a pair of sequences \(\left(\left\{x^{n}\right\},\left\{y^{n}\right\}\right)\) is a _"weak-strong" duality pairing_ if for some \(r\in]1,\infty[\), one of \(\left\{x^{n}\right\}\) or \(\left\{y^{n}\right\}\) converges weakly in \(L^{r}\) and the other converges strongly in \(L^{r^{\prime}}\). Hence, for a bilinear form \(Q\), we have that \(\left\{Q(x^{n},y^{n})\right\}\) converges weakly in \(L^{1}\). Similarly we shall speak of "weak-weak" or "strong-strong" pairings, which are by now self-explanatory. Proposition 1.1 below is essentially [52, Lemma 4.1], generalised from Euclidean domains to compact manifolds and with \((p,q)\) relaxed from Hölder conjugate exponents to the critical indices satisfying (0.1). Our proof is slightly different from the corresponding arguments in [10, 52].
**Proposition 1.1**.: _Let \(\left(X^{N},g\right)\), \(p\), \(q\), \(\left\{\alpha^{n}\right\}\), and \(\left\{\beta^{n}\right\}\) be as in Theorem 0.4. One can find differential forms \(\gamma_{n}\in W^{1,q^{\prime}}\left(X,\bigwedge^{\ell_{1}-1}T^{*}X\right)\), \(\xi^{n}\in L^{p}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\), \(\zeta_{n}\in W^{1,q}\left(X,\bigwedge^{\ell_{2}-1}T^{*}X\right)\), and \(\eta^{n}\in L^{p^{\prime}}\left(X,\bigwedge^{\ell_{2}}T^{*}X\right)\) for each \(n=1,2,3,\ldots\) such that_ 1. _the following decompositions hold_ a.e._:_ \[\begin{cases}\alpha^{n}=\mathrm{d}\gamma_{n}+\xi^{n},\\ \beta^{n}=\mathrm{d}\zeta_{n}+\eta^{n};\end{cases}\] 2. _the following co-exactness conditions holds in the sense of distributions:_ \[\mathrm{d}^{*}\xi^{n}=0\quad\text{and}\quad\mathrm{d}^{*}\eta^{n}=0\qquad \text{ in }\mathcal{D}^{\prime};\] 3. _the following convergence results hold (modulo subsequences) in indicated topologies:_ \[\gamma_{n}\rightharpoonup\gamma\qquad\text{ weakly in }W^{1,p}\left(X, \bigwedge^{\ell_{1}-1}T^{*}X\right),\] \[\xi^{n}\longrightarrow\xi\qquad\text{ strongly in }L^{q^{\prime}}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right),\] \[\zeta_{n}\rightharpoonup\zeta\qquad\text{ weakly in }W^{1,q}\left(X, \bigwedge^{\ell_{2}-1}T^{*}X\right),\] \[\eta^{n}\longrightarrow\eta\qquad\text{ strongly in }L^{p^{\prime}}\left(X, \bigwedge^{\ell_{2}}T^{*}X\right).\] Here and hereafter the following notational convention is adopted in accordance with [10]. **Notation 1.2**.: _For the terms obtained from decomposing given weakly convergent sequences \(\{\alpha^{n}\}\) and \(\{\beta^{n}\}\), those with superscripts (subscripts resp.) converge strongly (weakly resp.) 
with respect to natural topologies given in the context._ Proof of Proposition 1.1.: We first observe that condition (0.1) on the range of \((p,q)\) implies \[1<p\leq q^{\prime}\qquad\text{ and }\qquad 1<q\leq p^{\prime}.\] The results for \(\alpha^{n}\) and \(\beta^{n}\) are completely parallel, so we argue for \(\alpha^{n}\) only. Indeed, by the classical Hodge decomposition we have \[\alpha^{n}=\mathrm{d}\gamma_{n}+\mathrm{d}^{*}k^{n}+h^{n} \tag{1.1}\] with differential \((\ell_{1}+1)\)-forms \(\{k^{n}\}\) of \(W^{1,p}\)-regularity, and harmonic \(\ell_{1}\)-forms \(\{h^{n}\}\) of \(L^{p}\)-regularity. Thanks to [53, p.86, Theorem 2.4.7], here one may further select \(\gamma_{n}\) and \(k^{n}\) to be "minimal" in the sense that \(\mathrm{d}^{*}\gamma_{n}=0\) and \(\mathrm{d}k^{n}=0\) weakly. Then \[\mathrm{d}\alpha^{n}=\mathrm{d}\mathrm{d}^{*}k^{n}=\Delta k^{n},\] where \(\Delta=\mathrm{d}^{*}\mathrm{d}+\mathrm{d}\mathrm{d}^{*}\) is the Laplace-Beltrami operator on \((X,g)\). By the ellipticity of \(\Delta\) and the assumption that \(\{\mathrm{d}\alpha^{n}\}\) converges strongly in \(W^{-1,q^{\prime}}\), we deduce that \[\{k^{n}\}\text{ converges strongly in }W^{1,q^{\prime}}. \tag{1.2}\] Notice that although \(k^{n}\) may fail to be uniquely soluble from \(\mathrm{d}\alpha^{n}=\Delta k^{n}\), it is determined up to addition by harmonic forms, which is immaterial to (1.1) as \(\mathrm{d}^{*}k^{n}\) is uniquely determined. On the other hand, the projection \(\pi\) from \(L^{p}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\) onto the subspace \(\mathcal{H}\) of harmonic \(L^{p}\)-forms \([\pi(\alpha^{n})=h^{n}]\) is compact, for the range of \(\pi\) is finite-dimensional. This follows from the \(\mathcal{C}^{\infty}\)-regularity of \(L^{p}\)-harmonic forms and the fact that \(\mathcal{H}\) here coincides with the \(\ell_{1}^{\text{th}}\)-cohomology group.
In this regard, one may equip \(\mathcal{H}\) with the \(\mathcal{C}^{\infty}\)-(Fréchet) topology to maintain the compactness of \(\pi\). See [53, §2.6] for rudiments of de Rham theory. In particular, \[\{h^{n}\}\text{ converges strongly in }L^{q^{\prime}}. \tag{1.3}\] Therefore, setting \[\xi^{n}:=\mathrm{d}^{*}k^{n}+h^{n},\] we have its strong convergence in \(L^{q^{\prime}}\) from (1.2) and (1.3). Also, since \(\mathrm{d}^{*}\circ\mathrm{d}^{*}=0\) and \(h^{n}\) is harmonic, we find that \(\mathrm{d}^{*}\xi^{n}=0\) in the sense of distributions. Finally, applying \(\mathrm{d}^{*}\) to (1.1) and recalling \(\mathrm{d}^{*}\xi^{n}=0\) as well as the choice of \(\gamma_{n}\), we have \[\mathrm{d}^{*}\alpha^{n}=\mathrm{d}^{*}\mathrm{d}\gamma_{n}=\Delta\gamma_{n}.\] As \(\{\alpha^{n}\}\) is weakly convergent in \(L^{p}\) by assumption, using the ellipticity (hence \(L^{p}_{w}\to L^{p}_{w}\) boundedness) of \(\Delta\), we deduce that \(\{\gamma_{n}\}\) is weakly convergent in \(W^{1,p}\). Again, \(\gamma_{n}\) is only determined up to harmonic forms, but then \(\mathrm{d}\gamma_{n}\) is uniquely determined; so, in view of (1.1), without loss of generality we may fix the cohomology class of \(\gamma_{n}\) once and for all.

### Definition of the weak-weak pairing

Proposition 1.1 allows us to define \(\alpha^{n}\wedge\beta^{n}\) in the sense of distributions, utilising the algebraic structure of the wedge product (bilinearity and superdistributivity under the exterior differential). As commented in Remark 0.6, without probing into the "substructures" of \(\alpha^{n}\) and \(\beta^{n}\), one apparently has insufficient regularity to define their wedge products as distributions. #### 1.2.1.
Weak-weak pairing is well defined in the sense of distributions First, due to the bilinearity of wedge product we (formally should) have \[\alpha^{n}\wedge\beta^{n}=\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}+\xi^{ n}\wedge\mathrm{d}\zeta_{n}+\mathrm{d}\gamma_{n}\wedge\eta^{n}+\xi^{n}\wedge \eta^{n}. \tag{1.4}\] The last three terms on the right-hand side of (1.4), being weak-strong or strong-strong duality pairings, all converge weakly in \(L^{1}\left(X,\bigwedge^{\ell_{1}+\ell_{2}}T^{*}X\right)\). The most singular term is the weak-weak pairing \(\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}\). We "differentiate by parts" to find that it is well defined in the sense of distributions. Indeed, by \(\mathrm{d}\circ\mathrm{d}=0\) in \(\mathcal{D}^{\prime}(X)\) and the super-distributivity of \(\mathrm{d}\) over wedge product, we have \[\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}=\mathrm{d}\left(\gamma_{n}\wedge \mathrm{d}\zeta_{n}\right), \tag{1.5}\] provided that \(\gamma_{n}\) and \(\zeta_{n}\) are of class \(\mathcal{C}^{1}\). But \(\{\mathrm{d}\zeta_{n}\}\) is contained in \(L^{q}\), while \(\{\gamma_{n}\}\) lies in \(L^{p^{*}}\) by virtue of Sobolev embedding (as usual, \(p^{*}=\frac{Np}{N-p}\) denotes the Sobolev conjugate exponent). By assumption we have that \(\left[q^{-1}+(p^{*})^{-1}\right]^{-1}=\left[p^{-1}+q^{-1}-N^{-1}\right]^{-1}\geq 1\), so \(\gamma_{n}\wedge\mathrm{d}\zeta_{n}\) is a well defined \(L^{1}\)-function _for each fixed \(n\)_. Hence, one may now define \(\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}\) via (1.5) as a (differential \((\ell_{1}+\ell_{2})\)-form-valued) distribution. Of course, the argument above does not ensure that \(\gamma_{n}\wedge\mathrm{d}\zeta_{n}\) (hence \(\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}\)) tends to any limit as \(n\to\infty\) in the sense of distributions, as the weak \(L^{1}\)-topology is not compactly embedded in \(\mathcal{D}^{\prime}\). #### 1.2.2. 
Independence of representation We check that the definition of \(\alpha^{n}\wedge\beta^{n}\) given by (1.4) & (1.5) is independent of the decomposition of \(\alpha^{n}\) and \(\beta^{n}\) as in Proposition 1.1. **Lemma 1.3**.: _Let \(\alpha\in L^{p}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\) and \(\beta\in L^{q}\left(X,\bigwedge^{\ell_{2}}T^{*}X\right)\) be differential forms on a closed Riemannian manifold \(\left(X^{N},g\right)\), where \(p,q\) satisfy (0.1). Assume the decompositions_ \[\begin{cases}\alpha=\mathrm{d}\gamma+\xi=\mathrm{d}\gamma^{\prime}+\xi^{\prime },\\ \beta=\mathrm{d}\zeta+\eta=\mathrm{d}\zeta^{\prime}+\eta^{\prime},\end{cases}\] _where \(\gamma,\gamma^{\prime}\in W^{1,q^{\prime}}\left(X,\bigwedge^{\ell_{1}-1}T^{*}X\right)\), \(\xi,\xi^{\prime}\in L^{p}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\), \(\zeta,\zeta^{\prime}\in W^{1,q}\left(X,\bigwedge^{\ell_{2}-1}T^{*}X\right)\), and \(\eta,\eta^{\prime}\in L^{p^{\prime}}\left(X,\bigwedge^{\ell_{2}}T^{*}X\right)\). Also, assume that \(\mathrm{d}^{*}\xi=\mathrm{d}^{*}\xi^{\prime}=0\) and \(\mathrm{d}^{*}\eta=\mathrm{d}^{*}\eta^{\prime}=0\) in the sense of distributions. Then it holds, in the sense of distributions too, that_ \[\mathrm{d}\left(\gamma\wedge\mathrm{d}\zeta\right)+\xi\wedge\mathrm{d}\zeta+ \mathrm{d}\gamma\wedge\eta+\xi\wedge\eta=\mathrm{d}\left(\gamma^{\prime}\wedge \mathrm{d}\zeta^{\prime}\right)+\xi^{\prime}\wedge\mathrm{d}\zeta^{\prime}+ \mathrm{d}\gamma^{\prime}\wedge\eta^{\prime}+\xi^{\prime}\wedge\eta^{\prime}.\] Proof.: Our argument is an adaptation and generalisation of the arguments in [10, p.480]. First let us check that \[\mathrm{d}\left(\gamma\wedge\mathrm{d}\zeta\right)+\xi\wedge\mathrm{d}\zeta+ \mathrm{d}\gamma\wedge\eta+\xi\wedge\eta=\mathrm{d}\left(\gamma^{\prime}\wedge \mathrm{d}\zeta\right)+\xi^{\prime}\wedge\mathrm{d}\zeta+\mathrm{d}\gamma^{ \prime}\wedge\eta+\xi^{\prime}\wedge\eta. 
\tag{1.6}\] Indeed, the difference between the members on the left and right hand sides equals \[D:=\mathrm{d}\left[\left(\gamma-\gamma^{\prime}\right)\wedge\mathrm{d}\zeta \right]+\mathrm{d}\left(\gamma-\gamma^{\prime}\right)\wedge\eta+\left(\xi-\xi^ {\prime}\right)\wedge\beta.\] Noting \[\mathrm{d}\left(\gamma-\gamma^{\prime}\right)=\xi^{\prime}-\xi\] due to the decompositions of \(\alpha\), we combine the last two terms to deduce that \[D=\mathrm{d}\left[\left(\gamma-\gamma^{\prime}\right)\wedge\mathrm{d}\zeta \right]+\left(\xi-\xi^{\prime}\right)\wedge\mathrm{d}\zeta.\] Now, for the first term on the right-hand side, as \(\gamma-\gamma^{\prime}\in W^{1,q^{\prime}}\) and \(\zeta\in W^{1,q}\), we have \(\left(\gamma-\gamma^{\prime}\right)\wedge\left(\mathrm{d}\mathrm{d}\zeta \right)=0\) in the sense of distributions. So \[D=\mathrm{d}\left(\gamma-\gamma^{\prime}\right)\wedge\mathrm{d}\zeta+\left( \xi-\xi^{\prime}\right)\wedge\mathrm{d}\zeta.\] Finally, using the fact that \(\mathrm{d}\left(\gamma-\gamma^{\prime}\right)=\xi^{\prime}-\xi\), we obtain \[D=0.\] In the above, one may be skeptical about the pairing \(\left(\xi-\xi^{\prime}\right)\wedge\mathrm{d}\zeta\), since \(\xi-\xi^{\prime}\in L^{p}\) and \(\mathrm{d}\zeta\in L^{q}\), while \(\frac{1}{p}+\frac{1}{q}\) is only known to be greater than or equal to \(1\). However, since \(\mathrm{d}\left(\gamma-\gamma^{\prime}\right)=\xi^{\prime}-\xi\), one has \(\mathrm{d}\left(\xi^{\prime}-\xi\right)=0\). On the other hand, by assumption we have \(\mathrm{d}^{*}\left(\xi^{\prime}-\xi\right)=0\). So \(\xi^{\prime}-\xi\) is harmonic on \(X\), and hence is \(\mathcal{C}^{\infty}\) by elliptic regularity. 
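The step above — \(\xi^{\prime}-\xi\) is closed and coclosed, hence harmonic and \(\mathcal{C}^{\infty}\) — can be illustrated concretely on the flat 2-torus, where a harmonic 1-form is exactly one with constant coefficients. The following Python snippet is only a numerical aside, not part of the proof: mode by mode, closedness and coclosedness give a \(2\times 2\) linear system whose determinant has modulus \(m^{2}+n^{2}\neq 0\), so every nonconstant Fourier mode vanishes.

```python
import numpy as np

# On the flat 2-torus, write a 1-form as omega = f dx + g dy and expand f, g
# in Fourier modes e^{i(mx+ny)}. For each mode (m, n) != (0, 0):
#   d omega  = 0  <=>  m*g - n*f = 0   (closedness)
#   d* omega = 0  <=>  m*f + n*g = 0   (coclosedness)
# The system's matrix has |det| = m^2 + n^2 != 0, so all nonconstant modes
# vanish: a closed and coclosed 1-form is constant, hence smooth.
for m in range(-5, 6):
    for n in range(-5, 6):
        if (m, n) == (0, 0):
            continue
        A = np.array([[-n, m], [m, n]], dtype=float)
        assert np.isclose(abs(np.linalg.det(A)), m * m + n * n)
```

On a general closed manifold the same conclusion is supplied by elliptic regularity rather than by explicit Fourier analysis.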
Next we check that \[\mathrm{d}\left(\gamma^{\prime}\wedge\mathrm{d}\zeta\right)+\xi^{\prime}\wedge \mathrm{d}\zeta+\mathrm{d}\gamma^{\prime}\wedge\eta+\xi^{\prime}\wedge\eta= \mathrm{d}\left(\gamma^{\prime}\wedge\mathrm{d}\zeta^{\prime}\right)+\xi^{ \prime}\wedge\mathrm{d}\zeta^{\prime}+\mathrm{d}\gamma^{\prime}\wedge\eta^{ \prime}+\xi^{\prime}\wedge\eta^{\prime}. \tag{1.7}\] Indeed, the difference between left- and right-hand sides equals \[\mathrm{d}\left(\gamma^{\prime}\wedge\mathrm{d}\zeta\right)- \mathrm{d}\left(\gamma^{\prime}\wedge\mathrm{d}\zeta^{\prime}\right)+\mathrm{ d}\gamma^{\prime}\wedge\left(\eta-\eta^{\prime}\right)\] \[=\mathrm{d}\left[\gamma^{\prime}\wedge\mathrm{d}\left(\zeta-\zeta ^{\prime}\right)\right]+\mathrm{d}\gamma^{\prime}\wedge\left(\eta-\eta^{ \prime}\right)\] \[=\mathrm{d}\gamma^{\prime}\wedge\left[\mathrm{d}\left(\zeta- \zeta^{\prime}\right)-\left(\eta-\eta^{\prime}\right)\right]\] \[=0.\] The lemma now follows from (1.6) & (1.7), both understood in the distributional sense. ### Proof for the subcritical case: \(1<p,q<\infty\) and \(1\leq\frac{1}{p}+\frac{1}{q}<1+\frac{1}{N}\) In this case we have \(p^{\prime}<q^{*}\), the Sobolev conjugate of \(q\). Thus the embedding \(W^{1,q}\left(X,\bigwedge^{\ell_{2}-1}T^{*}X\right)\hookrightarrow\hookrightarrow L ^{p^{\prime}}\left(X,\bigwedge^{\ell_{2}-1}T^{*}X\right)\) is compact. So, \(\gamma_{n}\wedge\mathrm{d}\zeta_{n}\) in (1.5) becomes a _strong-strong_ pairing of \(L^{p}\)- and \(L^{p^{\prime}}\)-differential forms. The wedge product converges strongly in \(L^{1}\) by Holder's inequality, hence \(\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}\) (appropriately interpreted as in SS1.2) converges strongly in \(W^{-1,1}\). This together with the weak-\(L^{1}\)-convergence of the other three terms on the right-hand side of (1.4) implies that \(\alpha^{n}\wedge\beta^{n}\) converges in the sense of distributions. This proves (0.4) in the "sub-critical" case. 
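The exponent relation underlying the compact embedding used above — that \(\frac{1}{p}+\frac{1}{q}<1+\frac{1}{N}\) forces \(p^{\prime}<q^{*}\) — can also be checked mechanically. The Python sweep below is a numerical sanity check only, over an illustrative grid of rational exponents with \(N=5\):

```python
from fractions import Fraction

def holder_conj(p):
    # Holder conjugate p' = p/(p-1)
    return p / (p - 1)

def sobolev_conj(q, N):
    # Sobolev conjugate q* = Nq/(N-q), defined for 1 <= q < N
    return (N * q) / (N - q)

N = 5
found = False
# sweep rational exponents and test the implication:
# 1 <= 1/p + 1/q < 1 + 1/N (with q < N)  ==>  p' < q*,
# i.e. the embedding W^{1,q} -> L^{p'} is compact in the subcritical case
for pa in range(11, 60):
    for qa in range(11, 60):
        p, q = Fraction(pa, 10), Fraction(qa, 10)
        if q >= N:
            continue
        s = 1 / p + 1 / q
        if 1 <= s < 1 + Fraction(1, N):
            assert holder_conj(p) < sobolev_conj(q, N), (p, q)
            found = True
assert found  # the subcritical regime is nonempty for N = 5
```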
### Proof for the (non-endpoint) critical case: \(1<p,q<\infty\) and \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{N}\) As in the above proof for the subcritical case, the only term that needs to be treated is \(\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta_{n}\) in (1.4). In the "non-endpoint" critical case (_i.e._, \(p\neq 1\neq q\)), we have \[1<p\leq q^{\prime},\quad 1<q\leq p^{\prime},\quad 1<p<N,\quad 1<q<N,\quad p^{ \prime}=q^{*},\quad\text{and}\quad q^{\prime}=p^{*}. \tag{1.8}\] Due to the assumptions (see Theorem 0.4, (3)) and Proposition 1.1, it holds that \[|{\rm d}\left(\gamma_{n}-\gamma\right)|^{p}\,{\rm dvol}_{g}^{X}\rightharpoonup\mu \qquad\text{weakly-$\star$ in $\mathcal{M}(X)$}. \tag{1.9}\] Here we used that \(\xi^{n}\to\xi\) strongly in \(L^{q^{\prime}}\), hence strongly in \(L^{p}\), as \(X\) is compact and \(1<p\leq q^{\prime}\) by (1.8). Meanwhile, we may choose \(\gamma_{n}\) and \(\gamma\) to be coexact; see [53, p.86, Theorem 2.4.7] and the arguments in the proof of Proposition 1.1. This observation, together with (1.9) and the classical Gaffney inequality -- _i.e._, the \(W^{1,q}\)-norm of a differential form \(\omega\) can be bounded by the \(L^{q}\)-norms of \({\rm d}\omega\), \({\rm d}^{*}\omega\), and \(\omega\); see [5, 22, 29, 42] -- shows that \(\{\gamma_{n}\}\) is bounded in \(W^{1,p}\left(X,\bigwedge^{\ell_{1}-1}T^{*}X\right)\). We are in the situation of the critical, noncompact Sobolev embedding \(W^{1,p}\hookrightarrow L^{p^{*}}=L^{q^{\prime}}\); see (1.8). By the second concentration compactness lemma _a la_ P.-L. Lions [43, p.158, Lemma I.1], we have countably many points \(\{x^{j}\}\subset X\) and nonnegative constants \(\{c_{j}\}\) such that \[|\gamma_{n}-\gamma|^{q^{\prime}}\,\,{\rm dvol}_{g}^{X}\rightharpoonup \lambda^{\prime}:=\sum_{j=1}^{\infty}c_{j}\delta_{x^{j}}\qquad\text{weakly-$ \star$ in $\mathcal{M}(X)$},\] \[\qquad\qquad\text{with}\,\,\,(c_{j})^{p/q^{\prime}}\lesssim_{p,N, X}\mu\left(\left\{x^{j}\right\}\right)\quad\text{ for each $j$}. 
\tag{1.10}\] On the other hand, by assumption (Theorem 0.4, (4)) and Proposition 1.1, we have \[|{\rm d}\left(\zeta_{n}-\zeta\right)|^{q}\,\,{\rm dvol}_{g}^{X}\rightharpoonup \nu\qquad\text{weakly-$\star$ in $\mathcal{M}(X)$}.\] An application of Lemma A.1 in the appendix yields a measure \[\eth\in\mathcal{M}\left(X;\bigwedge^{\ell_{1}+\ell_{2}-1}T^{*}X\right)\] such that \[\gamma_{n}\wedge{\rm d}\zeta_{n}\rightharpoonup\gamma\wedge{\rm d}\zeta+ \eth\qquad\text{weakly-$\star$ in $\mathcal{M}(X)$}, \tag{1.11}\] and that, for each Borel set \(E\subset X\), the total variation of \(\eth\) is estimated by \[|\eth|\left(E\right)\leq\left[\lambda^{\prime}(E)\right]^{\frac{1}{q^{\prime }}}\left[\nu(E)\right]^{\frac{1}{q}}.\] By the structure of \(\lambda^{\prime}\) characterised in (1.10) via concentration compactness, we have \[|\eth|\left(E\right) \leq\left[\nu(E)\right]^{\frac{1}{q}}\left\{\sum_{\{j:\,x^{j}\in E \}}c_{j}\right\}^{\frac{1}{q^{\prime}}}\] \[\lesssim_{p,N,X}\sum_{\{j:\,x_{j}\in E\}}\left[\mu\left(\left\{ x^{j}\right\}\right)\right]^{\frac{1}{p}}\left[\nu\left(\left\{x^{j}\right\} \right)\right]^{\frac{1}{q}}.\] The defect measure \(\eth\) is thus atomic: \[\eth=\sum_{j=1}^{\infty}v^{j}\delta_{x^{j}}\quad\text{with}\quad\left|v^{j} \right|\lesssim_{p,N,X}\left[\mu\left(\left\{x^{j}\right\}\right)\right]^{ \frac{1}{p}}\left[\nu\left(\left\{x^{j}\right\}\right)\right]^{\frac{1}{q}} \quad\text{ for each $j$}. \tag{1.12}\] Note also that (1.12) together with the assumptions in Theorem 0.4, (1)-(4) (as well as the lower semicontinuity of Radon measures) implies that \[\sum_{j=1}^{\infty}\left|v^{j}\right|<\infty,\] so \(\eth\) is a well defined distribution. We may now conclude (0.2) and the ensuing estimates from (1.12) and (1.11). The proof of Theorem 0.4 is now complete. **Remark 1.4**.: _In the above proof, all the lengths of vectorfields and differential forms on \(X\) are taken with respect to the Riemannian metric. 
This is in agreement with [43, Remark I.5], for we are applying concentration compactness to \((X,g)\) as a metric measure space. This observation motivates the concluding paragraph in the Introduction SS0._ ## 2. Consequences of the Wedge Product Theorem 0.4 ### A multilinear wedge product theorem By an inductive argument, we obtain the following generalisation of the multiple wedge product theorem in Robbin-Rogers-Temple [52, Theorem 1.1] (see Lemma 0.2 above). For its formulation we need one more piece of notation: for an \(L^{p}\)-differential form \(\omega\in L^{p}\left(X,\bigwedge^{\bullet}T^{*}X\right)\) with \(1<p<\infty\), Hodge decomposition (see, _e.g._, [53]) yields that \[\omega=\mathrm{d}\omega_{1}+\mathrm{d}^{*}\omega_{2}+\omega_{h},\] where \(\omega_{h}\) is harmonic and \(\mathrm{d}\omega_{1}\), \(\mathrm{d}^{*}\omega_{2}\), and \(\omega_{h}\) are \(L^{p}\)-forms. Moreover, one has the uniqueness result: if it also holds that \(\omega=\mathrm{d}\omega_{1}^{\prime}+\mathrm{d}^{*}\omega_{2}^{\prime}+\omega_{h}^{\prime}\) with \(\mathrm{d}\omega_{1}^{\prime}\), \(\mathrm{d}^{*}\omega_{2}^{\prime}\), \(\omega_{h}^{\prime}\in L^{p}\) and \(\omega_{h}^{\prime}\) being harmonic, then \(\mathrm{d}\omega_{1}=\mathrm{d}\omega_{1}^{\prime}\), \(\mathrm{d}^{*}\omega_{2}=\mathrm{d}^{*}\omega_{2}^{\prime}\), and \(\omega_{h}=\omega_{h}^{\prime}\). So there is a well-defined norm-1 projection \[\boldsymbol{\Pi}^{\mathrm{d}}:L^{p}\left(X,\bigwedge^{\bullet}T^{*}X\right) \longrightarrow L^{p}\left(X,\bigwedge^{\bullet}T^{*}X\right),\qquad \boldsymbol{\Pi}^{\mathrm{d}}(\omega):=\mathrm{d}\omega_{1}. \tag{2.1}\] That is, \(\boldsymbol{\Pi}^{\mathrm{d}}\) is the projection of \(L^{p}\)-differential forms onto the exact part in Hodge decomposition. Our multiple wedge product theorem is stated below. We have an additional "no-loss in norm" condition, and we currently do not know whether it can be dropped. 
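The exponent hypotheses of the next theorem interlock exactly as in (1.8): on the critical line \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{N}\) one has \(p^{\prime}=q^{*}\) and \(q^{\prime}=p^{*}\). As a numerical aside (with arbitrarily chosen sample exponents, not taken from the paper), these identities can be verified with exact rational arithmetic:

```python
from fractions import Fraction

def conj(r):
    # Holder conjugate r' = r/(r-1)
    return r / (r - 1)

def sob(r, N):
    # Sobolev conjugate r* = Nr/(N-r), for 1 <= r < N
    return (N * r) / (N - r)

N = Fraction(4)
for p in [Fraction(3, 2), Fraction(13, 8), Fraction(2)]:
    # solve 1/p + 1/q = 1 + 1/N for q
    q = 1 / (1 + 1 / N - 1 / p)
    assert 1 < p < N and 1 < q < N
    assert conj(p) == sob(q, N)  # p' = q*
    assert conj(q) == sob(p, N)  # q' = p*
```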
**Theorem 2.1**.: _Let \(\left(X^{N},g\right)\) be an \(N\)-dimensional closed Riemannian manifold; \(N\geq 2\). Let \(1<p_{1},\ldots,p_{L}<\infty\) be such that_ \[1\leq\sum_{i=1}^{L}\frac{1}{p_{i}}\leq 1+\frac{1}{N}\] _where \(L\) is a natural number, and assume that for each \(i\in\{1,\ldots,L\}\), one has_ \[q_{i}:=\frac{1}{\sum_{j\in\{1,\ldots,L\}\setminus\{i\}}\frac{1}{p_{j}}}>1.\] _Consider \(L\) sequences of differential forms \(\{\alpha_{1}^{n}\},\ldots,\{\alpha_{L}^{n}\}\), where \(\alpha_{i}^{n}\in L^{p_{i}}\left(X,\bigwedge^{\ell_{i}}T^{*}X\right)\) for each \(n\) and \(0\leq\ell_{i}\leq N\), that satisfy the following conditions:_ 1. \(\alpha_{i}^{n}\rightharpoonup\overline{\alpha}_{i}\) _weakly in_ \(L^{p_{i}}\) _for each_ \(i\in\{1,2,\ldots,L\}\)_;_ 2. \(|\alpha_{i}^{n}-\overline{\alpha}_{i}|^{p_{i}}\ \mathrm{d}\mathrm{vol}_{g}^{X} \rightharpoonup\mu_{i}\) _weakly-_\(\star\) _in_ \(\mathcal{M}\) _for each_ \(i\in\{1,2,\ldots,L\}\)_;_ 3. \(\mathrm{d}\alpha_{i}^{n}\to\mathrm{d}\overline{\alpha}_{i}\) _strongly in_ \(W^{-1,q_{i}^{\prime}}\) _for each_ \(i\in\{1,2,\ldots,L\}\)_; and_ 4. 
_("no-loss in norm") there exists_ \(k\in\{1,2,\ldots,L\}\) _such that_ \[\left\|\alpha_{1}^{n}\wedge\ldots\wedge\widehat{\alpha_{k}^{n}} \wedge\ldots\wedge\alpha_{L}^{n}-\boldsymbol{\Pi}^{\mathrm{d}}\alpha_{1}^{n} \wedge\ldots\wedge\widehat{\boldsymbol{\Pi}^{\mathrm{d}}\alpha_{k}^{n}}\wedge \ldots\wedge\boldsymbol{\Pi}^{\mathrm{d}}\alpha_{L}^{n}\right\|_{L^{p_{k}^{ \prime}}}\\ \longrightarrow\left\|\overline{\alpha}_{1}\wedge\ldots\wedge \widehat{\alpha}_{k}\wedge\ldots\wedge\overline{\alpha}_{L}-\boldsymbol{\Pi}^ {\mathrm{d}}\overline{\alpha}_{1}\wedge\ldots\wedge\widehat{\boldsymbol{\Pi}^ {\mathrm{d}}\overline{\alpha}_{k}}\wedge\ldots\wedge\boldsymbol{\Pi}^{\mathrm{d }}\overline{\alpha}_{L}\right\|_{L^{p_{k}^{\prime}}}\quad\text{as $n\to\infty$},\] (2.2) _wherein the projection operator_ \(\boldsymbol{\Pi}^{\mathrm{d}}\) _has been defined in (_2.1_)._ _Then the convergence result below holds after passing to subsequences:_ \[\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L}^{n}\longrightarrow\overline{\alpha}_{ 1}\wedge\ldots\wedge\overline{\alpha}_{L}+\sum_{k=1}^{\infty}\mathrm{d}\left(v^{ k}\delta_{x^{k}}\right)\qquad\text{in }\mathcal{D}^{\prime}\] _for some sequences of points \(\left\{x^{k}\right\}\subset X\) and \(\left(\sum_{j=1}^{L}\ell_{j}-1\right)\)-covectors \(\left\{v^{k}\right\}\). 
In addition, the weights \(\left\{v^{k}\right\}\) have their moduli bounded as follows:_ \[\left|v^{k}\right|\lesssim_{p_{1},\ldots,p_{L},N,X}\left\{\prod_{j=1}^{L} \left[\mu^{j}\left(\left\{x^{k}\right\}\right)\right]^{\frac{1}{p_{j}}}\right\}.\] **Notation 2.2**.: _Here and hereafter, circumflex in expressions of the form \(\mathfrak{b}_{1}\wedge\ldots\wedge\widehat{\mathfrak{b}_{k}}\wedge\ldots \wedge\mathfrak{b}_{L}\) (and the like) designates the omission of the \(k^{\text{th}}\) entry \(\mathfrak{b}_{k}\)._ Proof of Theorem 2.1.: First of all, by supercommutativity of the wedge product, there is no loss of generality in proving the claim for a single \(k\) in the "no-loss in norm" condition (4). Throughout the proof we shall fix \(k=L\). The thesis follows immediately from Theorem 0.4 by identifying \(\left(\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n},\alpha_{L}^{n}, \overline{\alpha}_{1}\wedge\ldots\wedge\overline{\alpha}_{L-1},\overline{ \alpha}_{L}\right)\) with \(\left(\beta^{n},\alpha^{n},\overline{\beta},\overline{\alpha}\right)\) and \(\left(q_{L},p_{L}\right)\) with \(\left(q,p\right)\), _provided that the hypotheses in Theorem 0.4 are verified_. The estimates on \(v^{k}\) are proved by a simple induction argument, of which the case \(L=2\) is covered by Theorem 0.4. The verification for the hypotheses in Theorem 0.4 is carried out below in two steps. **Step 1.** First, observe that \(\left\{\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n}\right\}\) is bounded in \(L^{q_{L}}\left(X,\bigwedge^{\sum_{j=1}^{L-1}\ell_{j}}T^{*}X\right)\), as each \(\left\{\alpha_{i}^{n}\right\}\) is bounded in \(L^{p_{i}}\) by assumption. We are in the subcritical case in the sense of SS1.3 (for only \((L-1)\) forms are taken into account here, so that \(1\leq\sum_{i=1}^{L-1}\frac{1}{p_{i}}<1+\frac{1}{N}\)). 
Thus, a direct induction and Theorem 0.4 yield that \(\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n}\rightarrow\overline{\alpha}_ {1}\wedge\ldots\wedge\overline{\alpha}_{L-1}\) in the distributional sense. This together with the \(L^{q_{L}}\)-boundedness established above implies that the convergence takes place in the weak \(L^{q_{L}}\)-topology. As a result, by a standard compactness argument, there exists a positive finite Radon measure \(\mu\) on \(X\) such that \[\left|\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n}-\overline{\alpha}_{1} \wedge\ldots\wedge\overline{\alpha}_{L-1}\right|^{q_{L}}\,\mathrm{dvol}_{g}^{ X}\rightharpoonup\mu\qquad\text{weakly-}\star\text{ in }\mathcal{M}(X).\] **Step 2.** We also need to justify that \(\left\{\mathrm{d}\left(\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n}\right)\right\}\) converges strongly in \(W^{-1,p_{L}^{\prime}}\). This relies on the cancellation phenomenon exploited in SS1.2, and will be presented in four substeps. **Step 2A.** By Proposition 1.1, one can decompose each \(\alpha_{j}^{n}\) for \(n\in\mathbb{N}\) and \(j\in\left\{1,\ldots,L-1\right\}\) into \[\alpha_{j}^{n}=\mathrm{d}\gamma_{n,j}+\xi_{j}^{n},\] where \[\begin{cases}\gamma_{n,j}\rightharpoonup\gamma_{j}&\text{weakly in }W^{1,p_{j}},\\ \xi_{j}^{n}\longrightarrow\xi_{j}&\text{strongly in }L^{q_{j}^{\prime}}.\end{cases}\] We need to show that \[\mathrm{d}\left\{\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n} \right\}=\mathrm{d}\left\{(\mathrm{d}\gamma_{n,1}+\xi_{1}^{n})\wedge\ldots \wedge\left(\mathrm{d}\gamma_{n,L-1}+\xi_{L-1}^{n}\right)\right\}\\ \longrightarrow\mathrm{d}\left\{(\mathrm{d}\gamma_{1}+\xi_{1}) \wedge\ldots\wedge(\mathrm{d}\gamma_{L-1}+\xi_{L-1})\right\}=\mathrm{d}\left\{ \overline{\alpha}_{1}\wedge\ldots\wedge\overline{\alpha}_{L-1}\right\}\text{ strongly in }W^{-1,p_{L}^{\prime}}. 
\tag{2.3}\] **Remark 2.3**.: _Of course, (2.3) would follow immediately if one could prove that_ \[\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n}=(\mathrm{d}\gamma_{n,1}+\xi_ {1}^{n})\wedge\ldots\wedge\left(\mathrm{d}\gamma_{n,L-1}+\xi_{L-1}^{n}\right)\] \[\begin{split}&\longrightarrow(\mathrm{d}\gamma_{1}+\xi_{1})\wedge \ldots\wedge(\mathrm{d}\gamma_{L-1}+\xi_{L-1})=\overline{\alpha}_{1}\wedge \ldots\wedge\overline{\alpha}_{L-1}\qquad\text{ strongly in }L^{p^{\prime}_{L}}.\end{split} \tag{2.4}\] _This, unfortunately, is impossible -- the term \(\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}\) is an \((L-1)\)-multilinear combination of weakly convergent sequences in \(L^{p_{j}}\) for \(j\in\{1,\ldots,L-1\}\), which by Holder's inequality is only bounded in \(L^{\left[\sum_{j=1}^{L-1}\frac{1}{p_{j}}\right]^{-1}}=L^{q_{L}}\). But as for (1.8) we have \(q_{L}\leq p^{\prime}_{L}\), so the false claim (2.4) is stronger than what we can actually obtain._ **Step 2B.**: Nevertheless, \(\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}\) is the _only_ troublesome term here -- we assert that \[\begin{split}&\text{ all the other }\left(2^{L-1}-1\right)\text{ terms in }(\mathrm{d}\gamma_{n,1}+\xi_{1}^{n})\wedge\ldots\wedge\left(\mathrm{d} \gamma_{n,L-1}+\xi_{L-1}^{n}\right)\\ &\text{ converge strongly in }L^{p^{\prime}_{L}}.\end{split} \tag{2.5}\] To verify the assertion, first we note that \[\begin{split}&\left\{\text{the sum of all such }\left(2^{L-1}-1\right)\text{ terms in }(\mathrm{d}\gamma_{n,1}+\xi_{1}^{n})\wedge\ldots\wedge\left(\mathrm{d} \gamma_{n,L-1}+\xi_{L-1}^{n}\right)\right\}\\ &\qquad\equiv\left(\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{ n}\right)-\left(\mathbf{\Pi}^{\mathrm{d}}\alpha_{1}^{n}\wedge\ldots\wedge\mathbf{ \Pi}^{\mathrm{d}}\alpha_{L-1}^{n}\right).\end{split}\] This follows from the definition of \(\mathbf{\Pi}^{\mathrm{d}}\), since it holds for each \(j\in\{1,\ldots,L-1\}\) that 
\[\mathbf{\Pi}^{\mathrm{d}}\alpha_{j}^{n}=\mathrm{d}\gamma_{n,j}.\] In addition, if we can prove that \[\left\{\left(\alpha_{1}^{n}\wedge\ldots\wedge\alpha_{L-1}^{n}\right)-\left( \mathbf{\Pi}^{\mathrm{d}}\alpha_{1}^{n}\wedge\ldots\wedge\mathbf{\Pi}^{ \mathrm{d}}\alpha_{L-1}^{n}\right)\right\}\] converges _weakly_ in \(L^{p^{\prime}_{L}}\), then the "no-loss in norm" condition (2.2) will promote it to _strong_ convergence in \(L^{p^{\prime}_{L}}\), by virtue of the Radon-Riesz theorem. See, _e.g._, [1, p.17, Theorem 1.37]. For this purpose, notice that any of the \(\left(2^{L-1}-1\right)\) terms in consideration takes the form \(\mathfrak{a}_{n,1}\wedge\ldots\wedge\mathfrak{a}_{n,L-1}\), where * each \(\mathfrak{a}_{n,j}\) is either \(\mathrm{d}\gamma_{n,j}\) or \(\xi_{j}^{n}\); and * for at least one \(j\in\{1,\ldots,L-1\}\), \(\mathfrak{a}_{n,j}=\xi_{j}^{n}\). It suffices to prove the convergence of those terms with \(\mathfrak{a}_{n,j}=\xi_{j}^{n}\) for _only one_ \(j\in\{1,\ldots,L-1\}\), because at each place \(\mathfrak{a}_{n,j}\) the choice \(\xi_{j}^{n}\) has higher regularity than \(\mathrm{d}\gamma_{n,j}\). In fact, \(\left\{\xi_{j}^{n}\right\}\) is strongly convergent in \(L^{q^{\prime}_{j}}\) and \(\{\mathrm{d}\gamma_{n,j}\}\) is weakly convergent in \(L^{p_{j}}\), with the indices \(p_{j}\leq q^{\prime}_{j}\) (again argued as for (1.8)). Thus, as the underlying space \(\left(X,\,\mathrm{d}\mathrm{vol}_{g}^{X}\right)\) is compact, the more \(\xi_{j}^{n}\) we have in \(\mathfrak{a}_{n,j}\), the better convergence results one may obtain. Summarising the above discussions, it now suffices to show that \[\left\{\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,j-1}\wedge \xi_{j}^{n}\wedge\mathrm{d}\gamma_{n,j+1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}\right\}\quad\text{converges weakly in }L^{p^{\prime}_{L}} \tag{2.6}\] for each \(j\in\{1,\ldots,L-1\}\). 
In the \((L-1)\)-multilinear combination above, we have one strongly convergent term \(\xi_{j}^{n}\). By Holder's inequality, \(\left\{\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,j-1} \wedge\xi_{j}^{n}\wedge\mathrm{d}\gamma_{n,j+1}\wedge\ldots\wedge\mathrm{d} \gamma_{n,L-1}\right\}\) converges weakly in \(L^{\mathcal{Q}}\), with the index \[\begin{split}\mathcal{Q}&=\left[\left(\frac{1}{p_{1}}+ \ldots+\frac{1}{p_{j-1}}+\frac{1}{p_{j+1}}+\ldots+\frac{1}{p_{L-1}}\right)+ \frac{1}{q^{\prime}_{j}}\right]^{-1}\\ &=\left[\left(\frac{1}{q_{j}}-\frac{1}{p_{L}}\right)+\frac{1}{q^{ \prime}_{j}}\right]^{-1}\end{split}\] \[=\left[1-\frac{1}{p_{L}}\right]^{-1}\] \[=p_{L}^{\prime}.\] Thus, (2.6) is proved, and hence follows the assertion (2.5). **Step 2C.** Let us return to the term \(\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}\). Recall from the fake claim (2.4) and the ensuing paragraph that it cannot be proved to converge strongly in \(L^{p_{L}^{\prime}}\). However, \[\mathrm{d}\left[\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L- 1}\right]=0\qquad\text{in the sense of distributions}. \tag{2.7}\] Once established, this identity together with (2.5) above implies (2.3). 
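The exponent computation in Step 2B can be double-checked numerically. The Python snippet below, with a hypothetical choice of \(L=4\) and \(p_{1},\ldots,p_{4}\) (illustrative values, not taken from the paper), confirms that the Holder exponent \(\mathcal{Q}\) of each term containing exactly one \(\xi_{j}^{n}\) equals \(p_{L}^{\prime}\):

```python
from fractions import Fraction

# hypothetical exponents (not from the paper): L = 4, with
# sum 1/p_i = 16/15, so 1 <= sum <= 1 + 1/N holds for N = 10
N = 10
p = [Fraction(3), Fraction(5), Fraction(5), Fraction(3)]
L = len(p)
s = sum(1 / pi for pi in p)
assert 1 <= s <= 1 + Fraction(1, N)

def q(i):
    # q_i = 1 / sum_{j != i} 1/p_j
    return 1 / (s - 1 / p[i])

assert all(q(i) > 1 for i in range(L))
conj = lambda r: r / (r - 1)  # Holder conjugate

# Step 2B: the Holder exponent Q of the term with xi_j^n in slot j
# (and d gamma's in the other first L-1 slots) equals p_L'
for j in range(L - 1):
    inv_Q = (s - 1 / p[L - 1] - 1 / p[j]) + 1 / conj(q(j))
    assert 1 / inv_Q == conj(p[L - 1])
```

Algebraically this is the identity \(\frac{1}{\mathcal{Q}}=\left(\frac{1}{q_{j}}-\frac{1}{p_{L}}\right)+\frac{1}{q_{j}^{\prime}}=1-\frac{1}{p_{L}}\), independent of the particular exponents chosen.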
The key to the justification of (2.7) lies in its definition: the term \([\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}]\) therein is well defined only when understood as follows: \[[\mathrm{d}\gamma_{n,1}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}] :=\mathrm{d}\left\{\gamma_{n,1}\wedge\mathrm{d}\gamma_{n,2}\wedge\ldots \wedge\mathrm{d}\gamma_{n,L-1}\right\}.\] Here \(\gamma_{n,1}\wedge\mathrm{d}\gamma_{n,2}\wedge\ldots\wedge\mathrm{d}\gamma_{n,L-1}\) is a well defined \(L^{1}\)-form by Sobolev embedding and Holder's inequality, arguing as in SS1.2. With this interpretation, (2.7) is an immediate consequence of \(\mathrm{d}\circ\mathrm{d}=0\) in the sense of distributions. This establishes (2.3), hence completes Step 2 and the proof of Theorem 2.1. ### Div-curl lemma on closed manifolds As a consequence of Theorem 0.4, we obtain the following generalisation of the div-curl lemma in Briane-Casado-Diaz-Murat [10, Theorem 2.3] to differential forms on closed manifolds. **Corollary 2.4**.: _Let \(\left(X^{N},g\right)\) be an \(N\)-dimensional closed Riemannian manifold; \(N\geq 2\). Let \(1<p,q<\infty\) satisfy (0.1). Consider two sequences of differential \(\ell\)-forms \(\{\alpha^{n}\}\subset L^{p}\left(X,\bigwedge^{\ell}T^{*}X\right)\) and \(\{\theta^{n}\}\subset L^{q}\left(X,\bigwedge^{\ell}T^{*}X\right)\) satisfying the following conditions:_ 1. \(\alpha^{n}\rightharpoonup\overline{\alpha}\) _weakly in_ \(L^{p}\)_;_ 2. \(\theta^{n}\rightharpoonup\overline{\theta}\) _weakly in_ \(L^{q}\)_;_ 3. \(|\alpha^{n}-\overline{\alpha}|^{p}\;\mathrm{dvol}_{g}^{X}\rightharpoonup\mu\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 4. \(\left|\theta^{n}-\overline{\theta}\right|^{q}\;\mathrm{dvol}_{g}^{X} \rightharpoonup\nu\) _weakly-_\(\star\) _in_ \(\mathcal{M}\)_;_ 5. \(\mathrm{d}\alpha^{n}\rightharpoonup\mathrm{d}\overline{\alpha}\) _strongly in_ \(W^{-1,q^{\prime}}\)_;_ 6. 
\(\mathrm{d}^{*}\theta^{n}\rightharpoonup\mathrm{d}^{*}\overline{\theta}\) _strongly in_ \(W^{-1,p^{\prime}}\)_._ _Then we have the following convergence (modulo subsequences) in the sense of distributions:_ \[\left\langle\alpha^{n},\theta^{n}\right\rangle_{g}\longrightarrow\left\langle \overline{\alpha},\overline{\theta}\right\rangle_{g}+\sum_{k=1}^{\infty} \mathrm{div}_{g}\left(r^{k}\delta_{x^{k}}\right)\qquad\text{in $\mathcal{D}^{\prime}$}\] _for some sequences of points \(\left\{x^{k}\right\}\subset X\) and vectors \(\left\{r^{k}\right\}\) such that_ \[\left|r^{k}\right|\lesssim_{p,N,X}\left[\mu\left(\left\{x^{k}\right\} \right)\right]^{\frac{1}{p}}\left[\nu\left(\left\{x^{k}\right\}\right)\right]^ {\frac{1}{q}}.\] _Moreover, if \(\frac{1}{p}+\frac{1}{q}<1+\frac{1}{N}\) in (0.1), then all \(r^{k}\) are zero._ Recall that \(\left\langle\alpha,\theta\right\rangle_{g}:=\star\left[\alpha\wedge\star \theta\right]\) for differential \(\ell\)-forms \(\alpha\) and \(\theta\) on \(X\). When \(\ell=1\) (with differential one-forms identified with vectorfields via the musical isomorphisms), Corollary 2.4 reduces to [10, Theorem 2.3] in Briane-Casado-Diaz-Murat. The Hodge star \(\star\), the codifferential \(\mathrm{d}^{*}=(-1)^{N(\ell+1)+1}\star\mathrm{d}\star\), the divergence \(\mathrm{div}_{g}\), and the inner product \(\left\langle\bullet,\bullet\right\rangle_{g}\) are all taken with respect to the Riemannian metric \(g\). Meanwhile, as commented beneath the statement of Theorem 0.4, when \(\frac{1}{p}+\frac{1}{q}<1\), classical div-curl lemma or wedge product theorem applies to yield that \(\left\langle\alpha^{n},\theta^{n}\right\rangle_{g}\rightarrow\left\langle \overline{\alpha},\overline{\theta}\right\rangle_{g}\) in the sense of distributions. 
Proof of Corollary 2.4.: Applying Theorem 0.4 to \(\alpha^{n}\) and \(\beta^{n}:=\star\theta^{n}\in L^{q}\left(X,\bigwedge^{N-\ell}T^{*}X\right)\), one obtains that \[\alpha^{n}\wedge\beta^{n}\longrightarrow\overline{\alpha}\wedge\overline{\beta}+\sum_{k=1}^{\infty}\mathrm{d}\left(v^{k}\delta_{x^{k}}\right) \qquad\text{in $\mathcal{D}^{\prime}$}\left(X,\bigwedge^{N}T^{*}X\right),\] where \(v^{k}\) are \((N-1)\)-covectors. We conclude the proof by taking the Hodge star on both sides and identifying \(r^{k}\) with \(\star v^{k}\) via the musical isomorphism \(\sharp:TX\cong T^{*}X\), using the fact that both Hodge star and \(\sharp\) are Sobolev space isometries. ### Smooth cycles We observe from Theorem 0.4 that, even in the critical case \(\frac{1}{p}+\frac{1}{q}=\frac{1}{N}+1\), the concentration distribution is "exact". Thus we make the following observation (where \(\left\langle\bullet,\bullet\right\rangle\) denotes the duality pairing between currents and differential forms): **Corollary 2.5**.: _Let \(\left(X^{N},g\right)\) be an \(N\)-dimensional closed Riemannian manifold; \(N\geq 2\). Let \(1<p,q<\infty\) be such that \(\frac{1}{p}+\frac{1}{q}=\frac{1}{N}+1\). Consider differential forms \(\{\alpha^{n}\}\) and \(\{\beta^{n}\}\) as in Theorem 0.4, where \(\ell_{1}+\ell_{2}=N=\dim X\). Then for any smooth N-current \(T\in\mathcal{R}^{N}(X)\) which is a cycle (i.e., \(\partial T=0\)), we have \(\left\langle T,\alpha^{n}\wedge\beta^{n}\right\rangle\rightarrow\left\langle T,\overline{\alpha}\wedge\overline{\beta}\right\rangle\) as \(n\rightarrow\infty\)._ Proof.: This follows from \(\left\langle T,\mathrm{d}\left(\sum_{k}v^{k}\delta_{x^{k}}\right)\right\rangle= \left\langle\partial T,\sum_{k}v^{k}\delta_{x^{k}}\right\rangle=0\), where \(\mathrm{d}\left(\sum_{k}v^{k}\delta_{x^{k}}\right)\) is the concentration distribution as in Theorem 0.4. 
Alternatively, rectifiable currents with \(\mathcal{C}^{\infty}\)-regularity can be represented by smooth vectorfields, for which the cycle condition \(\partial T=0\) is equivalent to the divergence-free condition. So the corollary also follows from \(\mathrm{d}^{*}\circ\mathrm{d}^{*}=0\) ## 3. The "endpoint critical" cases \(\left\{p,q\right\}=\left\{1,N\right\}\) In this section we discuss the generalisation of SSSS3 & 4 in [10], _i.e._, the cases \(\left(p,q\right)=\left(1,N\right)\) or \(\left(p,q\right)=\left(N,1\right)\), to differential forms on closed manifolds. These are the endpoint cases of the critical regime \(\frac{1}{p}+\frac{1}{q}=\frac{1}{N}+1\), for which we need stronger compactness assumptions for \(\mathrm{d}\alpha^{n}\) and \(\mathrm{d}\beta^{n}\) (namely, Assumptions (5)+(6) in Theorem 0.7) to ensure the distributional convergence of wedge products. For the convenience of readers, let us reproduce Theorem 0.7 below. **Theorem**.: _Let \(\left(X^{N},g\right)\) be an \(N\)-dimensional closed Riemannian manifold; \(N\geq 2\). Consider two sequences of differential forms \(\left\{\alpha^{n}\right\}\subset\mathcal{M}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\) and \(\left\{\beta^{n}\right\}\subset L^{N}\left(X,\bigwedge^{\ell_{2}}T^{*}X\right)\) satisfying the following conditions:_ 1. \(\alpha^{n}\rightharpoonup\overline{\alpha}\) _weakly_\(\star\) _in_ \(\mathcal{M}\)_;_ 2. \(\beta^{n}\rightharpoonup\overline{\beta}\) _weakly in_ \(L^{N}\)_;_ 3. \(\left|\alpha^{n}-\overline{\alpha}\right|\rightharpoonup\mu\) _weakly_\(\star\) _in_ \(\mathcal{M}\)_;_ 4. \(\left|\beta^{n}-\overline{\beta}\right|^{N}\rightharpoonup\nu\) _weakly_\(\star\) _in_ \(\mathcal{M}\)_;_ 5. \(\mathrm{d}\alpha^{n}\to\mathrm{d}\overline{\alpha}\) _strongly in_ \(W^{-1,N^{\prime}}\)_;_ 6. 
\(\mathrm{d}\beta^{n}\to\mathrm{d}\overline{\beta}\) _strongly in_ \(L^{N}\)_._ _Then we have the following convergence (modulo subsequences) in the sense of distributions:_ \[\alpha^{n}\wedge\beta^{n}\longrightarrow\overline{\alpha}\wedge\overline{ \beta}+\sum_{k=1}^{\infty}\mathrm{d}\left(v^{k}\delta_{x^{k}}\right)\qquad \text{in }\mathcal{D}^{\prime} \tag{3.1}\] _for some sequences of points \(\left\{x^{k}\right\}\subset X\) and \(\left(\ell_{1}+\ell_{2}-1\right)\)-covectors \(\left\{v^{k}\right\}\). Moreover,_ \[\left|v^{k}\right|\lesssim_{N,X}\left[\mu\left(\left\{x^{k}\right\}\right) \right]\left[\nu\left(\left\{x^{k}\right\}\right)\right]^{\frac{1}{N}}.\] Recall that \(\mathcal{M}\left(X,\bigwedge^{\bullet}T^{*}X\right)\) is understood as the space of \(\bigwedge^{\bullet}T^{*}X\)-valued finite Radon measures on \(X\), where \(\bigwedge^{\bullet}T^{*}X\) is a vector space. Thus the total variation measure (hence the total variation norm) of each of its elements is well defined. Also note that \(N^{\prime}=\frac{N}{N-1}\). Our proof essentially follows the strategies in [10, SS3], and the idea is similar to that of Theorem 0.4, except that a more delicate analysis of endpoint Sobolev embeddings is required. To avoid redundancies, we shall only elaborate on the necessary modifications. Proof of Theorem 0.7.: We divide our arguments into seven steps below. **Step 1.** As for Theorem 0.4, consider the decomposition \[\begin{cases}\alpha^{n}=\mathrm{d}\gamma_{n}+\xi^{n},\\ \beta^{n}=\mathrm{d}\zeta^{n}+\eta^{n}.\end{cases}\] The proof for Proposition 1.1 yields that \[\zeta^{n}\rightharpoonup\zeta\text{ weakly in }W^{1,N}\left(X, \bigwedge^{\ell_{2}-1}T^{*}X\right),\] \[\eta^{n}\longrightarrow\eta\text{ strongly in }W^{1,N}\left(X, \bigwedge^{\ell_{2}}T^{*}X\right),\] \[\overline{\beta}=\mathrm{d}\zeta+\eta.\] Estimates for \(\gamma_{n}\) and \(\xi^{n}\) pertain to the endpoint case of elliptic estimates _a la_ Bourgain-Brezis [6] and Brezis-Van Schaftingen [8]. 
Thanks to [6, 8], \(\gamma_{n}\) and \(\xi^{n}\) can be chosen to verify \[\gamma_{n}\longrightarrow\gamma\text{ strongly in }W^{1,N^{\prime}} \left(X,\bigwedge^{\ell_{1}-1}T^{*}X\right),\] \[\xi^{n}\rightharpoonup\xi\text{ weakly in }W^{-1,N^{\prime}} \left(X,\bigwedge^{\ell_{1}}T^{*}X\right)\text{ and weakly-}\star\text{ in }\mathcal{M}\left(X,\bigwedge^{\ell_{1}}T^{*}X\right),\] \[\overline{\alpha}=\mathrm{d}\gamma+\xi.\] **Step 2.** Now let us define \[\alpha^{n}\wedge\beta^{n} :=\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta^{n}+\xi^{n}\wedge \mathrm{d}\zeta^{n}+\mathrm{d}\gamma_{n}\wedge\eta^{n}+\xi^{n}\wedge\eta^{n}\] \[=J_{1}^{n}+J_{2}^{n}+J_{3}^{n}+J_{4}^{n}\qquad\text{in the sense of distributions.}\] Indeed, \(J_{3}^{n}\) is a strong-strong pairing of \(L^{N^{\prime}}\) and \(W^{1,N}\), while \(J_{4}^{n}\) is a weak-strong pairing of \(W^{-1,N^{\prime}}\) and \(W^{1,N}\); hence, they converge in the sense of distributions. Also, for \(J_{1}^{n}\) one observes that \(\gamma_{n}\wedge\mathrm{d}\zeta^{n}\) is a weak-strong pairing of \(L^{N}\) and \(L^{N^{\prime}}\), so it converges in the weak \(L^{1}\)-topology. As a consequence, we may define \(\mathrm{d}\gamma_{n}\wedge\mathrm{d}\zeta^{n}:=\mathrm{d}\left(\gamma_{n} \wedge\mathrm{d}\zeta^{n}\right)\) in the distributional sense. It remains to investigate the (well-definedness as a distribution and) convergence of the weak-weak pairing \(J_{2}^{n}\). **Step 3.** To see that \[J_{2}^{n}:=\xi^{n}\wedge\mathrm{d}\zeta^{n}\] is well defined as a differential form-valued distribution on \(X\), one needs to prove that for any testform \(\varrho\in\mathcal{C}^{\infty}\left(X,\bigwedge^{N-\ell_{1}-\ell_{2}}T^{*}X\right)\), the following expression is well defined: \[I^{n}\equiv\int_{X}\xi^{n}\wedge\mathrm{d}\zeta^{n}\wedge\varrho.\] Clearly there is nothing to prove when \(\ell_{1}+\ell_{2}>N\). 
For this purpose, we integrate by parts using the superdistributivity of \(\mathrm{d}\) over wedge product and the Stokes' theorem to define \[I^{n} =I_{1}^{n}+I_{2}^{n}\] \[:=\left[(-1)^{\ell_{1}+1}\int_{X}\mathrm{d}\xi^{n}\wedge\zeta^{n} \wedge\varrho\right]+\left[(-1)^{\ell_{2}+1}\int_{X}\xi^{n}\wedge\zeta^{n} \wedge\mathrm{d}\varrho\right].\] For \(I_{1}^{n}\), we make use of the decomposition \(\alpha^{n}=\mathrm{d}\gamma_{n}+\xi^{n}\) to infer that \(\mathrm{d}\alpha^{n}=\mathrm{d}\xi^{n}\) in the sense of distributions. But it is assumed that \(\mathrm{d}\alpha^{n}\to\mathrm{d}\overline{\alpha}\) strongly in \(W^{-1,N^{\prime}}\) (see assumption (5)), so the term \(\mathrm{d}\xi^{n}\wedge\zeta^{n}\) appearing in the integrand of \(I_{1}^{n}\) is a strong-weak pairing of \(W^{-1,N^{\prime}}\) and \(W^{1,N}\), thus making \(I_{1}^{n}\) well defined and converge as \(n\to\infty\) to \((-1)^{\ell_{1}+1}\int_{X}\mathrm{d}\xi\wedge\zeta\wedge\varrho\) as desired. It now remains to investigate the convergence of \[\tilde{I}_{2}^{n}:=\int_{X}\xi^{n}\wedge\zeta^{n}\wedge\mathrm{d}\varrho,\] which is nothing but \(I_{2}^{n}\) with the immaterial sign neglected. Here \(\varrho\) is an arbitrary testform in \(\mathcal{C}^{\infty}\left(X,\bigwedge^{N-\ell_{1}-\ell_{2}}T^{*}X\right)\), and \(\xi^{n}\wedge\zeta^{n}\) is a weak-weak pairing of \(W^{-1,N^{\prime}}\) and \(W^{1,N}\). **Step 4**.: To make sense of \(\xi^{n}\wedge\zeta^{n}\), recall that \(\xi^{n}\) is chosen to be coexact: by Hodge decomposition \(\xi^{n}=\mathrm{d}^{*}k^{n}+h^{n}\) where \(h^{n}\) is harmonic. Then one may find from Lemma B.1 that \[\sigma^{n}=\Delta^{-1}\xi^{n}\in W^{1,N^{\prime}}\left(X;\bigwedge^{\ell_{1}}T^{ *}X\right),\] which is unique by fixing the cohomology class once and for all. Moreover, we have \(\mathrm{d}^{*}\sigma^{n}=0\). 
The above arguments lead to \[\tilde{I}_{2}^{n} =\int_{X}\Delta\sigma^{n}\wedge\zeta^{n}\wedge\mathrm{d}\varrho\] \[=\int_{X}\left(\mathrm{d}\mathrm{d}^{*}\sigma^{n}+\mathrm{d}^{*} \mathrm{d}\sigma^{n}\right)\wedge\zeta^{n}\wedge\mathrm{d}\varrho\] \[=\int_{X}\mathrm{d}\sigma^{n}\wedge\mathrm{d}^{*}\left[\zeta^{n} \wedge\mathrm{d}\varrho\right].\] In the right-most term, \(\{\mathrm{d}\sigma^{n}\}\) is weakly convergent in \(L^{N^{\prime}}\), and \(\{\mathrm{d}^{*}\left[\zeta^{n}\wedge\mathrm{d}\varrho\right]\}\) is weakly convergent in \(L^{N}\), after passing to subsequences if necessary. Denote the weak limits of \(\{\sigma^{n}\}\) in \(W^{1,N^{\prime}}\) (existence follows from the quantitative statement in Lemma B.1) and \(\{\zeta^{n}\}\) in \(W^{1,N}\) by \(\sigma\) and \(\zeta\), respectively. We are now in the situation of applying Lemma A.1 to obtain a defect measure \(\varpi=\varpi[\varrho]\in\mathcal{M}\left(X;\bigwedge^{N}T^{*}X\right)\) that is linear in the testform \(\varrho\): \[\lim_{n\to\infty}\tilde{I}_{2}^{n}=\int_{X}\mathrm{d}\sigma\wedge\mathrm{d}^{* }\left[\zeta\wedge\mathrm{d}\varrho\right]+\varpi(X).\] By Stokes' theorem and the fact that \(\mathrm{d}^{*}\sigma=0\) in the sense of distributions, we can rewrite this identity as follows: \[\lim_{n\to\infty}\tilde{I}_{2}^{n} =\int_{X}\Delta\sigma\wedge\zeta\wedge\mathrm{d}\varrho+\varpi(X)\] \[=\int_{X}\xi\wedge\zeta\wedge\mathrm{d}\varrho+\varpi(X).\] It thus remains to investigate the defect measure(-valued \(N\)-form) \(\varpi\). 
**Step 5**. Consider the limiting measures \[\lambda:=\mathcal{M}-\lim_{n\to\infty}\left|\mathrm{d}^{*}\big[\zeta^{n}\wedge\mathrm{d}\varrho\big]-\mathrm{d}^{*}\big[\zeta\wedge\mathrm{d}\varrho\big]\right|^{N},\] \[\lambda^{\prime}:=\mathcal{M}-\lim_{n\to\infty}\left|\mathrm{d}\left(\sigma^{n}-\sigma\right)\right|^{N^{\prime}}.\] We **claim** that the total variation measure of \(\lambda\) is controlled by \(\nu\), the measure in assumption (4): \[\left|\lambda\right|(B)\lesssim_{(X,g),\left\|\varrho\right\|_{\mathcal{C}^{2}}}\nu(B)\qquad\text{ for any Borel }B\subset X. \tag{3.2}\] Proof of Claim (3.2). It suffices to prove the claim when \(B=\mathbf{B}_{1}(0)\subset\mathbb{R}^{N}\) is the Euclidean ball and \(X\) is a compact subset of \(\mathbb{R}^{N}\). To see this, let \(\{\chi_{1},\ldots,\chi_{K}\}\) be a \(\mathcal{C}^{\infty}\)-partition of unity on the compact manifold \((X,g)\) subordinate to an atlas of \(\mathcal{C}^{\infty}\)-charts, each of which is diffeomorphic to \(\mathbf{B}_{1}(0)\). Then, viewing \(|\bullet|^{N}\) as positive measures on \(X\), we have \[\left|\mathrm{d}^{*}\big[\zeta^{n}\wedge\mathrm{d}\varrho\big]-\mathrm{d}^{*}\big[\zeta\wedge\mathrm{d}\varrho\big]\right|^{N}(B)=\left|\mathrm{d}^{*}\left[(\zeta^{n}-\zeta)\wedge\mathrm{d}\varrho\left(\sum_{i=1}^{K}\chi_{i}\right)\right]\right|^{N}(B)\] \[\lesssim_{N,K}\sum_{i=1}^{K}\left|\mathrm{d}^{*}\left[\chi_{i}(\zeta^{n}-\zeta)\wedge\mathrm{d}\varrho\right]\right|^{N}(B)\] \[\lesssim_{K,(X,g)}\|\varrho\|_{\mathcal{C}^{2}}^{N}\cdot\left\{\sum_{i=1}^{K}\left|\nabla\left(\chi_{i}\zeta^{n}-\chi_{i}\zeta\right)\right|^{N}(B)\right\},\] where the last inequality depends on the \(\mathcal{C}^{1}\)-geometry of \((X,g)\). So it is enough to prove the claim for \((\chi_{i}\zeta^{n},\chi_{i}\zeta)\) in place of \((\zeta^{n},\zeta)\).
Also, as \(\zeta^{n}-\zeta\) is coexact, we may also assume that \(\chi_{i}\zeta^{n}-\chi_{i}\zeta\) is coexact, since \[|\mathrm{d}^{*}\left[\chi_{i}\left(\zeta^{n}-\zeta\right)\right]|^{N}=|\chi_{i }\mathrm{d}^{*}\left(\zeta^{n}-\zeta\right)|^{N}+R,\] where the remainder term \(R\) involves no derivative in \(\zeta^{n}-\zeta\), hence \(|R|\lesssim_{\|\chi_{i}\|_{\mathcal{C}^{1}}}|\zeta^{n}-\zeta|^{N}\to 0\) as \(n\to\infty\) in the sense of Radon measures. A similar computation for commuting \(\nabla\) with the diffeomorphism between the chart containing the support of \(\chi_{i}\) and the Euclidean ball \(\mathbf{B}_{1}(0)\) shows that (3.2) is invariant under such diffeomorphisms. Thus, we may assume without loss of generality that \(B=\mathbf{B}_{1}(0)\) and \(\chi_{i}\zeta^{n}-\chi_{i}\zeta\) is coexact and compactly supported in \(B\) for each fixed \(i\in\{1,\ldots,K\}\). We relabel \((\chi_{i}\zeta^{n},\chi_{i}\zeta)\) in the above as \((\zeta^{n},\zeta)\) from now on. It remains to argue, in view of the previous reduction, that \[\left|\nabla\left(\zeta^{n}-\zeta\right)\right|^{N}(B)\lesssim\left|\mathrm{d }\left(\zeta^{n}-\zeta\right)\right|^{N}(B). \tag{3.3}\] Indeed, assuming this and recalling from Step 1 that \(\mathrm{d}\left(\zeta^{n}-\zeta\right)=\left(\beta^{n}-\overline{\beta} \right)-(\eta^{n}-\eta)\), where \(\eta^{n}\to\eta\) strongly in \(W^{1,N}\) and \(\left|\beta^{n}-\overline{\beta}\right|^{N}\rightharpoonup\nu\) in \(\mathcal{M}\) by Assumption (4), we conclude the **claim**. To see (3.3), we consider a standard mollifier \(\mathcal{J}_{\delta}\) acting on \(\left|\nabla\left(\zeta^{n}-\zeta\right)\right|^{N}\), which is possible since \(\zeta^{n}-\zeta\) is compactly supported in \(B\). 
Then, for \(s^{n}:=\nabla\left(\zeta^{n}-\zeta\right)\) we have that \[\left|\left\|s^{n}\right\|_{L^{N}(B)}-\left\|s^{n}\star\mathcal{J}_{\delta}\right\|_{L^{N}(B)}\right|\leq\left\|s^{n}-s^{n}\star\mathcal{J}_{\delta}\right\|_{L^{N}(B)}\lesssim\mathfrak{o}(1)\quad\text{ as }\delta\to 0^{+}.\] So one can further assume \(\zeta^{n},\zeta\in\mathcal{C}^{\infty}\). In this case, since \(\mathrm{d}^{*}(\zeta^{n}-\zeta)=0\) and \(\zeta^{n}-\zeta\in W^{1,N}_{0}\left(B;\bigwedge^{\ell_{2}-1}\mathbb{R}^{N}\right)\) (so that the usual Poincaré inequality applies), we deduce (3.3) directly from Gaffney's inequality. See [22, 29, 42, 5] among other references. The proof of the **claim** is now complete. **Step 6.** As argued in §1.4, it remains to prove the bound \[\left\{\int_{X}\left|\varphi\right|^{N^{\prime}}\mathrm{d}\lambda^{\prime}\right\}^{\frac{1}{N^{\prime}}}\lesssim_{N,X}\int_{X}\left|\varphi\right|\mathrm{d}\mu\quad\text{for any }\varphi\in\mathcal{C}^{\infty}(X). \tag{3.4}\] Once this is established, an application of the second concentration compactness lemma _a la_ P.-L. Lions [43, p.158, Lemma I.1] together with Lemma A.1 readily concludes the proof. See (1.10) and the ensuing arguments, as well as [10, p.486, from Equation (50) to the end of the proof of Theorem 3.1]. To this end, observe that \[\mathcal{M}-\lim_{n\to\infty}\left|\Delta\left(\sigma^{n}-\sigma\right)\right|=\mathcal{M}-\lim_{n\to\infty}\left|\xi^{n}-\xi\right|=\mathcal{M}-\lim_{n\to\infty}\left|(\alpha^{n}-\mathrm{d}\gamma_{n})-(\alpha-\mathrm{d}\gamma)\right|=\mathcal{M}-\lim_{n\to\infty}\left|\alpha^{n}-\alpha\right|=\mu. \tag{3.5}\] The penultimate equality follows from the strong convergence \(\mathrm{d}\gamma_{n}\to\mathrm{d}\gamma\) in \(L^{N^{\prime}}\), and the final one from the definition of \(\mu\). Also, as \(\sigma^{n}\) is coexact, so is \(\Delta\left(\sigma^{n}-\sigma\right)\).
By (3.5) and \(\lambda^{\prime}:=\mathcal{M}-\lim_{n\to\infty}\left|\mathrm{d}\left(\sigma^{n}-\sigma\right)\right|^{N^{\prime}}\), we see that (3.4) follows immediately from the bound \[\limsup_{n\to\infty}\left\|\varphi\,\mathrm{d}\left(\sigma^{n}-\sigma\right)\right\|_{L^{N^{\prime}}(X)}\lesssim_{N,X}\limsup_{n\to\infty}\left\|\varphi\,\Delta\left(\sigma^{n}-\sigma\right)\right\|_{\mathcal{M}(X)}\quad\text{for any }\varphi\in\mathcal{C}^{\infty}(X). \tag{3.6}\] [In contrast to (3.4), the norms \(\left\|\bullet\right\|_{L^{N^{\prime}}(X)}\) and \(\left\|\bullet\right\|_{\mathcal{M}(X)}\) are taken with respect to the metric \(g\) as usual.] To prove (3.6), notice that it is equivalent to \[\limsup_{n\to\infty}\left\|\mathrm{d}\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}\right\|_{L^{N^{\prime}}(X)}\lesssim_{N,X}\limsup_{n\to\infty}\left\|\Delta\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}\right\|_{\mathcal{M}(X)}, \tag{3.7}\] in view of the identities \[\mathrm{d}\left[\varphi\left(\sigma^{n}-\sigma\right)\right]=\mathrm{d}\varphi\wedge\left(\sigma^{n}-\sigma\right)+\varphi\,\mathrm{d}\left(\sigma^{n}-\sigma\right),\] where \(\mathrm{d}\varphi\wedge\left(\sigma^{n}-\sigma\right)\to 0\) in \(L^{N^{\prime}}\), and \[\Delta\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}=\left(\Delta\varphi\right)\left(\sigma^{n}-\sigma\right)+2g^{ij}\nabla_{i}\varphi\nabla_{j}\left(\sigma^{n}-\sigma\right)+\varphi\Delta\left(\sigma^{n}-\sigma\right),\] where \(\left(\Delta\varphi\right)\left(\sigma^{n}-\sigma\right)\to 0\) in \(L^{N^{\prime}}\) and \(2g^{ij}\nabla_{i}\varphi\nabla_{j}\left(\sigma^{n}-\sigma\right)\rightharpoonup 0\) weakly-\(\star\) in \(\mathcal{M}\). It remains to justify (3.7). For the moment, let us work under _an additional assumption_: \[X\text{ has no nontrivial harmonic }\ell_{1}\text{-form, namely }H^{\ell_{1}}_{\mathrm{dR}}(X)=\{0\}. \tag{3.8}\]
It then follows from the endpoint elliptic estimate in Brezis-Van Schaftingen [8] (see also Lemma B.1 in the appendix) that \[\left\|\mathrm{d}\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}\right\|_{L^{N^{\prime}}(X)}\lesssim_{N,X}\left\|\Delta\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}\right\|_{\mathcal{M}(X)}+\left\|\mathrm{d}^{\ast}\Delta\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}\right\|_{W^{-2,N^{\prime}}}.\] The last term on the right-hand side vanishes under \(\limsup_{n\to\infty}\), because \[\mathrm{d}^{\ast}\Delta\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}=\Delta\mathrm{d}^{\ast}\left\{\varphi\left(\sigma^{n}-\sigma\right)\right\}\] \[=(-1)^{N(\ell_{1}+1)+1}\Delta\star\mathrm{d}\left\{\varphi\left[\star\left(\sigma^{n}-\sigma\right)\right]\right\}\] \[=(-1)^{N(\ell_{1}+1)+1}\Delta\star\left\{\mathrm{d}\varphi\wedge\left[\star\left(\sigma^{n}-\sigma\right)\right]\right\}+\Delta\left[\varphi\,\mathrm{d}^{\ast}\left(\sigma^{n}-\sigma\right)\right]\] \[=:A_{1}+A_{2},\] thanks to the commutativity between \(\mathrm{d}^{\ast}\) and \(\Delta\), the definition of the codifferential \(\mathrm{d}^{\ast}\), and the superdistributivity of \(\mathrm{d}\) under wedge product. The term \(A_{2}\) is zero in \(W^{-2,N^{\prime}}\) since \(\sigma^{n}-\sigma\in W^{1,N^{\prime}}\) is coexact, while \(\lim_{n\to\infty}\left\|A_{1}\right\|_{W^{-2,N^{\prime}}}=0\) since \(\sigma^{n}-\sigma\) converges weakly to zero in \(W^{1,N^{\prime}}\), hence strongly in \(L^{N^{\prime}}\) by Rellich's lemma. Thus, (3.7) is established under the hypothesis (3.8), from which (3.4) follows. **Step 7.** Finally, let us remove the additional assumption (3.8). This follows from a simple observation: the statement of Theorem 0.7 that we are proving is essentially local. More precisely, it suffices to prove the theorem in each local chart, and then globalise it via partition of unity.
Thus, without loss of generality, one may assume that \(X\) is \(\ell_{1}\)-connected (indeed, contractible). The proof of the theorem is now complete. ## 4. Application to isometric immersions Our main result of this section is the following theorem, which generalises Theorem 0.10 in the Introduction §0 to arbitrary dimensions and codimensions. **Theorem 4.1**.: _Let \((X,g)\) be a Riemannian immersed submanifold of dimension \(N\geq 2\) in \(\mathbb{R}^{N+k}\). Consider a family \(\{\mathrm{I}\mathrm{I}^{\varepsilon}\}\subset L^{p}_{\mathrm{loc}}\left(X;\mathrm{Hom}\left(TX\otimes TX,T^{\perp}X\right)\right)\) of weak solutions to the Gauss-Codazzi-Ricci equations which converges to \(\overline{\mathrm{II}}\) in the weak-\(L^{p}_{\mathrm{loc}}\)-topology, where \(p\in\left]\frac{2N}{N+1},\infty\right[\). Suppose that the coexact parts of \(\{\mathrm{I}\mathrm{I}^{\varepsilon}\}\) are precompact in the strong \(L^{p^{\prime}}_{\mathrm{loc}}\)-topology. Then \(\overline{\mathrm{II}}\) is still a weak solution to the Gauss-Codazzi-Ricci equations._ As the main scope of the current paper is on wedge product theorems in compensated compactness theory, we refrain from giving a detailed exposition on isometric immersions and/or the Gauss-Codazzi-Ricci equations. Let us only point out that the Gauss-Codazzi-Ricci equations are compatibility PDEs of curvatures for the existence of isometric immersions. See do Carmo [26, Chapter 6] and Tenenblat [57] for the derivation of these equations, and the monograph [31] for histories and up-to-date developments of the isometric immersions problem. For a closed manifold \(X\), \[\text{coexact part of }\mathrm{I}\mathrm{I}^{\varepsilon}=\left(\mathbf{Id}-\mathrm{d}\Delta^{-1}\mathrm{d}^{*}\right)\mathrm{I}\mathrm{I}^{\varepsilon}, \tag{4.1}\] where \(\Delta^{-1}\) is defined with respect to any cohomology group.
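As a concrete illustration of (4.1) (our own addition, not part of the original text; we specialise to the flat torus \(\mathbb{T}^{N}\), where the harmonic part sits in the \(k=0\) mode), expand a \(1\)-form \(V=\sum_{k}\widehat{V}(k)\,e^{\mathrm{i}k\cdot x}\) in Fourier modes; the projection onto the coexact part then acts mode by mode:

```latex
\[
\widehat{\left(\mathbf{Id}-\mathrm{d}\Delta^{-1}\mathrm{d}^{*}\right)V}\,(k)
=\widehat{V}(k)-\frac{k\,\bigl(k\cdot\widehat{V}(k)\bigr)}{|k|^{2}}
\qquad (k\neq 0),
\]
```

with the \(k=0\) mode left unchanged. The right-hand side is orthogonal to \(k\) for every mode, so the projected form is annihilated by \(\mathrm{d}^{*}\), consistent with the divergence-free role it plays in the proof of Theorem 4.1.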
In passing we remark that, for any vectorfield \(V\in\Gamma(TX)\), the expression \(\left(\mathbf{Id}-\mathrm{d}\Delta^{-1}\mathrm{d}^{*}\right)V\) modulo musical isomorphisms between \(TX\) and \(T^{*}X\) is the well-known _Leray projection_ of \(V\). Proof of Theorem 4.1. Without loss of generality, we assume that \(X\) is compact and hence drop the subscripts "loc". Consider the structural equation \[\mathrm{d}\Omega^{\varepsilon}+\Omega^{\varepsilon}\wedge\Omega^{\varepsilon}=0. \tag{4.2}\] Here \(\{\Omega^{\varepsilon}\}\) is bounded in \(L^{p}\), as \[\Omega^{\varepsilon}=\begin{bmatrix}\nabla&\mathrm{I}\mathrm{I}^{\varepsilon}\\ -\mathrm{I}\mathrm{I}^{\varepsilon}&\nabla^{\perp,\varepsilon}\end{bmatrix}\] via the Cartan formalism. One may refer to Clelland [20], Tenenblat [57], as well as [14, 15] for details on the Cartan formalism applied to isometric immersions. By Rellich's lemma, \(\{\mathrm{d}\Omega^{\varepsilon}\}\) is bounded in \(W^{-1,p}\). In fact, we have a stronger mode of convergence -- consider the Hodge decomposition \(\Omega^{\varepsilon}=\mathrm{d}\Upsilon_{\varepsilon}+\Xi^{\varepsilon}\), where \(\Xi^{\varepsilon}\) is the divergence-free part that is assumed to be precompact in \(L^{p^{\prime}}\). So \(\{\mathrm{d}\Omega^{\varepsilon}\}=\{\mathrm{d}\Xi^{\varepsilon}\}\) is precompact in \(W^{-1,p^{\prime}}\). Now, from the generalised wedge product Theorem 0.4, we infer that \[\Omega^{\varepsilon}\wedge\Omega^{\varepsilon}\to\overline{\Omega}\wedge\overline{\Omega}\qquad\text{in the sense of distributions.}\] Here, as in §1.2, the wedge product is defined as follows: \[\Omega^{\varepsilon}\wedge\Omega^{\varepsilon}=\mathrm{d}\left(\Upsilon_{\varepsilon}\wedge\mathrm{d}\Upsilon_{\varepsilon}\right)+\Xi^{\varepsilon}\wedge\Omega^{\varepsilon}+\mathrm{d}\Upsilon_{\varepsilon}\wedge\Xi^{\varepsilon},\] and similarly for \(\overline{\Omega}\wedge\overline{\Omega}\).
The last two terms on the right-hand side are weak-strong pairings of \(L^{p}\) and \(L^{p^{\prime}}\). The first term is also well defined in the sense of distributions -- \(\{\Upsilon_{\varepsilon}\}\) is bounded in \(W^{1,p}\), hence is precompact in \(L^{p^{\star}-\delta}\) for arbitrary \(\delta>0\); recall that \(p^{\star}=\frac{Np}{N-p}\) is the Sobolev conjugate of \(p\). Thus \((\Upsilon_{\varepsilon}\wedge\mathrm{d}\Upsilon_{\varepsilon})\) is a weak-strong pairing provided that \(\frac{1}{p}+\frac{1}{p^{\star}}<1\). But this condition is equivalent to \(p>\frac{2N}{N+1}\). Note also that we are always in the subcritical regime in the sense of §1.3: this is because \(\frac{p}{2}<1+\frac{1}{N}\) whenever \(p>\frac{2N}{N+1}\). Therefore, we may pass to the distributional limits separately for \(\mathrm{d}\Omega^{\varepsilon}\) and \(\Omega^{\varepsilon}\wedge\Omega^{\varepsilon}\) in (4.2). In this way, one obtains \(\mathrm{d}\overline{\Omega}+\overline{\Omega}\wedge\overline{\Omega}=0\) in the sense of distributions. We actually proved that the wedge product sequence \(\{\Omega^{\varepsilon}\wedge\Omega^{\varepsilon}\}\) converges in a stronger topology than \(\mathcal{D}^{\prime}\); that is, in the negative Sobolev space \(W^{-1,p}\). **Remark 4.2**.: _The above theorem and its proof distinguish for the Gauss-Codazzi-Ricci equations one critical exponent \(p_{\mathbf{crit}}:=\frac{2N}{N+1}\) other than the obvious exponent \(p_{\mathbf{CS}}=2\). (The latter is critical due to the Cauchy-Schwarz inequality.) Observe that_ \[p_{\mathbf{crit}}=\frac{2N}{N+1}<p\leq p_{\mathbf{CS}}=2\leq p^{\prime}<\frac{2N}{N-1}=p_{\mathbf{crit}}^{\star}\equiv p_{\mathbf{crit}}^{\prime}. \tag{4.3}\] ## Appendix A. A lemma on weak-weak pairing We have the following variant of [10, Lemma 2.11] by Briane-Casado-Diaz-Murat: **Lemma A.1**.: _Assume that \(1<r<\infty\), and let \(E\) be a locally compact Hausdorff topological space.
Assume that \(\{u_{n}\}\subset L_{\mathrm{loc}}^{r}(E;\mathbb{C})\) and \(\{u_{n}^{\prime}\}\subset L_{\mathrm{loc}}^{r^{\prime}}(E;\mathbb{C})\) satisfy \(|u_{n}-u|^{r}\rightharpoonup\lambda\) and \(|u_{n}^{\prime}-u^{\prime}|^{r^{\prime}}\rightharpoonup\lambda^{\prime}\), both weakly-\(\star\) in \(\mathcal{M}_{\mathrm{loc}}(E;\mathbb{C})\). Then, for each quadratic polynomial \(\mathbf{q}\), there exists a Radon measure \(\varpi\in\mathcal{M}_{\mathrm{loc}}(E;\mathbb{C})\) such that_ \[\mathbf{q}\left(u_{n},u_{n}^{\prime}\right)\rightharpoonup\mathbf{q}\left(u,u^{\prime}\right)+\varpi\qquad\text{weakly-}\star\text{ on }E,\] _with the total variation of \(\varpi\) satisfying the bound_ \[|\varpi|(B)\lesssim_{\mathbf{q}}[\lambda(B)]^{\frac{1}{r}}\left[\lambda^{\prime}(B)\right]^{\frac{1}{r^{\prime}}}\qquad\text{ for any }B\subset E\text{ Borel}.\] ## Appendix B. Endpoint elliptic estimates The following result is a variant of [8, Theorem 3.1] by Brezis-Van Schaftingen and its generalisation to Radon measures on Euclidean balls in [10, Proposition B.1, Appendix B]: **Lemma B.1**.: _Let \(\left(X^{N},g\right)\) be a closed Riemannian manifold, and let \(\xi\in L^{1}\left(X,\bigwedge^{\ell}T^{*}X\right)\) be a differential form of \(L^{1}\)-regularity. Then we can solve for \(\sigma\in W^{1,N^{\prime}}\left(X,\bigwedge^{\ell}T^{*}X\right)\) from \(\Delta\sigma=\xi\) on \(X\), where \(\Delta=\mathrm{d}^{*}\mathrm{d}+\mathrm{d}\mathrm{d}^{*}\) is the Laplace-Beltrami operator. The solution is unique modulo harmonic \(\ell\)-forms, and one has_ \[\inf\left\{\left\|\mathrm{d}\sigma+h\right\|_{L^{N^{\prime}}}:\,h\text{ is a harmonic form}\right\}\lesssim_{(X,g)}\left\|\xi\right\|_{L^{1}}+\left\|\mathrm{d}^{*}\xi\right\|_{W^{-2,N^{\prime}}}. \tag{B.1}\] _The same result holds for \(\xi\in\mathcal{M}\left(X,\bigwedge^{\ell}T^{*}X\right)\), with the right-hand side of (B.1) replaced by \(\|\xi\|_{\mathcal{M}(X)}\), namely the total variation norm of \(\xi\).
Recall that \(N^{\prime}=\frac{N}{N-1}\)._ Note that [8, Theorem 3.1] is originally formulated as an \(L^{1}\)-estimate under the Dirichlet boundary condition on Euclidean domains. By a routine partition-of-unity argument one can generalise it to a closed manifold \((X,g)\), but the uniqueness of the solution can only be retained modulo harmonic \(\ell\)-forms, which are nontrivial in general on \(X\). **Acknowledgement**. The research of SL is supported by NSFC (National Natural Science Foundation of China) Project #12201399, the SJTU-UCL joint seed fund WH610160507/067, and the Shanghai Frontier Research Institute for Modern Analysis. I am indebted to Professor Armin Schikorra, from whom I learned a lot about Hodge decomposition and compensated compactness over time, and I thank Professor Gui-Qiang Chen for his unwavering support. I also thank New York University-Shanghai for providing an excellent working atmosphere during my adjunct professorship and visiting scholarship.
2306.04720
Intrinsic antiferromagnetic multimeronic Néel spin-textures in ultrathin films
The realization of topological antiferromagnetic (AFM) solitons in real materials is a major goal towards their use in information technology. While they bear various advantages with respect to their ferromagnetic cousins, their observation is scarce. Utilizing first-principles simulations, here we predict new chiral particles in the realm of AFM topological magnetism, frustrated multimeronic spin-textures hosted by a Néel magnetic state, arising in single Mn layers directly grown on Ir(111) surface or interfaced with Pd-based films. These topological structures are intrinsic, i.e. they form in a single AFM material, can carry distinct topological charges and can combine in various multimeronic sequences with enhanced stability against external magnetic fields. We envision the frustrated Néel AFM multimerons as exciting highly-sought after AFM solitons having the potential to be utilized in novel spintronic devices hinging on non-synthetic AFM quantum materials.
Amal Aldarawsheh, Moritz Sallermann, Muayad Abusaa, Samir Lounis
2023-06-07T18:34:10Z
http://arxiv.org/abs/2306.04720v1
# Intrinsic Néel antiferromagnetic multimeronic spin-textures in ultrathin films ###### Abstract The realization of topological antiferromagnetic (AFM) solitons in real materials is a major goal towards their use in information technology. While they bear various advantages with respect to their ferromagnetic cousins, their observation is scarce. Utilizing first-principles simulations, here we predict new chiral particles in the realm of AFM topological magnetism: frustrated multimeronic spin-textures hosted by a Néel magnetic state, arising in single Mn layers directly grown on an Ir(111) surface or interfaced with Pd-based films. These topological structures are intrinsic, i.e. they form in a single AFM material, can carry distinct topological charges and can combine in various multimeronic sequences with enhanced stability against external magnetic fields. We envision the frustrated Néel AFM multimerons as exciting, highly sought-after AFM solitons with the potential to be utilized in novel spintronic devices hinging on non-synthetic AFM quantum materials. ## Introduction Recent experimental breakthroughs promoted antiferromagnetic (AFM) materials into the realm of information-technology applications [1, 2, 3, 4, 5, 6, 7, 8, 9] and triggered state-of-the-art activities in the world of topological magnetism [10, 11, 12, 13, 7]. The antiparallel spin sublattices present in AFM materials result in zero dipolar fields, making these materials insensitive to magnetic field perturbations and enhancing the stabilization of nanoscale topological structures [14, 15, 16, 4, 9, 10]. Moreover, AFM materials possess spin dynamics faster than those of ferromagnets by orders of magnitude [17, 18, 19, 20], which is an appealing characteristic for THz magnetic memory and logic devices.
The race to identify non-trivial AFM spin-swirling objects is strongly motivated by their particle-like nature, which could realize ideal magnetic bits, together with the low power consumption [14, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34] involved in their manipulation, the possibility of controlling their current-driven motion [22, 28, 25, 26, 14, 16, 23], and the absence of the skyrmion Hall effect that plagues their ferromagnetic (FM) cousins [35, 36, 37, 38, 39]. This led to the recent discovery of synthetic AFM skyrmions, which consist of two FM skyrmions realized in two distinct magnetic layers and antiferromagnetically coupled through a non-magnetic spacer layer [7, 10, 11, 12, 13]. Here, utilizing first-principles simulations in conjunction with atomistic spin dynamics, we unveil multimeronic textures, a new type of topological AFM particle, which are non-synthetic and emerge in magnetically frustrated thin films (see Fig. 1). Regular FM merons are in-plane magnetized textures whose magnetization curls around a stable core pointing out-of-plane, and they are topologically equivalent to one half of a skyrmion [40, 41, 42, 43, 44, 45, 46, 47, 48, 49]. The meronic topological charge \(N=\frac{1}{4\pi}\int\mathbf{n}\cdot\left(\partial_{x}\mathbf{n}\times\partial_{y}\mathbf{n}\right)\mathrm{d}x\,\mathrm{d}y\) equals \(\pm\frac{1}{2}\), where \(\mathbf{n}\) is the direction vector of the magnetization. While merons have been observed experimentally in FM bulk [50] and thin films [45, 51], on the AFM side they emerge in synthetic [42, 44, 52, 53] and intrinsic bulk (thick-film) phases [54, 55, 31], following a large body of phenomenology-based simulations [32, 33, 34, 30, 35]. However, a pristine ultrathin-film material that hosts AFM merons remains elusive. Our multimeronic textures are distinct from current predictions: they form, in a realistic material, a rich set of combinations materializing in a frustrated in-plane Néel ground state, shown in Fig. 1j, which can be decomposed into three FM sublattices with an opening angle of 120\({}^{\circ}\) between their respective magnetic moments.
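The half-integer charge of a single meron can be checked numerically. The following minimal sketch is our own illustration (the grid size, the linear profile \(\theta(r)\) and the finite-difference discretization are arbitrary choices, not taken from the paper): it builds a vortex texture with winding \(w=+1\) and core polarity \(p=+1\) and evaluates the charge integral above, which should come out close to \(wp/2=+1/2\).

```python
import numpy as np

def meron_texture(L=201, R=1.0):
    """Vortex (w = +1) with core polarity p = +1: the polar angle theta
    rises linearly from 0 at the core to pi/2 at radius R (in-plane rim)."""
    x = np.linspace(-R, R, L)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    theta = np.clip(r / R, 0.0, 1.0) * (np.pi / 2)   # illustrative ansatz
    n = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                     # shape (3, L, L), |n| = 1
    return n, x[1] - x[0]

def topological_charge(n, h):
    """N = (1/4pi) * integral of n . (d_x n x d_y n), central differences."""
    dnx = np.gradient(n, h, axis=1)
    dny = np.gradient(n, h, axis=2)
    density = np.einsum("kij,kij->ij", n, np.cross(dnx, dny, axis=0))
    return density.sum() * h * h / (4.0 * np.pi)

n, h = meron_texture()
N = topological_charge(n, h)
print(f"N = {N:.3f}")   # close to wp/2 = +0.5
```

Flipping the core polarity (\(\theta\to\pi-\theta\)) or the winding (\(\varphi\to-\varphi\)) reverses the sign, reproducing the \(\pm 1/2\) charges quoted above.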
We predict a single Mn layer to be a universal hosting material once interfaced in different fashions with the Ir(111) surface: with and without Pd or Fe monolayers, a PdFe bilayer, or a Pd\({}_{2}\)Fe trilayer (see Fig. 2a-d). These substrates form a typical family known to host FM [57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67] and AFM skyrmions [9, 16]. The in-plane AFM Néel state is the ground state of the Mn layer in all magnetic systems, formed as a result of magnetic frustration caused by the strong AFM exchange coupling among the first nearest neighbors, as illustrated in Supplementary Fig. 1. The in-plane orientation is dictated by the \(z\) component of the antisymmetric exchange interactions (Dzyaloshinskii-Moriya interactions - DMI) and is further reinforced by the in-plane magnetic anisotropy energy (MAE) \(K\). ## Results ### Topological magnetic states in the frustrated Mn layer. The Ir(111) substrate forms a triangular lattice, on which we deposit layers of Mn, PdMn, MnPdFe and MnPd\({}_{2}\)Fe and perform atomistic spin dynamics [68], minimizing the Heisenberg Hamiltonian (Eq. 1 in the Methods section) equipped with the magnetic interactions derived from first principles (see Methods section). We identify a plethora of AFM Néel meronic magnetic states forming metastable states in the Mn layer, as depicted in Fig. 1a, Fig. 2g, h, and Supplementary Fig. 2. The Néel ordering of the spins is the ground state of the Mn layer in all of the aforementioned magnetic systems. The associated critical temperatures range from 130 K for PdMn bilayers to about 600 K or more for the rest of the explored Mn-based films. The spins forming the AFM Néel order are segmented into three sublattices L1, L2 and L3, each hosting an FM spin alignment (Fig. 1j). At each sublattice, an FM meronic pair can be stabilized, so in total, in the case of a single AFM Néel meronic pair (Fig. 1a), we have six FM merons (antimerons), as shown in Fig.
1d-i, which we refer to as a hexameronic state. By zooming in on the two spin-swirling extremities of the hexameron (Figs. 1b-c) and their respective sublattice decompositions (Figs. 1d-i), we identify a vortex (Fig. 1d) and an antivortex (Fig. 1h) whose cores reside on an Mn lattice site, around which the spins of the remaining meronic textures precess, as dictated by the magnetic frustration induced by the underlying AFM magnetic interactions. Each of the FM building blocks of our explored AFM solitons holds a topological charge N = wp/2 [40], where w = +1 (-1) for the vortex (antivortex) is the winding number describing the rotational direction of the in-plane magnetization, and p is the polarity, which is +1 (-1) when the out-of-plane magnetization of the core points up (down) [69]. Since the merons and antimerons carry a topological charge of -1/2 and +1/2, respectively [31, 45, 51], the sublattice charge N\({}_{L}\) is either -1 (+1) for a meron-meron (antimeron-antimeron) pair, as in the case of L3 (Fig. 1 f, i), or 0 for a hybrid meron-antimeron pair (see L1 and L2 in Fig. 1 d, e, g, h). Summing up the sublattice charges, the total charge N\({}_{t}\) of a hexameron can take three possible values, -1, 0 and +1 (see Fig. 1k), which interestingly are energetically degenerate in the absence of an external magnetic field. Besides the hexameronic frustrated AFM Néel state, we identified a rich set of other meronic textures, such as the dodecameron, hosting 12 merons, shown in Fig. 2g. Further examples of complex multimerons are presented in Supplementary Fig. 2. Similarly to the purely FM counterparts, in confined geometries (see Supplementary Fig. 2b-c) a "single" AFM Néel meronic state can be stabilized. This object is a trimeron resulting from three frustrated merons with overlapping cores, carrying in total a half-integer topological charge. ### Stability against external magnetic fields.
The investigation of the response of topologically paired AFM Neel meronic pairs to external magnetic fields is important to inspect stability aspects and to fingerprint subsequent potential non-trivial topological transitions. The frustrated meronic textures survive extremely high in-plane magnetic fields (\(>200\) Tesla). The case of an out-of-plane (OOP) magnetic field shows a rather rich impact on the explored spin-textures. Therefore, here, we scrutinize the latter scenario in detail by focusing on three different AFM Neel meronic states (see Figs. 2 f-h). As a prototypical chiral magnetic object, we consider the hexameron emerging either in the AFM Neel (Fig. 2f) or in the spiraling AFM Neel states (Fig. 2h), as well as the dodecameron (Fig. 2g). For interfaces hosting the Fe layer, MnPdFe/Ir(111) and MnPd\({}_{2}\)Fe/Ir(111), we examined both cases: switching off (solid bars in Fig. 2e) and on (dashed bars in Fig. 2e) the Mn-Fe magnetic interactions. A snapshot of the Mn hexameron interfaced with ferromagnetic Fe spirals and a skyrmion is illustrated in Fig. 2i. While we were expecting the robustness of the unveiled meronic textures against external magnetic fields, we were intrigued by the annihilation of some hexamerons emerging in an AFM Neel background at experimentally accessible OOP fields, e.g. 10 Tesla, in contrast to dodecamerons and hexamerons arising in a Neel spiraling state (red and green bars in Fig. 2e). To get insight into the origin of the sensitivity of these magnetic states, i.e. hexamerons forming in an AFM Neel background, we scrutinize the sublattice topological distribution along with the spin orientation at each sublattice of the different hexameronic states shown in Fig. 3 (see Supplementary Fig. 3 illustrating snapshots of the different hexamerons). As introduced earlier, there is a quadruple degeneracy for each hexameron in the absence of a magnetic field. The four states, denoted Hexa A-D and illustrated in Fig. 
3, can be distinguished by the vortex nature of their core constituents and the orientation of the core spins (see Fig. 1k). A finite OOP field partially lifts the degeneracy and favors the hexameron, here Hexa D, with most spins pointing along the field direction (see also Supplementary Fig. 3). Among the four hexamerons, Hexa D will be the most robust to the applied field and therefore survives gigantic fields. The remaining hexamerons at some point experience magnetization switching to reach the optimal sublattice topological distribution defined by Hexa D. This requires flipping the spins of at least one meron (antimeron), implying a topological charge transition, a non-trivial process during which the AFM meronic structure might encounter an unstable spin distribution: the AFM vortex and antivortex start rolling towards each other and then collapse at a rather low magnetic field, annihilating the AFM meronic structure. If the transition occurs, however, the new magnetic state would be capable of surviving large magnetic fields, similarly to Hexa D. However, the presence of Neel spirals in the background or of additional pairs of AFM meronic textures (leading, for example, to dodecamerons) prevents the formation of unstable states within the topological transition induced by the magnetic field, which would otherwise lead to the collapse of the frustrated soliton. Effectively, a barrier is provided that enables the rearrangement of the spins into the desired topological state, which can then withstand immense magnetic fields. ### Emergence mechanism. We have identified that the formation of our frustrated AFM Neel meronic spin textures requires a strong AFM exchange coupling among the first nearest-neighbor atoms, \(J_{1}\) (see Supplementary Fig. 1a-d). This coupling is responsible for the AFM Neel order of the spins, and it is through magnetic frustration that these solitons may arise. 
Additionally, another magnetic interaction is required to align the spins in the in-plane direction. This interaction can be provided by the in-plane MAE, \(K<0\), as observed in Mn/Ir(111), while for the other three magnetic systems studied, \(K\) prefers an out-of-plane orientation of the spins (Supplementary Fig. 1e). However, the \(z\) component of the DMI vector (\(D_{z}\)) plays a crucial role in aligning the spins in-plane, ultimately leading to the emergence of the AFM Neel meronic textures. In conclusion, to obtain our AFM solitons on a triangular lattice, an AFM \(J_{1}\) is required, along with either a finite \(D_{z}\) or an in-plane \(K\). To explore the fundamental mechanisms defining the stability of the spin-textures, we built a minimal spin model that contains only an AFM \(J_{1}\) and \(D_{z}\), since the latter played the main role in stabilizing the meronic textures in the four investigated Mn-based interfaces. The resulting phase diagram is shown in Fig. 4a. While the ground state would have been a pure Neel state without \(D_{z}\), the latter quickly enables the formation of frustrated merons. Increasing \(D_{z}\) enforces a stronger in-plane alignment of the spins, which reduces the size of the meronic constituents (Fig. 4b and Supplementary Fig. 4). Clearly, the size of the merons is dictated by a competition between magnetic exchange and DMI. Keeping \(D_{z}\) fixed while increasing the AFM \(J_{1}\) counteracts the effect of the DMI and enlarges the meron core (Fig. 4c). Fig. 4d presents the critical OOP magnetic field at which the meronic texture, here Hexa D similar to that shown in Fig. 3, is annihilated, as a function of the OOP DMI component, both normalized by the nearest-neighbor AFM exchange interaction. The obtained curve follows a quadratic dependence, highlighting that the DMI enhances the stability of the frustrated merons. 
In fact, the application of an OOP magnetic field counteracts the influence of the OOP DMI component by tilting the spins towards the OOP direction, disrupting the in-plane alignment of the spins imposed by the OOP DMI component throughout the surrounding area, including the region spanning between the extremities of the hexameron, and ultimately leading to its collapse. Consequently, the larger the OOP DMI component (the smaller the meronic cores), the larger the critical field required to destroy the AFM spin-swirling textures. ## Discussion Our ab-initio simulations uncovered non-synthetic Neel-frustrated AFM meronic textures emerging in a realistic set of materials and interfaces. The newly unveiled magnetic objects are hosted by a triangular Mn layer interfaced with an Ir(111) surface, either directly, with a Pd overlayer, or separated from Ir by a PdFe bilayer or a Pd\({}_{2}\)Fe trilayer, all of which represent substrates that can readily be grown experimentally. The frustrated AFM states form hexamerons, composed of three FM meronic pairs, each located at one of the three FM sublattices building up the AFM Neel background. Other solitons can emerge, such as dodecamerons (12 merons), while confined geometries enable the stabilization of a frustrated trimeron. We have observed that these AFM Neel meronic solitons survive high values of magnetic fields if the majority of the spins align in the direction of the OOP magnetic field. Otherwise, a transition of the sublattice topological charge occurs, leading to the potential annihilation of the AFM solitons at experimentally accessible values of magnetic fields. To gain a better understanding of the characteristics of these AFM solitons, we provided a spin model that outlines the minimum set of magnetic interactions necessary to generate the detected AFM solitons. Identifying new AFM solitons with a realistic existence scenario is at the heart of AFM topological magnetism. 
Our predictions can initiate the experimental discovery of the intriguing intrinsic frustrated multimeronic textures, which can unfold in various topological sequences. It remains to be explored how such spin states can be implemented and designed in AFM spintronic devices. Certainly, the thin films being proposed provide a solid platform for AFM meronic textures with a potential impact in information technology. ## Methods In this study, we conducted a systematic investigation to explore the magnetic structures that can be hosted by the magnetic layers of our four layered systems. Our approach involved a three-fold procedure, combining ab-initio calculations with atomistic spin dynamics. The details of this procedure are outlined below. ### Ab-initio calculations. To simulate the magnetic properties of our magnetic layers, in a first step we utilized the Quantum ESPRESSO computational package [70]. The calculations employed projector augmented wave pseudopotentials sourced from the PS Library [71], and the self-consistent calculations were performed with a k-mesh of \(28\times 28\times 1\) points for the unit cell. The layers were arranged in an fcc-stacked configuration along the [111] direction (see Fig. 2a-d). The relaxation parameters were then extracted, revealing the relaxation percentages of the different layers in relation to the ideal interlayer distance in the Ir-based systems. Specifically, for **Mn/Ir(111)**, the relaxation percentages were 2.3% and -3.4%; for **PdMn/Ir(111)**, they were 8.6%, 10.3%, and -2.3%; for **MnPdFe/Ir(111)**, the percentages were 4%, 5.2%, 8.1%, and -1%; and for **MnPd\({}_{2}\)Fe/Ir(111)**, they were 5.9%, -4%, 8.2%, 8.2%, and -0.7%, for each layer respectively. Here, positive (negative) values indicate atomic relaxations towards (away from) the Ir surface. 
After establishing the geometries of the various magnetic systems, we conducted in a second step a detailed investigation of their magnetic properties and interactions using the all-electron full-potential relativistic Korringa-Kohn-Rostoker (KKR) Green function method, implemented in the JuKKR computational package [72, 73], in the local spin density approximation. Each of the four magnetic systems consists of a slab with 30 layers. In the case of **Mn/Ir(111)**, the slab consists of 5 vacuum + 1 Mn + 20 Ir + 4 vacuum layers. For **PdMn/Ir(111)**, the slab comprises 4 vacuum + 1 Pd + 1 Mn + 20 Ir + 4 vacuum layers. In the case of **MnPdFe/Ir(111)**, the slab includes 3 vacuum + 1 Mn + 1 Pd + 1 Fe + 20 Ir + 4 vacuum layers. Lastly, for **MnPd\({}_{2}\)Fe/Ir(111)**, the slab is composed of 2 vacuum + 1 Mn + 2 Pd + 1 Fe + 20 Ir + 4 vacuum layers. To perform the calculations, the momentum expansion of the Green function was truncated at \(\ell_{\text{max}}=3\). Self-consistent calculations were conducted using a k-mesh of \(30\times 30\times 1\) points. The energy contour consisted of 23 complex energy points in the upper complex plane, and it incorporated 9 Matsubara poles. To extract the Heisenberg exchange interactions and Dzyaloshinskii-Moriya (DM) vectors, we employed the infinitesimal rotation method [74, 75]. For this extraction, we used a finer k-mesh of \(200\times 200\times 1\) points. ### Hamiltonian Model and atomistic spin dynamics. After extracting the magnetic parameters for our magnetic atoms from first-principles, we employ the Landau-Lifshitz-Gilbert (LLG) equation implemented in the Spirit code [68] to explore the magnetic properties and complex states. This exploration involves minimizing the two-dimensional Heisenberg Hamiltonian on a triangular lattice. The Hamiltonian comprises several terms: Heisenberg exchange coupling, Dzyaloshinskii-Moriya interaction (DMI), magnetic anisotropy energy, and a Zeeman term. 
The energy functional of the system can be described as follows: \[H=H_{\text{Exchange}}+H_{\text{DMI}}+H_{\text{Anisotropy}}+H_{\text{Zeeman}}, \tag{1}\] with \[H_{\text{Exchange}}=-\sum_{<i,j>}J_{ij}^{\text{Mn-Mn}}\,\mathbf{S }_{i}\cdot\mathbf{S}_{j}-\sum_{<i,j>}J_{ij}^{\text{Fe-Mn}}\,\mathbf{S}_{i}\cdot \mathbf{S}_{j}-\sum_{<i,j>}J_{ij}^{\text{Fe-Fe}}\,\mathbf{S}_{i}\cdot\mathbf{S }_{j},\] \[H_{\text{DMI}}=-\sum_{<i,j>}\mathbf{D}_{ij}^{\text{Mn-Mn}}\cdot[ \mathbf{S}_{i}\times\mathbf{S}_{j}]-\sum_{<i,j>}\mathbf{D}_{ij}^{\text{Fe-Mn}} \cdot[\mathbf{S}_{i}\times\mathbf{S}_{j}]-\sum_{<i,j>}\mathbf{D}_{ij}^{\text{Fe -Fe}}\cdot[\mathbf{S}_{i}\times\mathbf{S}_{j}],\] \[H_{\text{Anisotropy}}=-K^{\text{Mn}}\sum_{i}(\mathbf{S}_{i} \cdot\mathbf{e}_{i})^{2}-K^{\text{Fe}}\sum_{i}(\mathbf{S}_{i}\cdot\mathbf{e} _{i})^{2},\] \[H_{\text{Zeeman}}=-\sum_{i}\mu_{i}\mathbf{B}\cdot\mathbf{S}_{i},\] where we assign indices \(i\) and \(j\) to denote specific sites, each associated with a magnetic moment. The magnetic moment is represented by the unit vector \(\mathbf{S}\). The Heisenberg exchange coupling strength \(J_{ij}^{\text{X-Y}}\) describes the interaction between an atom X on site \(i\) and an atom Y on site \(j\), where a negative value indicates AFM interaction. Similarly, we use the notation **D** for the Dzyaloshinskii-Moriya interaction vector, \(K\) for the magnetic anisotropy energy, and \(\mu_{i}\mathbf{B}\) to represent the Zeeman coupling to the atomic spin moment \(\mu\) at site \(i\). It is important to note that the Fe-Mn and Fe-Fe interactions are only considered in the **MnPdFe/Ir(111)** and **MnPd\({}_{2}\)Fe/Ir(111)** systems. For our spin atomistic simulations, we adopt both periodic and finite boundary conditions to model the extended and confined two-dimensional system, respectively, with cells containing \(249^{2}\), \(300^{2}\), \(390^{2}\) sites. 
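To make the structure of Eq. (1) concrete, the sketch below evaluates a single-species toy version of this Hamiltonian for unit spins on a small periodic triangular lattice: nearest-neighbor exchange, a DMI reduced to its \(z\) component (in the spirit of the minimal model discussed above), an easy axis along \(z\), and a Zeeman term. All coupling values are placeholders rather than the first-principles parameters; the actual simulations minimize the full multi-species Hamiltonian with the Spirit code.

```python
import numpy as np

def total_energy(spins, J=-1.0, Dz=0.2, K=0.1, B=(0.0, 0.0, 0.0), mu=1.0):
    """Energy of a single-species toy version of Eq. (1) for unit spins on an
    L x L periodic triangular lattice (spins has shape (L, L, 3)).
    In the -J S_i.S_j convention used here, J < 0 is antiferromagnetic."""
    B = np.asarray(B, dtype=float)
    E_ex = E_dmi = 0.0
    # three forward bond directions of the triangular lattice, so that every
    # nearest-neighbor pair <i,j> is counted exactly once
    for dx, dy in [(1, 0), (0, 1), (-1, 1)]:
        Sj = np.roll(np.roll(spins, -dx, axis=0), -dy, axis=1)
        E_ex += -J * np.sum(spins * Sj)                      # -J sum_<ij> S_i . S_j
        E_dmi += -Dz * np.sum(np.cross(spins, Sj)[..., 2])   # only the z-DMI kept (sketch)
    E_ani = -K * np.sum(spins[..., 2] ** 2)                  # easy axis taken along z
    E_zee = -mu * np.sum(spins @ B)                          # Zeeman term
    return E_ex + E_dmi + E_ani + E_zee
```

For a fully ferromagnetic configuration the DMI term vanishes (the cross product of parallel spins is zero), and with \(J<0\) the exchange energy is positive, reflecting that AFM coupling penalizes parallel alignment.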
### Data availability The data needed to evaluate the conclusions in the paper are present in the paper and the Supplementary Information. ### Code availability We used the following codes: Quantum ESPRESSO which can be found at [https://www.quantum-espresso.org/download](https://www.quantum-espresso.org/download), SPIRIT can be found at [https://github.com/spirit-code/spirit](https://github.com/spirit-code/spirit), and the KKR code is a rather complex ab-initio DFT-based code, which is in general impossible to use without proper training on the theory behind it and on the practical utilization of the code. We are happy to provide the latter code upon request. ## Acknowledgements We acknowledge fruitful discussions with Nihad Abuawwad. This work was supported by the Federal Ministry of Education and Research of Germany in the framework of the Palestinian-German Science Bridge (BMBF grant number 01DH16027) and the Deutsche Forschungsgemeinschaft (DFG) through SPP 2137 "Skyrmionics" (Project LO 1659/8-1). The authors gratefully acknowledge the computing time granted through JARA on the supercomputer JURECA at Forschungszentrum Julich. S.L. initiated, designed and supervised the project. A.A. performed the simulations and post processed the data. A.A., M.S., M.A., and S.L. discussed the results. A.A. and S.L. wrote the manuscript to which all co-authors contributed. The authors declare no competing interests. Correspondence and requests for materials should be addressed to A.A. (email: [email protected]) or to S.L. (email: [email protected]).
2310.08824
Confounding-Robust Policy Improvement with Human-AI Teams
Human-AI collaboration has the potential to transform various domains by leveraging the complementary strengths of human experts and Artificial Intelligence (AI) systems. However, unobserved confounding can undermine the effectiveness of this collaboration, leading to biased and unreliable outcomes. In this paper, we propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM). Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden. We present a deferral collaboration framework for incorporating the MSM into policy learning from observational data, enabling the system to control for the influence of unobserved confounding factors. In addition, we propose a personalized deferral collaboration system to leverage the diverse expertise of different human decision-makers. By adjusting for potential biases, our proposed solution enhances the robustness and reliability of collaborative outcomes. The empirical and theoretical analyses demonstrate the efficacy of our approach in mitigating unobserved confounding and improving the overall performance of human-AI collaborations.
Ruijiang Gao, Mingzhang Yin
2023-10-13T02:39:52Z
http://arxiv.org/abs/2310.08824v1
# Confounding-Robust Policy Improvement with Human-AI Teams ###### Abstract Human-AI collaboration has the potential to transform various domains by leveraging the complementary strengths of human experts and Artificial Intelligence (AI) systems. However, unobserved confounding can undermine the effectiveness of this collaboration, leading to biased and unreliable outcomes. In this paper, we propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM). Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden. We present a deferral collaboration framework for incorporating the MSM into policy learning from observational data, enabling the system to control for the influence of unobserved confounding factors. In addition, we propose a personalized deferral collaboration system to leverage the diverse expertise of different human decision-makers. By adjusting for potential biases, our proposed solution enhances the robustness and reliability of collaborative outcomes. The empirical and theoretical analyses demonstrate the efficacy of our approach in mitigating unobserved confounding and improving the overall performance of human-AI collaborations. ## 1 Introduction In recent years, policy learning has emerged as a powerful tool for learning and optimizing decision-making policies across a diverse range of applications, including healthcare, finance, and marketing. One of the most promising avenues for leveraging machine learning is policy learning on observational data [1], which aims to infer optimal decision rules from historical data without the need for costly randomized experiments. Observational data, generated from real-world systems, is abundant and easily accessible, making it an attractive source for training models that can guide policy decisions. 
Many algorithms have been proposed for efficient policy learning from observational data [2, 3, 4, 5], usually under the disputable _unconfoundedness_ assumption. It assumes there are no hidden confounders that simultaneously influence both the treatment assignment and individual outcomes [6]. This assumption is defensible in certain domains such as automated recommendation [7] or pricing systems [8], where we have full control of the historical algorithm, but may never hold true for domains where the observational data is generated by human decision-makers. Consider a healthcare scenario, where observational data is generated by human experts in the form of electronic health records (EHRs). These records contain a wealth of information about patients' medical histories, treatments, and outcomes, which can be used to inform policy learning for personalized medical interventions. However, human experts, such as physicians, may seek additional information when making decisions about patient care, such as the patient's lifestyle, mental well-being, or other contextual factors like bedside information that might influence their decision-making process. This additional information, although crucial for decision-making, may not be systematically recorded in the EHRs, leading to potential confounding issues in the observational data. In this case, the unobserved confounding factors can result in biased estimates of optimal actions and reduce the reliability of learned policies. For example, if a physician prescribes a certain medication to patients with a specific lifestyle that is not recorded in the EHRs, the observed treatment might be confounded by the unmeasured lifestyle factor. To address the confounding problem, [5, 9] propose to use the marginal sensitivity model (MSM) to bound the possible value of the true propensity score by the degree of potential confounding in the historical data. 
In doing so, policy learning is formulated as a constrained optimization problem, which can be efficiently solved using observational data together with a specified confounding strength factor. In this paper, we propose a human-AI collaboration system to improve a purely algorithmic approach by incorporating human evaluations. Certain tasks in the proposed human-AI system are executed by human experts while others are handled by the algorithm. Considering that the historical data in our motivating example are all generated by human decision-makers, an AI-only algorithm is likely to be inferior to humans in cases where external information, such as patients' lifestyles, is necessary for optimal decision-making. The benefit of human involvement stands out in the confounding setting, as human decision-makers are adept at making choices based on (unobserved) confounding factors [10]. We refer to this problem as _deferral collaboration under unobserved confounding_ [11]. By adopting a human-AI collaborative approach, we can alleviate the impact that these unobserved confounders have in the usual deferral collaboration. In addition, human experts can provide the AI system with insights about the additional information they consider during decision-making which may not be explicitly recorded in the data. The AI system can incorporate this knowledge into the learning process, allowing for a more accurate estimation of optimal actions in policy learning. This collaborative framework ensures that the learned policies better account for the missing confounders and yield more reliable and robust decision-making in practice. We make the following contributions in the paper: We propose a novel algorithm for the problem of _deferral collaboration under unobserved confounding_, where our algorithm works under a data-driven uncertainty set over the nominal propensity scores. 
The proposed algorithm leverages human decision-makers, who have the capacity to acquire additional unrecorded information to aid their decision-making, and a trained algorithmic policy. It offers policy improvement over a baseline policy based only on the available features. In addition, we propose a personalized variant of our algorithm that can route each instance to a specific human decision-maker by exploiting the diverse expertise of humans. We theoretically and empirically examine the efficacy of the proposed method. ## 2 Related Work We consider human-AI collaboration as a decision-making problem, in contrast to the predictive problem mostly considered by extant human-AI systems. Related to policy learning on observational data, we relax the untestable unconfoundedness assumption using sensitivity analysis from causal inference. The proposed human-AI algorithm is related to several threads of literature. **Policy Learning from Observational Data with Unconfoundedness.** The challenge of deducing an optimal personalized policy from offline observational data has been extensively explored in various domains, including e-commerce, contextual pricing, and personalized medicine [12, 13, 14, 15, 16, 17, 18]. The majority of these studies assume that historical data were generated by a previous decision-maker, focusing on estimating treatment effects or determining an optimal algorithmic policy using proposed estimators and specific policy classes. Most research in this domain has not considered or developed learning algorithms for scenarios that could benefit from a combined human-AI team to enhance decision performance. **Sensitivity Analysis.** Sensitivity analysis is applied to evaluate the assumptions that are not testable from the data in causal inference. Originating from the debate over smoking's effect on lung cancer [19, 20], sensitivity analysis is often used to check the robustness of a causal conclusion under potentially unmeasured confounding. 
One direction of sensitivity analysis is to compute the influence of a latent confounding variable on the treatment and outcome variables using parametric models [21, 22, 23]. Another framework models the confounding effect on the treatment assignment nonparametrically. The seminal Rosenbaum sensitivity model assumes the odds ratio of the treatment probability is bounded in an assumed range for a pair of units in propensity score matching [24, 25]. The MSM generalizes this idea beyond matching algorithms. It assumes the odds ratio between a propensity score conditional on the observed variables and the true propensity score conditional on all the confounding variables is bounded [26, 27]. Due to its interpretability and minimal model assumptions, the MSM is widely adopted for heterogeneous treatment effect estimation [28, 29, 30], robust optimization [31, 32], policy evaluation [33], policy learning under unmeasured confounding [9], and selection bias [34]. In this paper, we adopt the MSM to quantify the deviation from unconfoundedness in the context of human-AI collaboration. **Human-AI Collaboration.** Recent studies on human-AI collaboration methods have explored ways to improve classification performance, such as accuracy and fairness, by capitalizing on the complementary strengths of humans and AI [35, 36, 37]. We focus on the setting without human-AI interaction, where decisions are made by either a human or an algorithm. Previous research has also addressed the task of routing instances to either a human or an algorithm [38, 39, 40, 41, 42]. [38, 39, 40] examine the optimization of overall classification performance, [41] investigates human-AI collaboration in a regression context, and [42] explores the simultaneous enhancement of human accuracy and fairness. 
The primary distinction between these studies and ours is that they explore contexts where the AI's learning task is a conventional supervised classification task, while we focus on policy learning. [4, 11] study how to design a deferral collaboration system similar to ours under the unconfoundedness assumption; based on the observational data, such a system assigns the optimal decision-maker (the human or the AI) to each decision subject to help make better decisions. However, humans often leverage unrecorded information (unobserved confounders) to aid their decision-making [10], which makes estimating the propensity scores in [4, 11] difficult. ## 3 Confounding-Robust Deferral Collaboration with Bandit Feedback **Deferral Collaboration with Observational Data [4].** The observational data consist of the tuples \(\{X_{i},T_{i},Y_{i}\}_{i=1}^{N}\), where \(X_{i}\in\mathcal{X}\) are the covariates, \(T_{i}\in\{0,\cdots,m-1\}\) is the treatment arm, and \(Y_{i}\in\mathbb{R}\) is a scalar outcome. The data is generated by the human decision-maker with a behavior policy \(\pi_{0}\) as \(T_{i}\sim\text{Categorical}(\pi_{0}(T_{i}|X_{i}))\). The behavior policy can be estimated from data. Using the potential outcome framework, we assume \(Y_{i}=Y_{i}(T_{i})\), _i.e._, the potential outcome under the observed treatment equals the observed outcome, known as the SUTVA assumption [43]. We consider \(Y\) as the risk and aim to minimize the risk aggregated over the population. Deferral collaboration concerns how to evaluate and learn an algorithmic policy \(\pi:\mathcal{X}\rightarrow\Delta^{m}\), where an element of the simplex \(\Delta^{m}\) is a probability distribution over the treatment arms, along with a routing algorithm \(\phi:\mathcal{X}\rightarrow[0,1]\) that can _complement_ human decision-makers. \(\phi(X)\) denotes the probability of routing to humans. 
A successful deferral collaboration routes different instances to the entity that is likely to yield the best reward by \(\phi(X)\), and it leverages the policy \(\pi(x)\) for the instances routed to the AI. The human decision-maker may incur a cost of \(C(X)\) for producing a decision on an instance with features \(X\). The system is trained by optimizing the empirical estimate of the value function by inverse propensity weighting [1], using an estimated behavior policy \(\hat{\pi}_{0}\), \[\min_{\phi\in\Phi,\pi\in\Pi}\sum_{i=1}^{N}\phi(X_{i})(Y_{i}+C(X_{i}))+\frac{( 1-\phi(X_{i}))\pi(T_{i}|X_{i})}{\hat{\pi}_{0}(T_{i}|X_{i})}Y_{i}. \tag{1}\] At testing time, a new decision instance will be routed to either the human or the AI. Equation (1) considers a single expert, but in practice there are often multiple human decision-makers \(H\in\{1,\cdots,K\}\). Accordingly, the data is generated by first assigning an instance with covariates \(X_{i}\) to different human decision-makers by the assignment rule \(d_{0}(H_{i}|X_{i}):\mathcal{X}\rightarrow\Delta^{K}\). Each human decision-maker \(H_{i}\) chooses the treatment according to the behavior policy \(\pi_{0}(T_{i}|X_{i},H_{i})\). In this paper, we assume the system can route to a specific human decision-maker (personalization). The process is illustrated in Figure 1. The system is trained by optimizing \[\min_{\phi\in\Phi,\pi\in\Pi}\sum_{i=1}^{N}\frac{\phi(H_{i}|X_{i})}{d_{0}(H_{i} |X_{i})}(Y_{i}+C(X_{i}))+\frac{\phi(A|X_{i})\pi(T_{i}|X_{i})}{\hat{\pi}_{0}(T_{ i}|X_{i},H_{i})}Y_{i}. \tag{2}\] The routing algorithm \(\phi\) is generalized to a mapping \(\phi:\mathcal{X}\rightarrow\Delta^{K+1}\), where \(\phi(A|X),\phi(H|X)\) denote the probabilities of routing instances to the algorithm and to a specific human expert \(H\), respectively. The goal is to learn an optimal routing algorithm \(\phi\) and policy \(\pi\) that minimize the risk. 
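To make Equation (1) concrete, here is a minimal sketch of its empirical objective evaluated for a fixed router \(\phi\) and policy \(\pi\) on toy arrays. The function and argument names are ours; in practice \(\phi\) and \(\pi\) would be parameterized models optimized over this quantity.

```python
import numpy as np

def deferral_objective(Y, phi, pi_obs, pi0_hat_obs, cost):
    """Empirical objective of Eq. (1). Per instance i:
       phi[i]          probability of routing instance i to the human,
       pi_obs[i]       pi(T_i | X_i) under the candidate algorithmic policy,
       pi0_hat_obs[i]  estimated behavior propensity of the observed arm,
       cost[i]         cost C(X_i) of querying the human."""
    Y = np.asarray(Y, float)
    phi = np.asarray(phi, float)
    human_term = phi * (Y + np.asarray(cost, float))
    ai_term = (1.0 - phi) * np.asarray(pi_obs, float) / np.asarray(pi0_hat_obs, float) * Y
    # summed as in Eq. (1); a 1/N factor would not change the minimizer
    return np.sum(human_term + ai_term)
```

Instances with \(\phi(X_i)=1\) contribute only their human cost-adjusted outcome, while fully deferred-to-AI instances contribute the inverse-propensity-weighted outcome under the candidate policy.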
**Marginal Sensitivity Model.** We assume there is an unobserved feature \(U_{i}\), which can be regarded as the unobserved potential outcome itself, _i.e._, \(U_{i}=Y_{i}(t)\) [27]. The unconfoundedness assumption would hold if we accounted for both \(U_{i}\) and \(X_{i}\). The nominal propensity is denoted as \(\tilde{\pi}_{0}(t|x)=P(T=t|X=x)\), which can be estimated from the observational data using a classifier such as logistic regression. Due to the unobserved confounding, the true propensity (which cannot be estimated from data) \(\pi_{0}(t|x,y)=P(T=t|X=x,Y(t)=y)\) is generally not equal to \(\tilde{\pi}_{0}(t|x)\). To quantify the difference between the nominal and true propensity scores incurred by confounding, we consider an uncertainty set based on the odds-ratio bounds of the MSM [26], \[\Gamma^{-1}\leq\frac{(1-\tilde{\pi}_{0}(T|X))\,\pi_{0}(T|X,Y)}{\tilde{\pi}_{0}(T|X)\,(1-\pi_{0}(T|X,Y))}\leq\Gamma, \tag{3}\] where the scalar parameter \(\Gamma\geq 1\). When \(\Gamma=1\), it corresponds to the unconfounded setup in Equation (1) and Equation (2). ### Problem Statement We first consider the situation of homogeneous human experts. We transform the expected team performance in Equation (1) into the self-normalized Hajek estimator [2] by \[\mathbb{E}\frac{\pi(T|X)}{\pi_{0}(T|X,Y)}Y(1-\phi(X))+\phi(X) (Y+C(X))\] \[= \mathbb{E}\phi(X)(Y+C(X))+\sum_{t=0}^{m-1}\frac{\mathbb{E}\frac{ \mathbb{I}(T=t)}{\pi_{0}(T|X,Y)}\pi(T|X)Y(1-\phi(X))}{\mathbb{E}\frac{\mathbb{I}(T=t)}{\pi_{0}( T|X,Y)}}.\] Throughout the paper, without further specification, the expectation is with respect to the underlying data distribution. The equality holds because \(\mathbb{E}\frac{\mathbb{I}(T=t)}{\pi_{0}(T|X,Y)}=1\) for every \(t\). Suppose in addition there is a baseline policy \(\pi_{c}(T|X)\), such as the never-treat policy \(\pi_{c}(0|x)=1\) [9], that we aim to improve upon. 
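Concretely, the MSM bound in Equation (3) maps each nominal propensity to an interval for the true inverse-propensity weight \(W=1/\pi_{0}(T|X,Y)\); the following is a minimal sketch (illustrative code, not from the paper):

```python
def msm_weight_interval(nominal_propensity, gamma):
    """Interval for the true inverse-propensity weight W implied by the MSM
    with parameter gamma >= 1: letting w = 1/nominal_propensity be the
    nominal weight, W lies in [1 + (w - 1)/gamma, 1 + gamma*(w - 1)]."""
    w = 1.0 / nominal_propensity
    return 1.0 + (w - 1.0) / gamma, 1.0 + gamma * (w - 1.0)
```

At \(\Gamma=1\) the interval collapses to the nominal weight itself, recovering the unconfounded objective; larger \(\Gamma\) widens the per-instance box over which the worst-case objective is later maximized.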
The objective can be written as the improvement over \(\pi_{c}(T|X)\), \[R(\pi,\phi,\pi_{c})=\mathbb{E}\phi(X)(Y+C(X))+\sum_{t=0}^{m-1}\frac{\mathbb{E }\frac{\mathbb{I}(T=t)}{\pi_{0}(T|X,Y)}Y[(1-\phi(X))\pi(T|X)-\pi_{c}(t|X)]}{\mathbb{E} \frac{\mathbb{I}(T=t)}{\pi_{0}(T|X,Y)}}. \tag{4}\] Let \(\tilde{W}_{i}=\frac{1}{\tilde{\pi}_{0}(T_{i}|X_{i})}\) and \(W_{i}=\frac{1}{\pi_{0}(T_{i}|X_{i},Y_{i})}\). By the MSM, a key observation is that the true weights \(W_{i}\) are in the uncertainty set \(\mathcal{W}_{n}^{\Gamma}=\{W:1+\Gamma^{-1}(\tilde{W}_{i}-1)\leq W_{i}\leq 1+ \Gamma(\tilde{W}_{i}-1),\forall i=1,\cdots,n\}\). Hence, the worst-case empirical estimator \(\hat{R}_{n}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma})\) equals \[\begin{split}\max_{W}\quad&\frac{1}{n}\sum_{i=1}^{n}\phi(X_{i})(Y_{i}+C (X_{i}))+\sum_{t=0}^{m-1}\frac{\frac{1}{n}\sum_{i}\mathbb{I}(T_{i}=t)[(1-\phi( X_{i}))\pi(T_{i}|X_{i})-\pi_{c}(T_{i}|X_{i})]W_{i}Y_{i}}{\frac{1}{n}\sum_{i} \mathbb{I}(T_{i}=t)W_{i}}\\ s.t.\quad&1+\Gamma^{-1}(\tilde{W}_{i}-1)\leq W_{i}\leq 1+\Gamma(\tilde{W}_{i}-1),\quad\forall i=1,\cdots,n,\end{split} \tag{5}\] where \(W=(W_{1},\cdots,W_{n})\). The algorithm then chooses the policy and router that minimize the robust regret bound, where \[\bar{\pi}(\Pi,\Phi,\pi_{c},\mathcal{W}_{n}^{\Gamma}),\bar{\phi}(\Pi,\Phi,\pi_ {c},\mathcal{W}_{n}^{\Gamma})=\underset{\pi\in\Pi,\phi\in\Phi}{\arg\min}\hat{R }_{n}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma}). \tag{6}\] (Figure 1: Decision-Making Process for Human-AI Collaboration with Unobserved Confounders.) If we are interested in the policy improvement over the human's policy, we can optimize the future decision and routing policy by minimizing \(\hat{R}_{n}^{H}(\pi,\phi,\mathcal{W}_{n}^{\Gamma})\) as 
\[\max_{W}\frac{1}{n}\sum_{i=1}^{n}(\phi(X_{i})-1)(Y_{i}+C(X_{i}))+\sum_{t=0}^{m-1}\frac{\frac{1}{n}\sum_{i}\mathbb{I}(T_{i}=t)[(1-\phi(X_{i}))\pi(T_{i}|X_{i})]W_{i}Y_{i}}{\frac{1}{n}\sum_{i}\mathbb{I}(T_{i}=t)W_{i}}, \tag{7}\] obtained by removing the baseline policy \(\pi_{c}\) and comparing the future system's performance against the human's decision policy.

### Personalization

In the collaborative objective above, we assume the experts have similar performance. However, this is typically not the case in real-world scenarios. Experts often possess different areas of expertise and may have access to different amounts of confounding information. For example, one general physician might excel at treating older patients with multiple health conditions by observing their bedside conditions, whereas another might rely only on recorded information. Therefore, a personalized routing model could potentially enhance the performance of the human-machine team. Rather than indiscriminately assigning an expert to evaluate a given instance, the routing algorithm can decide either to delegate the instance to the algorithm or to a human and, more importantly, determine the most suitable human decision-maker for the task at hand, accounting for the varying degree of confounding available to each human. We assume the propensity-score odds ratio of each human decision-maker \(H\in\{0,\cdots,K-1\}\) is bounded by its own parameter \(\Gamma_{H}\).
Similarly, the confounding-robust objective for policy improvement with personalization, \(R^{P}(\pi,\phi,\pi_{c})\) (compare the unconfounded version in Equation (2)), is \[\mathbb{E}\frac{\phi(a|X)\pi(T|X)-\pi_{c}(T|X)}{\pi_{0}(T|X,Y,H)}Y+\frac{\phi(H|X)}{d_{0}(H|X)}(Y+C(X))\] \[=\mathbb{E}\sum_{t=0}^{m-1}\frac{\mathbb{I}(T=t)}{\pi_{0}(T|X,Y,H)}[\phi(a|X)\pi(T|X)-\pi_{c}(T|X)]Y+\mathbb{E}\frac{\phi(H|X)}{d_{0}(H|X)}(Y+C(X))\] \[=\sum_{t=0}^{m-1}\frac{\mathbb{E}\frac{\mathbb{I}(T=t)}{\pi_{0}(T|X,Y,H)}[\phi(a|X)\pi(T|X)-\pi_{c}(T|X)]Y}{\mathbb{E}\frac{\mathbb{I}(T=t)}{\pi_{0}(T|X,Y,H)}}+\mathbb{E}\frac{\phi(H|X)}{d_{0}(H|X)}(Y+C(X)). \tag{8}\] Let \(\tilde{W}_{i}=\frac{1}{\tilde{\pi}_{0}(T_{i}|X_{i},H_{i})}\) and \(W_{i}=\frac{1}{\pi_{0}(T_{i}|X_{i},Y_{i},H_{i})}\); then the worst-case estimator \(\hat{R}_{n}^{P}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma_{H}})\) is \[\begin{split}\max_{W}\quad&\sum_{t=0}^{m-1}\frac{\sum_{i}\mathbb{I}(T_{i}=t)W_{i}[\phi(a|X_{i})\pi(T_{i}|X_{i})-\pi_{c}(T_{i}|X_{i})]Y_{i}}{\sum_{i}\mathbb{I}(T_{i}=t)W_{i}}+\frac{1}{n}\sum_{i=1}^{n}\frac{\phi(H_{i}|X_{i})}{d_{0}(H_{i}|X_{i})}(Y_{i}+C(X_{i}))\\ s.t.\quad&1+\Gamma_{H_{i}}^{-1}(\tilde{W}_{i}-1)\leq W_{i}\leq 1+\Gamma_{H_{i}}(\tilde{W}_{i}-1).\end{split} \tag{9}\] The policy and router can be found by optimizing \[\bar{\pi}(\Pi,\Phi,\pi_{c},\mathcal{W}_{n}^{\Gamma_{H}}),\bar{\phi}(\Pi,\Phi,\pi_{c},\mathcal{W}_{n}^{\Gamma_{H}})=\operatorname*{arg\,min}_{\pi\in\Pi,\phi\in\Phi}\hat{R}_{n}^{P}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma_{H}}). \tag{10}\] Compared to Equation (6), Equation (10) further considers how to leverage individual human expertise to minimize the human-AI team's risk. It is easy to see that when the historical and future human assignments are fully randomized and each human decision maker has the same \(\Gamma\), Equation (10) recovers Equation (6).
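The final reduction claim can be checked numerically: with a shared \(\Gamma\) across experts, the personalized per-sample weight bounds coincide with the homogeneous bounds defining \(\mathcal{W}_{n}^{\Gamma}\). A small sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def bounds_homogeneous(nominal, gamma):
    """MSM interval [1 + (W~-1)/gamma, 1 + gamma*(W~-1)] for the true
    inverse propensity, with W~ the nominal inverse propensity."""
    w = 1.0 / np.asarray(nominal)
    return 1.0 + (w - 1.0) / gamma, 1.0 + gamma * (w - 1.0)

def bounds_personalized(nominal, expert_ids, gamma_per_expert):
    """Same interval, but each sample uses the Gamma_H of the expert
    who handled it."""
    w = 1.0 / np.asarray(nominal)
    g = np.asarray(gamma_per_expert, dtype=float)[np.asarray(expert_ids)]
    return 1.0 + (w - 1.0) / g, 1.0 + g * (w - 1.0)

nominal = np.array([0.5, 0.25, 0.8, 0.4])
experts = np.array([0, 1, 2, 0])  # which expert handled each historical case
lo_p, hi_p = bounds_personalized(nominal, experts, [2.0, 2.0, 2.0])
lo_h, hi_h = bounds_homogeneous(nominal, 2.0)
```

With distinct \(\Gamma_{H}\) values the intervals widen only for the more confounded experts, which is what lets the router treat them differently.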
## 4 Improvement Guarantee

In this section, we demonstrate that the worst-case empirical regret is an asymptotic upper bound for the true population regret. For the theoretical analysis, we assume the outcome and the true propensity score are bounded, _i.e._, \(|Y|\leq B,\pi_{0}(t|x,y)\geq\nu,\forall t\in\{0,\cdots,m-1\},x\in\mathcal{X},y\in\mathcal{Y}\). The following theorem guarantees improvement in the population regret when solving the minimax optimization for the empirical regret.

**Theorem 1**.: _Suppose the true inverse propensities \(1/\pi_{0}(T_{i}|X_{i},Y_{i})\in\mathcal{W}_{n}^{\Gamma},i=1,\cdots,n\), \(|Y|\leq B,\pi_{0}(t|x,y)\geq\nu,\forall t\in\{0,\cdots,m-1\},x\in\mathcal{X},y\in\mathcal{Y}\), and denote the Rademacher complexities of the policy class \(\Pi\) and router class \(\Phi\) by \(\mathfrak{R}_{n}(\Pi)\) and \(\mathfrak{R}_{n}(\Phi)\). Then for \(\delta>0\), with probability \(1-\delta\), we have_ \[R(\pi,\phi,\pi_{c})\leq\hat{R}_{n}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma})+(3B+3\bar{c}+\frac{5B+1}{\nu^{2}})\sqrt{\frac{2\log\frac{8m}{\delta}}{n}}+2\frac{B}{\nu}\mathfrak{R}_{n}(\Pi)+2(B+\bar{c})\mathfrak{R}_{n}(\Phi)\]

The proof is included in Appendix A and extends readily to the improvement guarantee over the human policy and to the personalized version of the algorithm. Note that the global optimum of the empirical objective is never positive when \(\pi_{c}\in\Pi\), since we can take \(\pi=\pi_{c}\) and \(\phi(X)=0\). If we only consider \(\Pi,\Phi\) with vanishing Rademacher complexity (_i.e._, \(O(n^{-1/2})\)), then Theorem 1 implies that, given enough samples, a negative empirical objective yields improvement over \(\pi_{c}\) under well-specification.

**Data-Driven Calibration of \(\Gamma\).** Since the theorem assumes well-specification, the practitioner needs to specify a plausible strength of the unmeasured confounding via the parameter \(\Gamma\).
Though the exact value of \(\Gamma\) involves the true propensity score, which is not identifiable from the data, we can obtain a reference point by quantifying the impact of the observed covariates on the propensity score. The parameter \(\Gamma\) defined in Equation (3) measures the degree of influence of the confounder \(U=Y(t)\) on the propensity odds. Similarly, following [44, 45, 23], we measure the impact of an observed covariate \(Z\) given the other observed covariates \(X\backslash Z\). Reference points for \(\Gamma\) are computed from the odds ratio \(\frac{(1-\tilde{\pi}_{0}(T|X\backslash Z))\tilde{\pi}_{0}(T|X)}{\tilde{\pi}_{0}(T|X\backslash Z)(1-\tilde{\pi}_{0}(T|X))}\). The covariate \(Z\) can also be a group of correlated observed variables that better mimics the nature of unmeasured confounding variables [23]. We refer to [45] for a detailed discussion of potential conservatism in this calibration procedure.

## 5 Optimizing the Confounding-Robust Deferral Collaboration System

We next discuss how to optimize the deferral collaboration system in Equation (6) and Equation (10). First, we need to solve the inner maximization in Equation (5) and Equation (9).
To simplify notation, we consider the following optimization problem \[\hat{Q}_{t}(r,\mathcal{W})=\max_{W\in\mathcal{W}}\frac{\sum_{i=1}^{n}r_{i}W(T_{i},X_{i},Y_{i})}{\sum_{i=1}^{n}W(T_{i},X_{i},Y_{i})}\quad s.t.\ \ a_{i}^{\Gamma_{i}}\leq W(T_{i},X_{i},Y_{i})\leq b_{i}^{\Gamma_{i}}. \tag{11}\] When \(r_{i}=\mathbb{I}(T_{i}=t)[(1-\phi(X_{i}))\pi(T_{i}|X_{i})-\pi_{c}(T_{i}|X_{i})]Y_{i}\), \(a_{i}^{\Gamma_{i}}=1+\Gamma^{-1}(\tilde{W}_{i}-1)\), \(b_{i}^{\Gamma_{i}}=1+\Gamma(\tilde{W}_{i}-1)\), solving Equation (11) is equivalent to optimizing \(W\) for the empirical \(\hat{R}_{n}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma})\) in Equation (5); and when \(r_{i}=\mathbb{I}(T_{i}=t)[\phi(a|X_{i})\pi(T_{i}|X_{i})-\pi_{c}(T_{i}|X_{i})]Y_{i}\), \(a_{i}^{\Gamma_{i}}=1+\Gamma_{H_{i}}^{-1}(\tilde{W}_{i}-1)\), \(b_{i}^{\Gamma_{i}}=1+\Gamma_{H_{i}}(\tilde{W}_{i}-1)\), solving Equation (11) is equivalent to optimizing \(W\) for \(\hat{R}_{n}^{P}(\pi,\phi,\pi_{c},\mathcal{W}_{n}^{\Gamma_{H}})\) in Equation (9). The optimization problem in Equation (11) is known as a _linear fractional program_ [46, 9, 30, 28]. Taking the derivative of the objective in Equation (11) with respect to \(W_{i}=W(T_{i},X_{i},Y_{i})\), the objective is monotonically increasing (decreasing) in \(W_{i}\) if \(r_{i}\sum_{j\neq i}W_{j}-\sum_{j\neq i}r_{j}W_{j}\) is greater (less) than zero. Hence the optimum is attained with every \(W_{i}\) at a boundary value. Furthermore, the objective can be viewed as a weighted combination of the \(r_{i}\) with weights summing to one, so it is maximized when the weights \(W_{i}/\sum_{i}W_{i}\) are large for large \(r_{i}\) and small for small \(r_{i}\). Based on these insights, the optimal weights \(\{W_{i}\}\) of the linear fractional program can be characterized by the following theorem.

**Theorem 2**.: _Let \((i)\) be the ordering such that \(r_{(1)}\leq r_{(2)}\leq\cdots\leq r_{(n)}\).
\(\hat{Q}_{t}(r,\mathcal{W})=\lambda(k^{*})\), where \(k^{*}=\inf\{k=1,\cdots,n+1:\lambda(k)<\lambda(k-1)\}\) and_ \[\lambda(k)=\frac{\sum_{i<k}a_{(i)}^{\Gamma}r_{(i)}+\sum_{i\geq k}b_{(i)}^{\Gamma}r_{(i)}}{\sum_{i<k}a_{(i)}^{\Gamma}+\sum_{i\geq k}b_{(i)}^{\Gamma}} \tag{12}\]

See Appendix A for the proof. Theorem 2 provides an efficient way to solve Equation (11) by line search: first sort the \(r_{i}\) in ascending order and initialize all \(W_{i}=a_{i}^{\Gamma}\), then change \(W_{k}\) to \(b_{k}^{\Gamma}\) for \(k=n,n-1,\cdots,1\) until the first time \(\lambda(k)\) decreases. After solving the inner maximization problem, we can proceed to the outer minimization in Equation (6) and Equation (10). In this paper, we consider differentiable policy classes \(\Pi=\{\pi_{\theta}:\theta\in\Theta\}\) and router classes \(\Phi=\{\phi_{\rho}:\rho\in\mathrm{P}\}\), such as logistic policies \(\pi_{\{\alpha,\beta\}}(x)=\sigma(\alpha+\beta^{T}x)\) or neural networks, so the optimization problem can be efficiently solved by gradient descent. In every iteration, our algorithm first finds the worst-case weights \(W\) for the current model parameters through line search, then uses gradient descent to update the policy and router jointly. We refer to our main algorithm, which assumes decision makers can only be queried at random, as ConfHAI, and to its variant accounting for the diverse expertise of individual human decision makers as ConfHAIPerson. The procedure is summarized in Algorithm 1.

```
Input: number of iterations N
Initialize: pi_theta, phi_rho
for i = 1 to N do
    W <- argmax over W in W_n of Equation (5)   (Equation (9) for ConfHAIPerson)
    theta, rho <- gradient step on R_n(pi, phi, pi_c, W_n^Gamma)
                  (R_n^P(pi, phi, pi_c, W_n^{Gamma_H}) for ConfHAIPerson)
end for
```
**Algorithm 1** Confounding-Robust Deferral Collaboration (ConfHAI/ConfHAIPerson)

## 6 Experiments

In this section, we report empirical findings from two experiments conducted to examine the advantages of being robust to unobserved confounding. Our initial experiment uses a synthetic example from [47] to demonstrate the benefit of human-AI collaboration in a controlled environment. Our second experiment builds two real-world examples, in financial lending and healthcare, where human-AI collaboration can be implemented. We examine the following decision-making configurations; for all baselines that do not involve personalization, human experts are selected at random. Human Only (Human) queries human decision-makers at random for the final decisions. Algorithm Only (AO) uses the inverse propensity score weighting method [18] to train a policy. Confounding-Robust Algorithm Only (ConfAO) trains a confounding-robust policy [47] to determine the final decisions. Human-AI team (HAI) uses the deferral collaboration method proposed in [4] to train a router and policy jointly assuming unconfoundedness. Our method and its personalized variant are denoted ConfHAI and ConfHAIPerson, respectively. See Appendix B for a more detailed discussion of the baselines. We use logistic policies [47] for the policy and router model classes. The baseline policy is the never-treat policy \(\pi_{c}(0|x)=1\) following [47, 9].

### Synthetic Experiment

We use the synthetic example described in [9] to demonstrate the benefit of human-AI team performance under potential confounding. \[\xi\sim\text{Bernoulli}(0.5),X\sim\mathcal{N}((2\xi-1)\mu_{x},I_{5}),U=\mathbb{I}[Y_{i}(1)<Y_{i}(-1)]\] \[Y(t)=\beta_{0}^{T}x+\mathbb{I}[t=1]\beta_{\text{treat}}^{T}x+0.
5\alpha\xi\mathbb{I}[t=1]+\eta+w\xi+\epsilon\] where \(\beta_{\text{treat}}=[1.5,1,1.5,1,0.5]\), \(\mu_{x}=[1.5,1,0,1]\), \(\eta=2.5,\alpha=-2,w=1.5\) and \(\epsilon\sim\mathcal{N}(0,1)\). The nominal propensity is \(\tilde{\pi}_{0}(T=1|X)=\sigma(\beta^{T}X)\) with \(\beta=[0,.75,.5,0,1,0]\); \(T_{i}\) is then generated according to the true propensities \(\pi_{0}(T=1|X,U)=\frac{(\Gamma U+1-U)\tilde{\pi}_{0}(T=1|X)}{[1+2(\Gamma-1)\tilde{\pi}_{0}(T=1|X)-\Gamma]U+\Gamma+(1-\Gamma)\tilde{\pi}_{0}(T=1|X)}\), where \(\Gamma\) is the underlying level of confounding. In this setting, the human decision maker acquires unobserved information to improve their decisions. Similar to [47, 9], we set \(\log(\Gamma)=2.5\) and \(C(x)=0\), and we vary the specified confounding parameter \(\log(\Gamma)\) in \(\{0.01,0.5,1,1.5,2,2.5,3,3.5,4\}\). To also test the personalized variant, we simulate three human decision makers with the same \(\Gamma\) in this setting. The results are shown in Figure 2a. The baselines that do not consider unobserved confounding perform similarly to our methods and ConfAO when the specified \(\Gamma\) is small, i.e., when little confounding is assumed. As the specified \(\Gamma\) approaches the underlying confounding level, we observe a significant policy improvement over the baseline policy (regret smaller than 0). The personalized method achieves performance similar to ConfHAI, since all human decision makers have the same performance here. Interestingly, in this example the ConfAO policy is actually worse than the humans' performance and almost never exceeds it for any \(\Gamma\), while human-AI complementarity is still achieved by our proposed methods under a correctly specified and large \(\Gamma\), which emphasizes the benefit of our proposed confounding-robust deferral system.
Next, we simulate three human workers with \(\log(\Gamma)=1,2.5,4\) respectively, corresponding to the setting where different humans acquire different unobserved information to aid their decision making and some expert workers demonstrate better performance than their peers. We examine four \(\Gamma\) specifications with heterogeneous workers, \([1,1,1],[2.5,2.5,2.5],[1,2.5,4],[4,4,4]\), and show the results in Figure 2b. With small \(\Gamma\) (weak assumed confounding), we again observe that all methods perform suboptimally, with no policy improvement. The personalized variant improves further over ConfHAI by leveraging the diverse expertise of the human decision makers. With a correctly specified and relatively large \(\Gamma\), ConfHAI and ConfHAIPerson significantly outperform the other baselines and demonstrate human-AI complementarity, outperforming both human-only and algorithm-only teams.

### Real-World Examples

We provide two real-world examples in this section to help readers better understand our methods. We set \(C(x)=0.1\) for both examples. See Appendix C for more details about the datasets and training, omitted due to space constraints.

**Financial Lending.** Consider a financial lending setup, where loan officers can obtain additional information by visiting the loan applicants, improving their decision-making. However, because of the unexpected situations the loan officers may encounter, such information was not recorded in the historical data. We use the Home Equity Line of Credit (HELOC)1 dataset, which contains anonymized information about credit applications by real homeowners. Some of the features used include average months since the account opened, maximum delinquency, and number of inquiries in the last 6 months.
We assume there are three human decision makers with \(\log(\Gamma)=[0.1,0.1,1]\), meaning two of them rarely seek external information to improve their decisions, while the third is more likely to obtain an external risk estimate when evaluating applications. We train a logistic regression on 10% of the data to simulate the nominal policy, which can be thought of as a guideline policy of the lending company; the actual treatments are generated using the same procedure as in Section 6.1, and the fitted nominal propensity is estimated by logistic regression on the actual treatments. The outcome of the dataset is binary, indicating whether the applicant was ever 90 days past due within 24 months of the account being opened. We build a risk function where the loan company receives risk \(Y\sim\mathcal{N}(0,1)\) for rejecting a loan, \(Y\sim\mathcal{N}(-2,1)\) for approving an applicant with good credit, and \(Y\sim\mathcal{N}(2,1)\) for approving an applicant with bad credit.

**Acute Stroke Treatment.** Consider another healthcare example built in [47], where doctors treat patients with acute stroke. Here, experienced doctors may observe bedside information and past patient behavior to aid their decision-making, which is not recorded in the historical records. We use data from the International Stroke Trial [48] and focus on two treatment arms: aspirin plus heparin (medium and high doses), and aspirin only. Since the trial only records the outcome under the action actually taken, we create potential outcomes by fitting a separate random forest model for each treatment, as in [8, 49]. The outcome is a composite score including variables such as death, recurrent stroke, pulmonary embolism, and recovery. Some of the features used by the algorithm include age, sex, deficit symptoms, stroke type, and cerebellar signs. Similarly, we assume there are three human physicians with \(\log(\Gamma)=[0.1,0.1,1]\) prescribing treatments.

Figure 2: Policy Regret for Different Methods. ConfHAI and ConfHAIPerson offer consistent and significantly better policy improvement with correct and large \(\Gamma\) compared to algorithm-only baselines. When \(\Gamma\) is small, they perform similarly to baseline methods assuming unconfoundedness.

The results are shown in Figure 3(a) and Figure 3(b) respectively. For each experiment, we try three \(\log(\Gamma)\) specifications, \([0.1,0.1,0.1]\), \([0.1,0.1,1]\) and \([1,1,1]\), corresponding to under-, correct, and over-specification. In HELOC, the baselines that do not consider unobserved confounding can still achieve policy improvement, but not consistently across settings, _e.g._, in IST. We observe that ConfHAI and ConfHAIPerson achieve the best performance with correctly specified \(\Gamma\), and the personalized variant achieves significantly better performance than the other methods. Under over-specification, the performance of the confounding-robust methods decreases but still reliably provides policy improvement. Similarly, ConfAO can provide policy improvement under different specifications of \(\Gamma\); however, its performance is often much worse than that of the human-AI methods we propose.

### Real Human Responses

In addition, we use real human responses to validate our approach. We use the scientific annotation dataset FOCUS [50, 4] with responses from five human annotators. We assume that if a human annotator considers a sentence scientific, they apply action I (e.g., retweet the paper); if the sentence is indeed scientific, the risk is drawn from \(\mathcal{N}(-1,1)\), otherwise from \(\mathcal{N}(1,1)\). If the annotator considers the sentence non-scientific, they apply action II (e.g., ignore the paper); if the sentence is indeed non-scientific, the risk is drawn from \(\mathcal{N}(-1,1)\), otherwise from \(\mathcal{N}(1,1)\).
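The risk assignment just described can be sketched as a toy simulation (variable names and the correct/incorrect framing are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def annotation_risk(says_scientific, truly_scientific, rng):
    """Risk draw for one annotation: N(-1, 1) when the annotator's
    judgment matches the ground truth, N(1, 1) otherwise."""
    correct = says_scientific == truly_scientific
    return rng.normal(-1.0 if correct else 1.0, 1.0)

# Correct judgments should incur lower average risk than incorrect ones.
correct_risks = [annotation_risk(True, True, rng) for _ in range(2000)]
wrong_risks = [annotation_risk(True, False, rng) for _ in range(2000)]
```

This mirrors the symmetric reward structure above: the two actions are interchangeable, and only agreement with the true label determines the risk distribution.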
Figure 3: Policy Regret with Real-World Examples. In HELOC and IST, we find that ConfHAI and ConfHAIPerson can offer policy improvement (negative regret) under different \(\Gamma\) specifications, with the largest benefit under the correct specification. The proposed methods achieve significantly better performance than ConfAO and are more robust to unobserved confounding than baselines assuming unconfoundedness.

The confounding is created by removing the samples with the top 20% of outcomes in the treated group and the bottom 20% of outcomes in the control group, following [47]. We specify the same \(\Gamma\) for every human decision maker and vary it. The results are shown in Figure 4. This dataset differs from our simulations in that humans' true propensities may not reflect the worst case posited by the MSM optimization. Nevertheless, we still observe that our methods consistently offer the best performance across a wide range of \(\Gamma\).

### Ablation Studies

We also examine the effect of the human cost on the risk of each method. We use the synthetic data setup and vary the human cost from 0 to 0.3. As the human cost increases, the Human baseline's performance worsens. The human-AI systems' performance (HAI, ConfHAI, ConfHAIPerson) is also affected by a higher human cost, but the proposed methods consistently outperform the other baselines.

## 7 Conclusion and Future Work

In this paper, we propose a novel approach to the confounding-robust policy improvement problem within Human-AI team settings. The algorithms we develop optimize policy decisions by selectively deferring decision instances to either humans or the AI, based on the context and the capabilities of both. Our results indicate that the proposed models exhibit promise in achieving robust policy improvements, providing a foundation for future work in this area. Despite the encouraging results, this work is not without its limitations.
The most substantial limitation is the absence of real human studies; while our algorithm has been designed for collaboration with human decision-makers, the lack of such studies leaves a significant gap. Future work should therefore incorporate user studies to validate and refine the model based on actual human behavior and decision-making processes.

Figure 4: Results with Real Human Responses. We still observe that our methods consistently offer the best performance across a wide range of \(\Gamma\).

Another limitation lies in the relatively strict constraints of the marginal sensitivity model. These constraints may limit the generalizability and applicability of our approach in certain complex decision-making environments. As such, further research is necessary to explore and develop methods for relaxing these constraints without sacrificing the robustness of the policy improvement. In conclusion, we believe that our work presents an important step forward in the field of Human-AI teams and confounding-robust policy improvement. It opens up new possibilities for future research and practical applications, particularly in terms of how humans and AI can work together more effectively to improve policy decisions. We anticipate that subsequent research will build upon our work, conducting human studies and refining the approach to overcome the identified limitations. We are optimistic about the potential of this line of inquiry to drive significant advancements in policy-making processes in the years to come.
2301.01827
A GOA-Based Fault-Tolerant Trajectory Tracking Control for an Underwater Vehicle of Multi-Thruster System without Actuator Saturation
This paper proposes an intelligent fault-tolerant control (FTC) strategy to tackle the trajectory tracking problem of an underwater vehicle (UV) under thruster damage (power loss) cases and meanwhile resolve the actuator saturation brought by the vehicle's physical constraints. In the proposed control strategy, the trajectory tracking component is formed by a refined backstepping algorithm that controls the velocity variation and a sliding mode control deducts the torque/force outputs; the fault-tolerant component is established based on a Grasshopper Optimization Algorithm (GOA), which provides fast convergence speed as well as satisfactory accuracy of deducting optimized reallocation of the thruster forces to compensate for the power loss in different fault cases. Simulations with or without environmental perturbations under different fault cases and comparisons to other traditional FTCs are presented, thus verifying the effectiveness and robustness of the proposed GOA-based fault-tolerant trajectory tracking design.
Danjie Zhu, Lei Wang, Hua Zhang, Simon X. Yang
2023-01-04T21:30:16Z
http://arxiv.org/abs/2301.01827v1
# A GOA-based Fault-tolerant Trajectory Tracking Control for an Underwater Vehicle of Multi-thruster System without Actuator Saturation

###### Abstract

This paper proposes an intelligent fault-tolerant control (FTC) strategy to tackle the trajectory tracking problem of an underwater vehicle (UV) under thruster damage (power loss) cases and meanwhile resolve the actuator saturation brought by the vehicle's physical constraints. In the proposed control strategy, the trajectory tracking component is formed by a refined backstepping algorithm that controls the velocity variation and a sliding mode control that derives the torque/force outputs; the fault-tolerant component is established based on a Grasshopper Optimization Algorithm (GOA), which provides fast convergence as well as satisfactory accuracy in deriving an optimized reallocation of the thruster forces to compensate for the power loss in different fault cases. Simulations with and without environmental perturbations under different fault cases, together with comparisons to other traditional FTCs, are presented, verifying the effectiveness and robustness of the proposed GOA-based fault-tolerant trajectory tracking design.

**Note to Practitioners:** This paper is motivated by the actuator saturation problem that arises in the trajectory tracking of an underwater vehicle (UV) when the thruster system suffers power loss. With traditional methods, the fault-tolerant trajectory tracking performance is limited by the vehicle's physical constraints, as these methods may issue excessive kinematic/dynamic commands during the control process and thus cause the tracking trajectory to deviate. Therefore, refined backstepping and grasshopper optimization (GOA) are combined to eliminate this excess: the refined backstepping alleviates speed jumps (kinematic outputs), and the GOA regulates the propulsion forces (dynamic outputs) when thruster fault cases occur.
This suggests to industrial practitioners that the control design of the vehicle can be improved by embedding optimization algorithms, avoiding the tracking deviation caused by infeasible driving commands under fault cases. Moreover, for the specific type of UV studied in this paper, which is used for dam detection, simulations reflecting practical dam detection, such as 3D polygonal-line trajectory tracking and the frequently occurring single-fault cases of the UV, are chosen; these can serve as references for practitioners working in the related field. In the future, underwater experiments on the UV will be investigated, with more effects of the practical environment involved.

actuator saturation, backstepping control, fault-tolerant control, grasshopper optimization, trajectory tracking, underwater vehicle.

## I Introduction

The study of vehicle control has been extended to various conditions in which fault tolerance is widely involved in the corresponding control design, denoted as fault-tolerant control (FTC) [1, 2]. Scientists have worked on FTC for decades in various fields, such as aircraft and spacecraft, land vehicles, and industrial manufacturing [3, 4, 5, 6, 7, 8, 9, 10]. In previous studies, FTC is usually applied to alleviate abrupt errors and provide the most feasible solution when unavoidable damage occurs to equipment in different fields [11]. However, FTC for underwater vehicles (UVs) has not been thoroughly investigated, due to the complexity brought by the underwater environment and the UV system [12, 13, 14, 15]. Corresponding studies on FTC have been proposed in this century [3, 4, 6, 16]. Based on these studies, the design of installing more thrusters than the number of degrees of freedom (DOF) has been proposed and accepted as a resolution to the UV FTC problem, known as thruster reconfiguration [17, 18].
For example, when unexpected fault cases of the vehicle thrusters occur, the thrusters installed on the vehicle in excess of the number of DOFs (six: surge, sway, heave, roll, pitch and yaw) offer enough flexibility to be regulated to provide the required propulsion along the corresponding DOFs. To implement the thruster reconfiguration theory in practical cases, the weighted pseudo-inverse matrix method has been proposed, where the fault cases are quantified as degrees of damage and serve as inputs to the thruster configuration model [19]. With this method, the FTC process is largely simplified, as the required thruster propulsion can be derived directly through a weighted pseudo-inverse matrix model. Nevertheless, the physical constraints of the thruster outputs are rarely considered, which induces the over-actuated vehicle issue [20, 21]. Additionally, most of these studies work on eliminating the static errors induced by fault cases, whereas in practical UV application, dynamic, real-time control of the vehicle's outputs, commonly referred to as trajectory tracking control for underwater vehicles, is of crucial importance [22, 23, 24]. Therefore, motivated by the over-actuated issue and the goal of robust underwater vehicle trajectory tracking, optimization methods are combined with the tracking control to keep the vehicle's dynamic outputs within allowable domains when fault cases are encountered during tracking. Among the current mainstream optimization algorithms, the genetic algorithm consumes a long time on iteration, which is not ideal for UV FTC, which requires both fast and feasible solutions [25, 26, 27]. Neural networks are demanding in their choice of data inputs, which the UV cannot provide when encountering fault cases; the greedy algorithm shares this concern, as it needs to decompose the data for processing [28, 29, 30].
Hence, swarm intelligence algorithms stand out as preferable for the FTC of UVs owing to their flexible data inputs and fast convergence [31, 32, 33]. Zhu's group has applied a Particle Swarm Optimization (PSO) based FTC to an unmanned underwater vehicle; although satisfactory torque outputs were achieved, the traditional PSO method shows poor real-time responsiveness, which does not meet the online requirement of UV FTCs [34, 35]. In this study, an advanced swarm intelligence method named the Grasshopper Optimization Algorithm (GOA) is chosen for its satisfactory balance between fast convergence and accuracy of the optimization results, as well as its simple implementation [36]. The fast convergence stems from its simple position-update iteration for each search agent, which dramatically promotes optimization efficiency and makes real-time feedback possible for the UV FTC [37, 38]. At the same time, the GOA can provide accurate fault-tolerant results within acceptable driving constraints through limitations embedded in the optimization algorithm, and is thus well suited to resolving the over-actuated issue of the UV FTC [39, 40]. The contribution of the control strategy proposed in this paper is to combine an advanced swarm intelligence algorithm (the GOA) with fault-tolerant trajectory tracking control to resolve fault cases of an advanced UV without actuator saturation. By identifying and quantifying the degree of damage (power loss) of the multi-thruster system and then efficiently reallocating the thruster forces through the GOA, a feasible solution with satisfactory accuracy is obtained with fast convergence.
The strategy establishes a systematic fault identification as well as an efficient error elimination process for the UV with abrupt damage, and it is the first to apply the GOA method to the FTC of specific underwater vehicles with a multi-thruster system. Conventional methods such as the constrained control allocation method developed by Durham are based on basic linear algebra concepts and a means of determining the bounding surface of the attainable moment space, yet the bounding surface is difficult to determine [41]. Commonly applied methods used for constructing UV FTC, such as T-approximation or S-approximation, cannot thoroughly resolve the problem of actuator saturation due to the inevitable errors introduced by the vehicle constraints [34]. Therefore, the fast convergence speed and ideal accuracy of the GOA method help to amend the commands given by the dynamic controller in time, which accomplishes real-time FTC of the UV trajectory tracking [36]. The rest of the paper is organized as follows. First, the kinematic and dynamic models of an advanced UV, "YuLong", are introduced and the torque-force transition/normalization is defined based on the UV thruster configuration. Next, the fault-tolerant trajectory tracking problem of the UV is described, with its restrictions explained. The grasshopper optimization-based fault-tolerant trajectory tracking control is then proposed, where a refined backstepping algorithm is applied to form the kinematic control component; a sliding mode control works as the dynamic control component; and the grasshopper optimization algorithm is used to form the fault-tolerant component by reconfiguring the thruster force outputs to eliminate the errors produced by different damage degrees of the thrusters in fault cases. 
In the last section, the effectiveness of the proposed grasshopper optimization-based FTC is evaluated by tracking desired polygonal line or helix trajectories, under the condition that one thruster (single-fault) or two thrusters (double-fault) are supposed to be damaged. ## II UV Models and Problem Statement In this section, a typical type of UV named "YuLong" is studied. Its robot models and fault-tolerant trajectory tracking problem descriptions are given in the form of specific equations. ### _Models of the "YuLong" UV_ II-A1 Kinematic and Dynamic Model: The "YuLong" UV is one of the latest UVs for dam detection, designed by the Underwater Engineering Institute of China Ship Scientific Research Center, whose diving depth reaches 3000 m. Its rough sketch is shown in Fig. 1 and its multi-thruster system structure is shown in Fig. 2. As a UV specially designed for detecting and maintaining a satisfactory condition of dams in deep-water areas, the robustness and efficiency of the vehicle's operation are crucial for dam detectors. Among the six degrees of freedom (DOF) of the UV (surge, sway, heave, roll, pitch and yaw), roll and pitch can be neglected because these two DOFs barely influence the underwater vehicle during practical navigation (Fig. 1). Therefore, when establishing the trajectory tracking model to keep a controllable operation of the UV, usually the four DOFs surge, sway, heave and yaw are involved. 
Based on the four involved DOFs, for the kinematic equation of the UV, the velocity vector \(\mathbf{v}\) can be transformed into the time derivative of the trajectory vector \(\dot{\mathbf{p}}\) as \[\dot{\mathbf{p}}=\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{z}\\ \dot{\psi}\end{bmatrix}=\mathbf{J}(\mathbf{p})\mathbf{v}=\begin{bmatrix}\cos \psi&-\sin\psi&0&0\\ \sin\psi&\cos\psi&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}u\\ v\\ w\\ r\end{bmatrix}, \tag{1}\] where \(\mathbf{J}\) is a transformation matrix derived from the physical structure of the UV body, and \([u\ v\ w\ r]^{T}\) represents the velocities along the chosen four axes of the UV (see Fig. 1). In an actual UV system, several complex and nonlinear forces such as hydrodynamic drag, damping, lift forces, Coriolis and centripetal forces, gravity and buoyancy forces, thruster forces, and environmental disturbances act on the vehicle. Considering the origins and effects of the forces, a general dynamic model can be written as \[\mathbf{M}\dot{\mathbf{v}}+\mathbf{C}(\mathbf{v})\mathbf{v}+\mathbf{D}(\mathbf{v} )\mathbf{v}+\mathbf{g}(\mathbf{p})=\boldsymbol{\tau}\,, \tag{2}\] where \(\mathbf{M}\) is the inertia matrix of the summation of rigid body and added mass; \(\mathbf{C}(\mathbf{v})\) is the Coriolis and centripetal matrix of the summation of rigid body and added mass; \(\mathbf{D}(\mathbf{v})\) is the quadratic and linear drag matrix; \(\mathbf{g}(\mathbf{p})\) is the matrix of gravity and buoyancy; and \(\boldsymbol{\tau}\) is the torque vector of the thruster inputs. As mentioned in the previous section, in this study only four states are considered for the specific model YuLong UV. 
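As a numerical illustration of the kinematic model in Eq. (1), the sketch below (Python with NumPy; the function name is ours) builds \(\mathbf{J}(\mathbf{p})\) and maps body-frame velocities to earth-frame rates:

```python
import numpy as np

def J(psi):
    """Transformation matrix J(p) of Eq. (1), mapping body-frame velocities
    [u, v, w, r] to earth-frame rates [x_dot, y_dot, z_dot, psi_dot]."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c,  -s,  0.0, 0.0],
                     [s,   c,  0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Pure surge at heading psi = 90 degrees moves the vehicle along the +y axis.
p_dot = J(np.pi / 2) @ np.array([1.0, 0.0, 0.0, 0.0])
```

Since \(\mathbf{J}\) is a rotation in the x-y plane padded with an identity block, it is orthogonal, so \(\mathbf{J}^{-1}=\mathbf{J}^{T}\), which is convenient when evaluating \(\mathbf{C}(\mathbf{v})\) below.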
The torque vector of the thruster input is represented by \[\boldsymbol{\tau}=\begin{bmatrix}\tau_{x}&\tau_{y}&\tau_{z}&\tau_{n}\end{bmatrix}^{T}, \tag{3}\] where \(x\), \(y\) and \(z\) represent the linear displacements of the UV at the surge, sway and heave directions, while \(n\) represents the angular displacement of the UV at the yaw direction (see Fig. 1). For the "YuLong" UV, the following parameter values are assigned: inertia matrix \(\mathbf{M}=\mathrm{diag}(42,\;153,\;141,\;100)\); Coriolis and centripetal matrix \(\mathbf{C}(\mathbf{v})=\mathbf{J}^{-1}\mathbf{M}\dot{\mathbf{J}}\mathbf{J}^{-1}\), where \(\mathbf{J}^{-1}\) represents the inverse matrix of \(\mathbf{J}\) and \(\dot{\mathbf{J}}\) represents the derivative of \(\mathbf{J}\); and quadratic and linear drag matrix \(\mathbf{D}(\mathbf{v})=\mathrm{diag}(42+69u,\;319+245v,\;272+86w,\;33+4r)\). In addition, the gravity force applied on the vehicle is balanced off by the buoyancy force when the whole system sustains an equilibrium status. #### II-A2 Torque-force Transition and Normalization According to the physical structure of the "YuLong" vehicle propulsion system (see Fig. 2), the relation between its torque vector and the forces of the thrusters is \[\boldsymbol{\tau}=\begin{bmatrix}\tau_{x}\\ \tau_{y}\\ \tau_{z}\\ \tau_{n}\end{bmatrix}=\begin{bmatrix}T_{1}+T_{2}\\ T_{7}+T_{8}\\ T_{3}+T_{4}+T_{5}+T_{6}\\ T_{1}+1.4\times T_{8}-T_{2}-1.4\times T_{7}\end{bmatrix}, \tag{4}\] where \(T_{1}\), \(T_{2}\), \(T_{3}\), \(T_{4}\), \(T_{5}\), \(T_{6}\), \(T_{7}\), \(T_{8}\) are the forces produced by the eight thrusters installed around the vehicle body. The eight thrusters are of the same type and are supposed to have the same maximum force \(T_{m}\). Therefore the maximum of the torque vector \(\boldsymbol{\tau_{m}}\) can be deduced based on Eq. 
(4) as \[\boldsymbol{\tau_{m}}=\begin{bmatrix}\tau_{xm}\\ \tau_{ym}\\ \tau_{zm}\\ \tau_{nm}\end{bmatrix}=\begin{bmatrix}2T_{m}\\ 2T_{m}\\ 4T_{m}\\ (2+2.8)T_{m}\end{bmatrix}. \tag{5}\] On the basis of Eq. (5), divide both sides of Eq. (4) by the maximum torques to restrict the output to a range of -1 to 1, and set \(\overline{\boldsymbol{\tau}}=\boldsymbol{\tau}/\boldsymbol{\tau_{m}}\), \(\overline{\mathbf{T}}=\mathbf{T}/T_{m}\); the vector is then transformed into \[\begin{bmatrix}\overline{\tau}_{x}\\ \overline{\tau}_{y}\\ \overline{\tau}_{z}\\ \overline{\tau}_{n}\end{bmatrix}=\begin{bmatrix}\tau_{x}/\tau_{xm}\\ \tau_{y}/\tau_{ym}\\ \tau_{z}/\tau_{zm}\\ \tau_{n}/\tau_{nm}\end{bmatrix}=\begin{bmatrix}\frac{1}{2}&\frac{1}{2}&0&0&0&0&0&0\\ 0&0&0&0&0&0&\frac{1}{2}&\frac{1}{2}\\ 0&0&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0&0\\ \frac{5}{24}&-\frac{5}{24}&0&0&0&0&-\frac{7}{24}&\frac{7}{24}\end{bmatrix}\begin{bmatrix}T_{1}/T_{m}\\ T_{2}/T_{m}\\ T_{3}/T_{m}\\ T_{4}/T_{m}\\ T_{5}/T_{m}\\ T_{6}/T_{m}\\ T_{7}/T_{m}\\ T_{8}/T_{m}\end{bmatrix}=\overline{\mathbf{B}}\begin{bmatrix}\overline{T}_{1}\\ \overline{T}_{2}\\ \overline{T}_{3}\\ \overline{T}_{4}\\ \overline{T}_{5}\\ \overline{T}_{6}\\ \overline{T}_{7}\\ \overline{T}_{8}\end{bmatrix}. \tag{6}\] The above equation has the compact form \[\overline{\boldsymbol{\tau}}=\overline{\mathbf{B}}\,\overline{\mathbf{T}},\qquad\overline{\mathbf{T}}=\overline{\mathbf{B}}^{-1}\,\overline{\boldsymbol{\tau}}\,, \tag{7}\] where \(\overline{\mathbf{B}}^{-1}\) is the generalized inverse matrix of \(\overline{\mathbf{B}}\). Therefore, the transition between thruster forces and torques is achieved and normalized. 
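Eq.s (6)-(7) can be checked numerically. The sketch below (assuming NumPy; variable names are ours) builds \(\overline{\mathbf{B}}\), maps a normalized force vector to normalized torques, and uses the Moore-Penrose pseudo-inverse as one realization of the generalized inverse in Eq. (7):

```python
import numpy as np

# Normalized thruster-configuration matrix B_bar from Eq. (6):
# rows are (x, y, z, yaw); columns are thrusters T1..T8.
B_bar = np.array([
    [1/2,   1/2,  0,   0,   0,   0,   0,     0   ],
    [0,     0,    0,   0,   0,   0,   1/2,   1/2 ],
    [0,     0,    1/4, 1/4, 1/4, 1/4, 0,     0   ],
    [5/24, -5/24, 0,   0,   0,   0,  -7/24,  7/24],
])

# Forward map: normalized torques from normalized forces (Eq. (6)),
# then recover a force allocation with the Moore-Penrose pseudo-inverse.
T_bar = np.full(8, 0.5)                 # all thrusters at half forward force
tau_bar = B_bar @ T_bar
T_recovered = np.linalg.pinv(B_bar) @ tau_bar
```

Because \(\overline{\mathbf{B}}\) has full row rank, the pseudo-inverse allocation reproduces the requested normalized torques exactly, while distributing the forces across the eight thrusters.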
All the torques and forces in Eq. (6) range from -1 to 1 (\(-1\leq\overline{\boldsymbol{\tau}}\leq 1\) and \(-1\leq\overline{\mathbf{T}}\leq 1\)) to give a direct and simplified representation during the tracking control process. Fig. 1: The reference frame and six degrees of freedom (x, y, z, k, m and n) of the "YuLong" UV (Top view). Fig. 2: The thruster distribution in the "YuLong" UV propulsion system. (a) The top view, (b) The lateral view. ### _Problem Statement_ In this subsection, the fault-tolerant control problem on the thruster system of the "YuLong" vehicle is modeled and explained. Requirements and constraints during the control process are introduced. #### II-B1 Fault-tolerant Problem in UV Trajectory Tracking To achieve accurate and efficient control of the UV, the fault cases of the thruster system should be taken into consideration, where one or more of the thrusters might break down and corresponding controls are proposed to sustain the movement and posture of the vehicle as desired, by deriving the normalized desired torques \(\overline{\boldsymbol{\tau}}_{d}\). The control of the torque outputs \(\overline{\boldsymbol{\tau}}\) is realized by the combined action of the thruster forces under fault conditions, which is allocated by approximation/optimization methods. The approximation/optimization deduces the optimal reallocation when the thrusters' working condition changes, and accomplishes the elimination of the torque errors during the process. 
For a UV with a multi-thruster system, the ideal fault-tolerant control is realized by \[||e||=||\overline{\boldsymbol{\tau}}_{d}-\overline{\boldsymbol{\tau}}||\;\rightarrow\;0\,, \tag{8}\] \[||\theta_{e}||=\arccos\frac{\overline{\boldsymbol{\tau}}_{d}\bullet\overline{\boldsymbol{\tau}}}{||\overline{\boldsymbol{\tau}}_{d}||\cdot||\overline{\boldsymbol{\tau}}||}\;\rightarrow\;0\,, \tag{9}\] where \(||e||\) is the magnitude error, obtained as the vector norm of the difference between the desired and the actual torques of the vehicle, and \(||\theta_{e}||\) is the direction error, computed as the arc cosine of the dot product of the desired and actual torques divided by the product of their vector norms. Besides the magnitude error, the direction error is also important to the trajectory tracking control of the underwater vehicle, as the vehicle's movement is also determined by the directional displacement of its dynamic outputs (torques) along the axes. It is possible for the vehicle to have an acceptable magnitude error but meanwhile incur an excessive direction error, thus inducing non-ideal tracking results. Hence, when designing the fault-tolerant control, the error values should be as small as possible to obtain good control results. #### II-B2 Constraints on Fault-tolerant Control of the UV In actual applications, the control effect is always restricted by the physical constraints of the vehicle. The UV cannot provide infinite driving inputs such as torques/forces to complete the navigation, which results in the problem of actuator saturation. Therefore driving restrictions are always applied on the UV to achieve a reliable and controllable navigation process. The maximum forces of the vehicle thrusters, derived from the maximum torques that can be offered by the vehicle body, are the essential constraints of the vehicle's control problem. 
To assess the influence of the constraints, maximum torques \(\boldsymbol{\tau_{m}}\) are introduced in the simulation part of this paper (see Fig. 3). By the definition given in Eq. (6), the normalized torques \(\overline{\boldsymbol{\tau}}\) and normalized forces \(\overline{\mathbf{T}}\) are supposed to have limits of -1 to 1. The two variables are used to quantify the effect of the constraints during the trajectory tracking process in the simulation part. ## III GOA-based Fault-tolerant Trajectory Tracking Control (GFTC) Design The basic control architecture of the system is illustrated in Fig. 3. The design of the control strategy consists of two parts: (1) a trajectory tracking component formed by an outer loop of auxiliary kinematic control based on the position errors of the UV and an inner loop of dynamic torque control based on the velocity state vector; (2) a fault-tolerant control component that identifies the fault cases of the UV propulsion system and reallocates the thruster force outputs based on the grasshopper optimization algorithm to eliminate the effects brought by the fault cases. Additionally, maximum thruster force outputs \(\mathbf{T}_{m}\) are applied (see the bold frame in Fig. 3). Specifically, for the simulation of underwater environmental perturbations, current disturbance on the torque outputs is considered. Moreover, the environmental noise at the step of forming vehicle positions is also involved. Details of the control design are presented in this section. ### _Component of Trajectory Tracking Control_ In this section, the methods that form the kinematic and dynamic control of the UV are explained, with detailed equations and stability analysis given. #### III-A1 Error Restricted Backstepping Control Suppose the maximum velocity vector of the UV is \(\mathbf{v_{m}}=[v_{xm}\;\;v_{ym}\;\;v_{zm}\;\;v_{\psi m}]^{T}\), and the input \(\mathbf{e}(t)\) is the error between the desired and actual trajectories. 
A transfer function is defined to process the input trajectory errors within an acceptable range, \[\mathbf{v}_{e}=f(\mathbf{e})\mu\mathbf{v}_{m}=\frac{\mathbf{e}(t)}{|\mathbf{e}(t)|+1}\mu\mathbf{v}_{m}\,, \tag{10}\] where \(\mathbf{v_{m}}\) represents the maximum velocities of the UV DOFs at their corresponding axes; \(\mu\) is a positive constant chosen according to the variation requirement and is assigned the value 0.5 in this study. Therefore, when \(\mathbf{e}(t)\to 0\), \(\mathbf{v_{e}}=f(\mathbf{e})\mu\mathbf{v_{m}}\to 0\); and when \(\mathbf{e}(t)\rightarrow\infty\), \(\mathbf{v_{e}}=f(\mathbf{e})\mu\mathbf{v_{m}}\rightarrow\) the restricted maximum velocity \(\mu\mathbf{v_{m}}\). Based on this restriction, the transfer function limits the control output to a smaller domain and meanwhile provides faster convergence of the input errors. The processed outputs \(\mathbf{v_{e}}\) after the convergence shall not exceed the maximum possible value of the UV velocity, and the definition of \(f(\mathbf{e})\) provides a smooth transition of the UV at the beginning. Hence the speed-jump problem of the UV can be alleviated. The vector \(\mathbf{v_{e}}\) at the four DOFs can be written as \(\mathbf{v_{e}}=[v_{ex}\;\;v_{ey}\;\;v_{ez}\;\;v_{en}]^{T}\). Additionally, in the backstepping method, control functions for each subsystem are designed based on Lyapunov techniques and combined to form the complete control law [42]. Therefore, based on Eq. (1) and the definition of the backstepping method, the error variables in the control law of the backstepping method are replaced by the restricted outputs processed in Eq. 
(10), and the control law of the backstepping control can be derived as \[\mathbf{v_{c}}=\begin{bmatrix}u_{c}\\ v_{c}\\ w_{c}\\ r_{c}\end{bmatrix}=\begin{bmatrix}k(v_{ex}\cos\psi+v_{ey}\sin\psi)+u_{d}\cos v_{e\psi}-v_{d}\sin v_{e\psi}\\ k(-v_{ex}\sin\psi+v_{ey}\cos\psi)+u_{d}\sin v_{e\psi}-v_{d}\cos v_{e\psi}\\ w_{d}+k_{z}v_{ez}\\ r_{d}+k_{\psi}v_{e\psi}\end{bmatrix}, \tag{11}\] where \(k\), \(k_{z}\) and \(k_{\psi}\) are positive constants. Then the processed control velocities \(\mathbf{v_{c}}\) are passed to the UV, where they are calculated to keep pace with the desired trajectory through the dynamic model of the UV. Additionally, the stability of the refined backstepping control can be proved by constructing a Lyapunov function \(\Gamma_{0}=\frac{1}{2}(e_{x}^{2}+e_{y}^{2}+e_{z}^{2}+e_{\psi}^{2})\), whose derivative is less than or equal to zero (see Appendix A). #### III-A2 Sliding Mode Control To design the sliding mode control, the desired dynamics \(\mathbf{s}\) should be introduced. Based on Eq. (2), where the UV dynamic system is of the second order for the velocity \(\mathbf{v}\), the dynamics can be designed as \[\mathbf{s}=\left[\frac{d}{dt}+\lambda\right]^{2}\,\int\mathbf{e_{v}}dt=\dot{\mathbf{e}}_{\mathbf{v}}+2\lambda\mathbf{e_{v}}+\lambda^{2}\int\mathbf{e_{v}}dt, \tag{12}\] where \(\frac{d}{dt}\) is the derivative operator; \(\mathbf{e_{v}}\) represents the errors given by the control velocities (see Fig. 3), \(\mathbf{e_{v}}=\mathbf{v_{c}}-\mathbf{v}\); and \(\lambda>0\) is a positive parameter [43]. Taking the derivative of \(\mathbf{s}\), we get \[\dot{\mathbf{s}}=\ddot{\mathbf{e}}_{\mathbf{v}}+2\lambda\dot{\mathbf{e}}_{\mathbf{v}}+\lambda^{2}\mathbf{e_{v}}\,, \tag{13}\] where \(\dot{\mathbf{e}}_{\mathbf{v}}=\dot{\mathbf{v}}_{\mathbf{c}}-\dot{\mathbf{v}}\). To keep the system states consistent with the desired dynamics, Eq. (13) should be equal to zero. This means the system states are on the sliding surface of perfect tracking. 
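The error-restricting transfer function of Eq. (10) can be sketched as follows (illustrative \(\mathbf{v_m}\) values, not the tuned values used in the simulations; \(\mu=0.5\) as in the text):

```python
import numpy as np

def restricted_velocity(e, v_max, mu=0.5):
    """Error-restricted velocity v_e = e/(|e|+1) * mu * v_max (Eq. (10)),
    applied elementwise to the four-DOF trajectory error vector."""
    return e / (np.abs(e) + 1.0) * mu * v_max

v_max = np.array([2.0, 2.0, 1.0, 0.5])   # illustrative maximum velocities
small = restricted_velocity(np.array([0.01, 0.0, 0.0, 0.0]), v_max)
large = restricted_velocity(np.array([1e6, 0.0, 0.0, 0.0]), v_max)
```

A tiny error yields a nearly proportional (smooth) velocity command, while an arbitrarily large error saturates at \(\mu\mathbf{v_m}\), which is exactly the speed-jump alleviation described above.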
At the same time, plugging in the equation of the UV dynamic model (Eq. (2)), \[\dot{\mathbf{s}}=\ddot{\mathbf{e}}_{\mathbf{v}}+2\lambda\dot{\mathbf{e}}_{\mathbf{v}}+\lambda^{2}\mathbf{e_{v}}=0\] \[\ddot{\mathbf{e}}_{\mathbf{v}}+2\lambda(\dot{\mathbf{v}}_{\mathbf{c}}-\dot{\mathbf{v}})+\lambda^{2}\mathbf{e_{v}}=0\] \[\ddot{\mathbf{e}}_{\mathbf{v}}+2\lambda(\dot{\mathbf{v}}_{\mathbf{c}}-\mathbf{M}^{-1}(\boldsymbol{\tau}-\mathbf{C}\mathbf{v}-\mathbf{D}\mathbf{v}-\mathbf{g}))+\lambda^{2}\mathbf{e_{v}}=0\] \[\boldsymbol{\tau}=\mathbf{M}(\dot{\mathbf{v}}_{\mathbf{c}}+\frac{\ddot{\mathbf{e}}_{\mathbf{v}}}{2\lambda}+\frac{\lambda}{2}\mathbf{e_{v}})+\mathbf{C}\mathbf{v}+\mathbf{D}\mathbf{v}+\mathbf{g}\,. \tag{14}\] The standard sliding mode control law is defined as \[\boldsymbol{\tau}=\hat{\boldsymbol{\tau}}+\boldsymbol{\tau_{c}}\,, \tag{15}\] where \(\hat{\boldsymbol{\tau}}\) represents the major control law, which is continuous and model-based. It is designed to maintain the trajectory consistently on the sliding surface. \(\boldsymbol{\tau_{c}}\) represents the switching control law, dealing with the model uncertainty. When the trajectory is getting out of control, \(\boldsymbol{\tau_{c}}\) is used to push the trajectory back to the sliding surface and continue satisfactory tracking. For Eq. (14), supposing the simplification \(\ddot{\mathbf{e}}_{\mathbf{v}}\approx-k\dot{\mathbf{e}}_{\mathbf{v}}\) based on error acceleration feedback control to reduce computational complexity, the estimated major control law \(\hat{\boldsymbol{\tau}}\) can be deduced as \[\hat{\boldsymbol{\tau}}=\hat{\mathbf{M}}(\dot{\mathbf{v}}_{\mathbf{c}}+\frac{-k\dot{\mathbf{e}}_{\mathbf{v}}}{2\lambda}+\frac{\lambda}{2}\mathbf{e_{v}})+\hat{\mathbf{C}}\mathbf{v}+\hat{\mathbf{D}}\mathbf{v}+\hat{\mathbf{g}}\,, \tag{16}\] where \(\hat{\mathbf{M}}\), \(\hat{\mathbf{C}}\), \(\hat{\mathbf{D}}\), \(\hat{\mathbf{g}}\) are the estimated values of \(\mathbf{M}\), \(\mathbf{C}\), \(\mathbf{D}\) and \(\mathbf{g}\); approximate values can be obtained from the practical case respectively [44]. 
The switching item \(\boldsymbol{\tau_{c}}\) in sliding mode control can be defined as \[\boldsymbol{\tau_{c}}=-\mathbf{K_{1}}\mathbf{s}-\mathbf{K_{2}}|\mathbf{s}|^{r}\mathrm{sign}(\mathbf{s})\,, \tag{17}\] where \(\mathrm{sign}(\mathbf{s})\) is the nonlinear sign function of \(\mathbf{s}\); \(\mathbf{K_{1}}\) and \(\mathbf{K_{2}}\) are positive coefficients, \(\mathbf{K_{1}}\geq\eta+F\) and \(\mathbf{K_{2}}\geq\eta+F\), where \(\eta\) is a design parameter that is always chosen as a positive constant; \(0<r<1\); and \(F\) represents the upper bound of the difference between the system's actual output and the estimation, \[F=|f(\mathbf{v})-\hat{f}(\mathbf{v})|. \tag{18}\] Additionally, an adaptive variation term \(\widetilde{\boldsymbol{\tau}}_{est}\) is added to the control law, where \(\widetilde{\boldsymbol{\tau}}_{est}=\Gamma\mathbf{s}\) and \(\Gamma\) represents a positive constant. Fig. 3: Schematic of the proposed fault-tolerant trajectory tracking control designed for the UV. Hence the final sliding mode control law is defined as \[\boldsymbol{\tau}=\hat{\boldsymbol{\tau}}+\widetilde{\boldsymbol{\tau}}_{est}+\boldsymbol{\tau_{c}}=\hat{\mathbf{M}}(\dot{\mathbf{v}}_{\mathbf{c}}+\frac{-k\dot{\mathbf{e}}_{\mathbf{v}}}{2\lambda}+\frac{\lambda}{2}\mathbf{e_{v}})+\hat{\mathbf{C}}\mathbf{v}+\hat{\mathbf{D}}\mathbf{v}+\hat{\mathbf{g}}+\widetilde{\boldsymbol{\tau}}_{est}-\mathbf{K_{1}}\mathbf{s}-\mathbf{K_{2}}|\mathbf{s}|^{r}\mathrm{sign}(\mathbf{s})\,. \tag{19}\] Detailed proof of the SMC stability can be found in Appendix B. ### _Component of Fault-tolerant Control_ The fault-tolerant control design is mainly built on the adjustment of the forces required by the thruster system, where the forces operate together and provide the torques as desired. The adjustment of the thruster forces is deduced by the grasshopper optimization algorithm (GOA), which efficiently eliminates the errors brought by faults of the propulsion system after fault identification. Details of the control strategy are presented in this section. 
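The switching term of Eq. (17) can be sketched elementwise as follows (the gains \(K_1\), \(K_2\) and exponent \(r\) here are illustrative, not the tuned values used in the simulations):

```python
import numpy as np

def switching_term(s, K1=2.0, K2=1.0, r=0.5):
    """Switching control tau_c = -K1*s - K2*|s|^r * sign(s) (Eq. (17)),
    applied elementwise to the sliding variable s. Gains are illustrative."""
    return -K1 * s - K2 * np.abs(s) ** r * np.sign(s)

tau_c = switching_term(np.array([0.25, -0.25, 0.0, 4.0]))
```

The \(|s|^{r}\mathrm{sign}(s)\) term with \(0<r<1\) applies a proportionally stronger correction near the sliding surface than a plain linear term, which is what drives the trajectory back when it starts to drift.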
#### III-B1 Weighting Matrix To quantify the degree of damage in the fault cases for the multi-thruster system, a weighting matrix \(\mathbf{W}\) is introduced. The matrix \(\mathbf{W}\) describes the service condition of the thrusters and is usually defined as a diagonal matrix, \[\mathbf{W}=\begin{bmatrix}w_{1}&0&0&0&0&0&0&0\\ 0&w_{2}&0&0&0&0&0&0\\ 0&0&w_{3}&0&0&0&0&0\\ 0&0&0&w_{4}&0&0&0&0\\ 0&0&0&0&w_{5}&0&0&0\\ 0&0&0&0&0&w_{6}&0&0\\ 0&0&0&0&0&0&w_{7}&0\\ 0&0&0&0&0&0&0&w_{8}\end{bmatrix} \tag{20}\] where \(w_{j}>0\) is the weight of the \(j^{th}\) thruster. If all the thrusters are working in the desired condition with no power loss, \(\mathbf{W}\) will be a unit matrix, meaning all \(w_{j}=1\). If there is power loss for any of the thrusters, its corresponding weight will be reduced by the degree of the loss. For example, when the \(T_{1}\) thruster suffers 20% of power loss, \(w_{1}\) in the weighting matrix is assigned the value 0.8. As the relation between the thruster forces and the vehicle torques at different states is defined in Eq. (7), the following transition between the torques and forces in the fault cases is defined as \[\overline{\boldsymbol{\tau}}=\overline{\mathbf{B}}\mathbf{W}\overline{\mathbf{T}}, \tag{21}\] where \(\overline{\mathbf{T}}\) is the vector of control parameters in the UV case, which is deduced by the optimization method, such as the GOA method used in this study. Additionally, as a comparison to the GOA method, the weighted pseudo-inverse matrix method is used, which is determined based on the defined weighting matrix, \[\overline{\mathbf{T}}=\overline{\mathbf{B}}_{w}^{+}\overline{\boldsymbol{\tau}}_{d}=(\mathbf{W}\overline{\mathbf{B}}^{T}(\overline{\mathbf{B}}\mathbf{W}\overline{\mathbf{B}}^{T})^{-1})\overline{\boldsymbol{\tau}}_{d}, \tag{22}\] where \(\overline{\mathbf{B}}_{w}^{+}\) is the matrix that transmits the damage information to the propulsion system and meanwhile makes the adjustment accordingly. Thus, the thruster force results under fault cases can be deduced. 
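Eq. (22) can be checked numerically. The sketch below (assuming NumPy; names and the desired-torque values are ours) forms the weighted pseudo-inverse allocation for a 30% power loss on \(T_1\) and verifies that the allocated forces reproduce the desired torques:

```python
import numpy as np

# Normalized thruster-configuration matrix B_bar from Eq. (6).
B_bar = np.array([
    [1/2,   1/2,  0,   0,   0,   0,   0,     0   ],
    [0,     0,    0,   0,   0,   0,   1/2,   1/2 ],
    [0,     0,    1/4, 1/4, 1/4, 1/4, 0,     0   ],
    [5/24, -5/24, 0,   0,   0,   0,  -7/24,  7/24],
])

def weighted_allocation(tau_d, w):
    """Weighted pseudo-inverse allocation of Eq. (22):
    T_bar = W B^T (B W B^T)^(-1) tau_d, with W = diag(w)."""
    W = np.diag(w)
    return W @ B_bar.T @ np.linalg.inv(B_bar @ W @ B_bar.T) @ tau_d

w = np.ones(8)
w[0] = 0.7                               # T1 suffers a 30% power loss
tau_d = np.array([0.8, 0.0, 0.4, 0.0])   # illustrative desired torques
T_raw = weighted_allocation(tau_d, w)
T_bar = np.clip(T_raw, -1.0, 1.0)        # T-approximation-style saturation
```

Because \(\overline{\mathbf{B}}\) has full row rank, the unclipped allocation satisfies \(\overline{\mathbf{B}}\,\overline{\mathbf{T}}=\overline{\boldsymbol{\tau}}_d\) exactly; the clipping step shows where saturation errors enter once the thruster limits of [-1, 1] bind.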
For example, if the thruster \(T_{1}\) can only provide 70% of its power after encountering a power loss of 30%, the weighted pseudo-inverse matrix method will request a larger output (\(\frac{1}{0.7}\times\) the original force) from \(T_{1}\) such that the same force can be achieved after being weakened by the 30% power loss. Then T-approximation or S-approximation methods are applied to achieve force results within the range of the thruster force maximum, which is generally denoted as pseudo-inverse (P-I) matrix approximation. T-approximation restricts all normalized forces \(\overline{\mathbf{T}}\) to [-1, 1] by subtracting/adding the excessive part of the states whose value is larger than 1 or smaller than -1, where \[\overline{T}_{t}=\left\{\begin{array}{ll}\overline{T}_{i},&\overline{T}_{i}\in[-1,1]\\ 1,&\overline{T}_{i}>1\\ -1,&\overline{T}_{i}<-1\end{array}\right.\] S-approximation realizes the limits of [-1, 1] by multiplying all states by the reciprocal of the largest normalized force, where \[\overline{\mathbf{T}}_{s}=\frac{1}{\max_{i}(|\overline{T}_{i}|)}\overline{\mathbf{T}},\;i=1,2,...,8.\] For example, in S-approximation, if the largest normalized force for one of the states reaches 2, all normalized forces will be multiplied by the ratio \(\frac{1}{2}\) to guarantee they do not exceed the limits of -1 to 1. In the simulation section of this study, the T-approximation method is used as the typical pseudo-inverse (P-I) matrix approximation to serve as a comparison for the proposed GOA-based FTC. The T-approximation has wider application in practical cases of underwater vehicle FTC because it generally produces smaller errors compared to the S-approximation [34, 45]. #### III-B2 Grasshopper Optimization Algorithm The grasshopper optimization algorithm was proposed in 2017 [36]. 
As a developed algorithm based on the theory of swarm intelligence that imitates the activity of grasshoppers, GOA shows better performance than the traditional swarm intelligence algorithms because it finds a satisfactory balance between fast convergence and accuracy based on its form switch between "adults" and "larvae". The fast convergence is realized when GOA searches globally based on the position of each agent in its "adult" form, which explores on a large scale in an attractive manner among the agents; the accuracy is achieved by shrinking the range and keeping a repulsive zone around the best agent in the "larvae" form, which avoids local minima. According to the movement of the grasshopper groups, a mathematical model can be defined to describe their swarming behavior [46, 47] \[X_{i}=\sum_{j=1,j\neq i}^{N}s(|x_{j}-x_{i}|)\frac{x_{j}-x_{i}}{d_{ij}}-G_{i}+A_{i}, \tag{23}\] where \(X_{i}\) represents the next position of the \(i_{\mathrm{th}}\) grasshopper; \(s(r)\) is the social interaction function, here taken as \(s(r)=0.5e^{-r/1.5}-e^{-r}\). The item \(|x_{j}-x_{i}|\) is the distance between the current positions of the \(i_{\mathrm{th}}\) and \(j_{\mathrm{th}}\) grasshoppers, and \((x_{j}-x_{i})/d_{ij}\) is the unit vector pointing from the position of the \(i_{\rm th}\) grasshopper to the \(j_{\rm th}\) grasshopper. \(G_{i}\) represents the gravity force on the \(i_{\rm th}\) grasshopper; \(A_{i}\) is the wind advection, which is assumed to be always towards the target \(T_{i}\). Based on the assumptions made in this control case, where the gravity force is neglected and the wind force is always towards the target, Eq. 
(23) can be converted into \[X_{i}^{d}=c(\sum_{j=1,j\neq i}^{N}c\frac{ub_{d}-lb_{d}}{2}s(|x_{j}-x_{i}|)\frac{x_{j}-x_{i}}{d_{ij}})+T_{d}, \tag{24}\] where \(c\) is a decreasing coefficient that shrinks the comfort zone, repulsion zone and attraction zone, determined as \(c=c_{\rm max}-l(c_{\rm max}-c_{\rm min})/L\), where \(c_{\rm max}\) is the maximum value, \(c_{\rm min}\) is the minimum value, \(l\) indicates the current iteration, and \(L\) is the maximum number of iterations. In this work, we assign \(c_{\rm max}=1\) and \(c_{\rm min}=0.00001\) by trial and error. The variable \(ub_{d}\) represents the upper bound of the case while \(lb_{d}\) represents the lower bound, which are 1 and -1 in this design. \(T_{d}\) is the best solution of the current iteration. These parameters are used to attain the fast convergence of the optimization, by increasing the speed of updating the local solution as the iteration count grows, thus leading to the efficient searching result of the GOA method [48]. The pseudocode of the GOA applied to the UV thruster force reallocation can be summarized as follows (see Algorithm 1), with the fitness evaluation substituted by the error evaluation given in Eq.s (8) and (9). ``` Input:\(\overline{\boldsymbol{\tau}}_{d}\): desired normalized torque matrix; \(\mathbf{W}\): weighting matrix (fault identification) Output:\(\overline{\mathbf{T}}\): a vector containing the optimal allocation of the normalized forces for the eight thrusters Initialize the swarm \(X_{i}\) (\(i=1,2,\ldots,10\)) Initialize \(c_{\rm max}\), \(c_{\rm min}\), and the maximum number of iterations \(L\) Calculate the fitness of each search agent to address the best search agent \(\overline{\mathbf{T}}\) with the minimum errors (Eq.s (8) and (9)) while(\(l<\!L\)) Update \(c=c_{\rm max}-l(c_{\rm max}-c_{\rm min})/L\) for each search agent Update the value of the current search agent by Eq. 
(24) Clip the current search agent if it exceeds the limits of [-1, 1] endfor Update \(\overline{\mathbf{T}}\) if there is a better solution \(l\) = \(l\) + 1 endwhile Return \(\overline{\mathbf{T}}\) ``` **Algorithm 1**GOA algorithm embedded in the fault-tolerant component The flow of the proposed GOA method can be summarized as follows: 1) First initialize the swarm, with 10 groups of eight random numbers between -1 and 1 representing the normalized eight-thruster group, where each group is regarded as a search agent; 2) Next calculate the fitness of each search agent and address the agent with the minimum errors as the best, based on the objective function combining Eq.s (8) and (9), which is \(||e||+||\theta_{e}||\,\rightarrow\,0\); the constraints for each agent are set between [-1, 1]; 3) Update the parameter c according to the iteration count to accelerate the convergence. If the iteration count reaches the maximum then stop, otherwise continue the update of c; 4) Update the positions (values) of the search agents based on Eq. (24) and compare the fitness of the updated agents with the agent of the best fitness. If the updated fitness turns out to be better, update the position of the best agent, otherwise do not update. 5) Update the iteration count, and repeat the loop from step 3). ### _Component of Perturbations_ As the "YuLong" UV model is designed for dam detection, which usually operates in shoreside underwater conditions, the perturbation of currents can be considered in a regular combination form of wave functions as [49, 50, 51] \[\tau_{p}=A_{p1}\cos(\omega_{p1}t)\sin(\omega_{p2}t)+A_{p2}\cos\left(\omega_{p3}t\right)\sin\left(\omega_{p4}t\right), \tag{25}\] where \(A_{p1}\), \(A_{p2}\) and \(\omega_{p1}\) to \(\omega_{p4}\) are random coefficients, chosen to synthesize the randomness of the currents so as to appropriately address the underwater environmental perturbation. 
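Returning to Algorithm 1, a minimal, self-contained sketch of the GOA loop of Eq. (24) is given below (assuming NumPy; names are ours, a toy distance objective stands in for \(||e||+||\theta_e||\), and the zero vector is seeded into the swarm so that the returned solution is guaranteed to be at least as good as that seed):

```python
import numpy as np

rng = np.random.default_rng(0)

def s(r):
    """Social interaction function s(r) = 0.5*exp(-r/1.5) - exp(-r), Eq. (23)."""
    return 0.5 * np.exp(-r / 1.5) - np.exp(-r)

def goa_minimize(objective, X0, iters=100, lb=-1.0, ub=1.0,
                 c_max=1.0, c_min=0.00001):
    """Minimal GOA loop after Eq. (24): each agent moves under pairwise social
    forces scaled by the shrinking coefficient c, pulled toward the best
    solution found so far. X0 is the initial swarm (agents x dimensions)."""
    X = np.clip(np.asarray(X0, dtype=float), lb, ub)
    n, dim = X.shape
    fitness = np.array([objective(x) for x in X])
    best, best_fit = X[np.argmin(fitness)].copy(), fitness.min()
    for l in range(1, iters + 1):
        c = c_max - l * (c_max - c_min) / iters     # shrinking coefficient
        X_new = np.empty_like(X)
        for i in range(n):
            social = np.zeros(dim)
            for j in range(n):
                if i == j:
                    continue
                diff = X[j] - X[i]
                dist = np.abs(diff) + 1e-12         # per-dimension distance
                social += c * (ub - lb) / 2 * s(dist) * diff / dist
            X_new[i] = np.clip(c * social + best, lb, ub)
        X = X_new
        for x in X:                                 # keep the best agent only
            f = objective(x)
            if f < best_fit:
                best_fit, best = f, x.copy()
    return best, best_fit

# Toy reallocation check: steer a 4-vector toward a feasible target in [-1, 1]^4.
target = np.array([0.3, -0.5, 0.1, 0.7])
objective = lambda x: np.linalg.norm(x - target)
X0 = np.vstack([np.zeros(4), rng.uniform(-1, 1, size=(9, 4))])
best, best_fit = goa_minimize(objective, X0)
```

The outer \(c\) anneals the agents from broad exploration toward the incumbent best, mirroring the "adult"/"larvae" form switch described above; in the FTC setting the agents would be eight-dimensional normalized force vectors and the objective the combined error of Eq.s (8)-(9).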
\(A_{p1}\) and \(A_{p2}\) are assigned within the range of \(10\%\) of the torque outputs at the four axes; for example, if the \(\tau_{x}\) output is about \(100\,N\), the assignment range will be \([-10,10]\). Coefficients \(\omega_{p1}\) to \(\omega_{p4}\) are chosen within the range of -1 to 1. In addition, considering the effect of environmental noise that perturbs the data transfer at the stage of forming positions, a random error input is given in the simulation. The random error is supposed to be within [-0.1, 0.1] and filtered by sensors, which corresponds to the practical UV case. ## IV Simulation Results and Analysis In this section, the polygonal line and helix trajectory tracking simulation results of the proposed FTC and the conventional approximation methods are presented and analyzed. Fault cases of single-fault (one thruster broken) and double-fault (two thrusters broken) are applied due to their frequent occurrence. ### _Helix Tracking_ In this section, one of the thrusters, \(T_{1}\), is supposed to be broken, with 100% of its power lost. The initial position of the desired helix trajectory is set at \((0,0,0,0)\), while the initial position of the control trajectories is set at \((0,10,0,0)\). The difference in the initial positions is given to test the correction ability of the two tracking strategies when they start with a certain amount of deviation at one of the axes, i.e. the y axis. The desired trajectory is given as \(x_{d}=10\sin 0.2t\), \(y_{d}=10-10\cos 0.2t\), \(z_{d}=0.5t\) and \(\psi_{d}=0.2t\), with the simulation continuing for 50 seconds. The GFTC (in red dash) tracking result quickly eliminates the initial error and follows the desired helix trajectory till the end of the simulation, even though the \(T_{1}\) thruster is supposed to be completely broken (Fig. 4(a)). 
The P-I approximation (in blue) fails to coincide with the desired helix under the single-fault case: abrupt deviations appear at the beginning and trajectory distortions are produced throughout the whole process. Therefore, when the dynamic constraints are considered, the P-I method fails to compensate for the power loss of a single thruster, which induces increasing errors and excessive velocities with abrupt jumps, as shown in Figs. 4(b) and (c); the GFTC, by contrast, achieves smooth error and control velocity curves that indicate the satisfactory tracking performance of the method. Moreover, the "GFTC-P" (in pink) results consider the effect of perturbations brought by the currents and environmental noise when forming the position information for the vehicle. The GFTC-P result under the effect of perturbations sustains a tracking trajectory similar to the unaffected GFTC, which verifies the robustness of the proposed control. Tracking errors of the GFTC method (in red), the P-I approximation-based FTC (in blue), the single-fault case without FTC (in grey) and GFTC-P with the effect of perturbations (in pink) are given in Fig. 4(b). The error curve of the GFTC method eliminates the initial deviation and quickly converges to zero. The error of the P-I approximation-based FTC, however, presents obvious fluctuations and cannot be eliminated in all axes; furthermore, the P-I error curve shows even larger fluctuations than the case without FTC at the \(\psi\) axis, which supports the trajectory tracking performance given in Fig. 4(a). This shows the failure of the P-I approximation FTC in keeping the desired helix tracking under the single-fault condition when dynamic constraints are applied, whereas the GFTC method accomplishes the fault-tolerant trajectory tracking task with satisfactory kinematic outputs. This conclusion is also supported by the velocity variations in Fig. 4(c).
The P-I method demands largely excessive speeds at the x and y axes, where maxima of 4.9719 m/s and -25 m/s are required, but the GFTC method satisfactorily restricts the velocity at the x and y axes within the constraints of [-2, 2] m/s, with maximum outputs of 2.0008 m/s and -1.8978 m/s (TABLE I). The GFTC also achieves much smoother velocity curves than the P-I method for all axes. Even when the perturbation effect is considered in the GFTC-P simulation, though small chattering appears, the errors as well as the control velocities are successfully restricted to an acceptable range, with maxima of 2.015 m/s and -1.8944 m/s at the x and y axes, and far less control velocity required at the \(\psi\) axis. Hence the effectiveness of the proposed GFTC method in tracking the desired trajectory under the single-fault case is verified even when external perturbations are applied.

### _3D Polygonal Line Tracking_

A 3D polygonal line is applied in this section as the reference tracking trajectory, as the "YuLong" UV usually navigates in a movement similar to a polygonal line when detecting dam damage. The initial position of the desired trajectory is set at \((0,0,0,0)\), while the initial position of the control trajectories is set at \((0,2.5,0,0)\). A specific polygonal line function is applied and the simulation continues for 20 seconds: \[x_{d} =t,0\leq t\leq 20,\] \[y_{d} =\left\{\begin{array}{ll}t,&0\leq t\leq 5\\ 5,&5<t\leq 10\\ t-5,&10<t\leq 15\\ 10,&15<t\leq 20\\ \end{array}\right.\] \[z_{d} =\left\{\begin{array}{ll}t,&0\leq t\leq 5\\ 5,&5<t\leq 10\\ t-5,&10<t\leq 15\\ 10,&15<t\leq 20\\ \end{array}\right.\] \[\psi_{d} =0.2,0\leq t\leq 20.\]

#### IV-B1 Single-fault Case

One of the thrusters, \(T_{8}\), is assumed to be broken, with 100% of its power lost. The tracking trajectory results are shown in Fig. 5(a).
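For reference, the piecewise polygonal trajectory defined above can be computed with a small helper; the function name `desired_state` is ours, and it reproduces the piecewise definition as written, including the identical \(y_{d}\)/\(z_{d}\) profiles and the constant \(\psi_{d}=0.2\):

```python
def desired_state(t):
    """Piecewise polygonal reference (x_d, y_d, z_d, psi_d) for 0 <= t <= 20,
    following the definition in the text (y_d and z_d share the same
    ramp/hold profile)."""
    assert 0.0 <= t <= 20.0
    x = t
    if t <= 5:
        y = t
    elif t <= 10:
        y = 5.0
    elif t <= 15:
        y = t - 5.0
    else:
        y = 10.0
    z = y  # identical piecewise definition in the text
    return x, y, z, 0.2

assert desired_state(7.0) == (7.0, 5.0, 5.0, 0.2)
assert desired_state(12.0) == (12.0, 7.0, 7.0, 0.2)
```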
The GFTC (in red dash) retains the polygonal line trajectory as desired after eliminating the initial error at the y axis, despite the power loss of the thruster. Moreover, the GFTC-P (in pink) results, which consider the effect of perturbations, sustain a tracking trajectory similar to the unaffected condition, which verifies the robustness of the proposed tracking control.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(u_{c}\) (m/s) & \(v_{c}\) (m/s) & \(w_{c}\) (m/s) & \(r_{c}\) (m/s) \\ \hline GFTC & 2.0008 & -1.8978 & 0.5002 & 0.2016 \\ \hline P-I & 4.9719 & -25 & 0.6125 & 1.0034 \\ \hline GFTC-P & 2.0150 & -1.8944 & 0.6404 & 0.2836 \\ \hline \end{tabular} \end{table} TABLE I: Maximum velocities of the GFTC with or without perturbations and the P-I based FTC under the single-fault case when tracking the helix

Fig. 4: Helix tracking results using the GFTC with or without perturbations and the P-I approximation-based FTC under the single-fault case. (a) Comparison of trajectories, (b) Comparison of tracking errors, (c) Comparison of control velocities.

The P-I method (in blue) fails to catch up with the desired trajectory, especially at the turning point where larger dynamic inputs are needed. The P-I method cannot make up for the loss of propulsive force when physical constraints (torque/force maxima) are involved, thus producing errors with large fluctuations as well as excessive control velocities, as presented in Figs. 5(b) and (c). The errors at the four axes are presented in Fig. 5(b), where the GFTC (in red) successfully eliminates the tracking errors. The P-I approximation-based FTC (in blue), by contrast, fails to eliminate the error once a sharp turn is required by the trajectory, as the excessive dynamic outputs demanded by the P-I method cannot be satisfied when the dynamic constraints are applied, thus inducing the large trajectory deviation at the turning section. Velocity variations at the four axes are shown in Fig.
5(c), where the P-I method exhibits a sharp fluctuation and fails to keep the control velocity within the desired range at the y axis (see the y axis in TABLE II). The velocity at the y axis of the P-I method reaches a dramatic value of -5.0713 m/s, largely exceeding the vehicle's preset range of -2 m/s to 2 m/s. At the same time, the GFTC successfully limits the kinematic outputs within the constraints and presents a smooth curve with much smaller fluctuations. In addition, when the perturbations are considered in the GFTC-P simulation (in pink), though chattering appears, the errors are constrained within an acceptable range and the kinematic outputs span a smaller range than the P-I method at most axes, with maxima of 1.2410 m/s and 1.2188 m/s at the x and y axes. These results indicate the effectiveness and robustness of the proposed GOA-based FTC.

#### IV-B2 Double-fault Case

In this section, the effects of the GFTC, the P-I approximation-based FTC and the GFTC considering environmental perturbations are compared, supposing two thrusters (\(T_{1}\) and \(T_{8}\)) of the propulsion system encounter a power loss of 100%. The GFTC method (in red dash) successfully eliminates the initial errors at the y axis and retains the tracking trajectory as desired until the end (Fig. 6(a)). The GFTC-P (in pink) results under the effect of perturbations sustain a tracking trajectory similar to the unaffected GFTC trajectory, and at the second turning section it produces smoother tracking curves than at the first, which verifies its robustness for tracking the desired polygonal line even under the double-fault case. At the same time, the P-I approximation-based FTC fails to track the desired trajectory and even presents a much larger deviation than in its single-fault case (Fig. 5(a)).
This demonstrates that the GFTC method is capable of balancing out the tracking errors regardless of the degree of power loss in the thruster system, thus proving the robustness of the proposed FTC. As in the single-fault case, the error curve of the GFTC method under the double-fault case eliminates the initial deviation and quickly converges to zero (Fig. 6(b)). However, the P-I method presents severe error oscillations in the x, y and \(\psi\) axes compared to the single-fault case, performing even worse than the double-fault case without FTC. This shows that the P-I approximation-based FTC is heavily affected by the damage degree of the UV propulsion system and cannot compensate for the error produced by the excessive power loss of the thrusters, e.g. in the double-fault case. This conclusion is also supported by the velocity variations in Fig. 6(c), where the P-I method cannot be limited within the allowable range due to the thruster power loss, with excessive maximum velocities reaching 17.5815 m/s for the x axis, 25.9814 m/s for the y axis and 1.4485 m/s for the \(\psi\) axis, as given in TABLE III, resulting in the complete tracking failure shown in Fig. 6(a). The velocities of the GFTC method stay within the allowable domain throughout the whole process, regardless of the change in fault cases. The GFTC-P simulations, which involve perturbations of a practical underwater environment, also successfully restrict the errors and the control velocities to the intended range, with maxima of 1.2473 m/s and 0.9646 m/s at the x and y axes. Therefore, the effectiveness of the proposed GFTC method in tracking a desired polygonal line is verified under both single-fault and double-fault cases.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(u_{c}\) (m/s) & \(v_{c}\) (m/s) & \(w_{c}\) (m/s) & \(r_{c}\) (m/s) \\ \hline GFTC & 1.1738 & 0.8212 & 1.0012 & 0.0782 \\ \hline P-I & 1.7278 & -5.0713 & 1.2289 & 0.2 \\ \hline GFTC-P & 1.2410 & 1.2188 & 1.1192 & 0.1035 \\ \hline \end{tabular} \end{table} TABLE II: Maximum velocities under the single-fault case when tracking the polygonal line

Fig. 5: Polygonal line tracking results using the GFTC with or without perturbations and the P-I approximation-based FTC under the single-fault case. (a) Comparison of trajectories, (b) Comparison of tracking errors, (c) Comparison of control velocities.

## V Conclusion

In this paper, the fault-tolerant trajectory tracking problem for the "Yulong" UV is resolved by a Grasshopper Optimization and backstepping & SMC-based cascade control (GFTC). The GFTC strategy applies a refined backstepping algorithm to restrict the kinematic outputs, and the Grasshopper Optimization Algorithm (GOA) is used to achieve optimized thruster force reallocation within the allowable domain. When encountering fault cases in tracking the polygonal line or helix, the trajectory tracking errors of the GFTC are largely alleviated and the actuator saturation problem is eliminated, compared to traditional FTCs such as the weighted pseudo-inverse matrix approximation-based methods. In addition, the robustness of the proposed FTC is verified when environmental perturbations are involved, which lays the groundwork for the experimental study of practical applications to be extended in future work.

## Appendix A Proof of the Error Restricted Backstepping Control Stability

According to the Lyapunov stability theory, a special Lyapunov function \(\Gamma_{0}\) is chosen,

\[\Gamma_{0}=\frac{1}{2}(e_{x}^{2}+e_{y}^{2}+e_{z}^{2}+e_{\psi}^{2})\,. \tag{26}\]

By Eqs. (1) and (11), the derivative of Eq.
(26) can be obtained to prove the stability of the backstepping system,

\[\dot{\Gamma}_{0}=e_{x}\,\dot{e}_{x}+e_{y}\,\dot{e}_{y}+e_{z}\,\dot{e}_{z}+e_{\psi}\,\dot{e}_{\psi} \tag{27}\] \[=e_{x}\,(\dot{x}_{d}-\dot{x})+e_{y}\,(\dot{y}_{d}-\dot{y})+e_{z}(\dot{z}_{d}-\dot{z})+e_{\psi}\,(\dot{\psi}_{d}-\dot{\psi})\] \[=e_{x}\,[(\cos\psi_{d}u_{d}-\sin\psi_{d}v_{d})-(\cos\psi u_{c}-\sin\psi v_{c})]+e_{y}\,[(\sin\psi_{d}u_{d}+\cos\psi_{d}v_{d})-(\sin\psi u_{c}+\cos\psi v_{c})]+e_{z}(w_{d}-w_{c})+e_{\psi}\,(r_{d}-r_{c})\] \[=e_{x}\,[(\cos\psi_{d}u_{d}-\sin\psi_{d}v_{d})-(kv_{ex}+u_{d}(\cos\psi\cos v_{e\psi}-\sin\psi\sin v_{e\psi})+v_{d}(\sin\psi\cos v_{e\psi}-\cos\psi\sin v_{e\psi}))]+e_{y}\,[(\sin\psi_{d}u_{d}+\cos\psi_{d}v_{d})-(kv_{ey}+u_{d}(\sin\psi\cos v_{e\psi}+\cos\psi\sin v_{e\psi})-v_{d}(\sin\psi\sin v_{e\psi}-\cos\psi\cos v_{e\psi}))]+e_{z}(-k_{z}v_{ez})+e_{\psi}\,(-k_{\psi}v_{e\psi})\] \[\leq-ke_{x}v_{ex}-ke_{y}v_{ey}-k_{z}e_{z}v_{ez}-k_{\psi}e_{\psi}v_{e\psi}\,.\]

By the definition of \(\mathbf{v_{e}}\) (see Eq. (10)), \(\mathbf{v_{e}}\) and \(\mathbf{e}(t)\) are of the same sign, and \(k\), \(k_{z}\), \(k_{\psi}\) are positive constants. The result of Eq. (27) is therefore less than or equal to zero, which demonstrates the stability of the designed refined backstepping controller.

## Appendix B Proof of the SMC Stability

To prove the stability of the SMC, construct a Lyapunov function,

\[\mathbf{V}=\frac{1}{4\lambda}\mathbf{s}^{T}\mathbf{M}\mathbf{s}+\frac{1}{2}\mathbf{Q}^{T}\Gamma^{-1}\mathbf{Q}\,, \tag{28}\]

where \(\mathbf{Q}=\widetilde{\mathbf{\tau}}_{r}-\widetilde{\mathbf{\tau}}_{est}\) and \(\widetilde{\mathbf{\tau}}_{r}=\widetilde{\mathbf{M}}\dot{\mathbf{v}}_{r}+\widetilde{\mathbf{C}}\mathbf{v}_{r}+\widetilde{\mathbf{D}}\mathbf{v}+\widetilde{\mathbf{g}}\).
Previously we have given \(\mathbf{e}_{\mathbf{v}}=\mathbf{v}_{\mathbf{e}}-\mathbf{v}\) and \(\mathbf{s}=\dot{\mathbf{e}}_{\mathbf{v}}+2\lambda\mathbf{e}_{\mathbf{v}}+\lambda^{2}\int\mathbf{e}_{\mathbf{v}}dt\), such that two equations can be deduced:

\[\mathbf{v}=\mathbf{v}_{\mathbf{e}}-\frac{\mathbf{s}-\dot{\mathbf{e}}_{\mathbf{v}}-\lambda^{2}\int\mathbf{e}_{\mathbf{v}}dt}{2\lambda}, \tag{29}\]

\[\dot{\mathbf{v}}=\dot{\mathbf{v}}_{\mathbf{e}}-\frac{\dot{\mathbf{s}}-\ddot{\mathbf{e}}_{\mathbf{v}}-\lambda^{2}\mathbf{e}_{\mathbf{v}}}{2\lambda}, \tag{30}\]

therefore the following items can be defined,

\[\mathbf{v}_{\mathbf{r}}=\mathbf{v}_{\mathbf{e}}+\frac{\dot{\mathbf{e}}_{\mathbf{v}}+\lambda^{2}\int\mathbf{e}_{\mathbf{v}}dt}{2\lambda}, \tag{31}\]

\[\dot{\mathbf{v}}_{\mathbf{r}}=\dot{\mathbf{v}}_{\mathbf{e}}+\frac{\ddot{\mathbf{e}}_{\mathbf{v}}+\lambda^{2}\mathbf{e}_{\mathbf{v}}}{2\lambda}. \tag{32}\]

Fig. 6: Polygonal line tracking results using the GFTC with or without perturbations and the P-I approximation-based FTC under the double-fault case. (a) Comparison of trajectories, (b) Comparison of tracking errors, (c) Comparison of control velocities.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(u_{c}\) (m/s) & \(v_{c}\) (m/s) & \(w_{c}\) (m/s) & \(r_{c}\) (m/s) \\ \hline GFTC & 1.1710 & 0.8158 & 0.9989 & 0.0782 \\ \hline P-I & 17.5815 & 25.9814 & 1.2289 & 1.4485 \\ \hline GFTC-P & 1.2473 & 0.9646 & 1.0367 & -0.0885 \\ \hline \end{tabular} \end{table} TABLE III: Maximum velocities under the double-fault case when tracking the polygonal line

By substituting into Eq.
(2),

\[\mathbf{M}\frac{\dot{\mathbf{s}}}{2\lambda}+\mathbf{C}\frac{\mathbf{s}}{2\lambda}=\mathbf{M}(\dot{\mathbf{v}}_{\mathbf{e}}+\frac{\ddot{\mathbf{e}}_{\mathbf{v}}+\lambda^{2}\mathbf{e}_{\mathbf{v}}}{2\lambda})+\mathbf{C}(\mathbf{v}_{\mathbf{e}}+\frac{\dot{\mathbf{e}}_{\mathbf{v}}+\lambda^{2}\int\mathbf{e}_{\mathbf{v}}dt}{2\lambda})+\mathbf{D}\mathbf{v}+g-\mathbf{\tau}=\mathbf{M}\dot{\mathbf{v}}_{\mathbf{r}}+\mathbf{C}\mathbf{v}_{\mathbf{r}}+\mathbf{D}\mathbf{v}+g-\mathbf{\tau}. \tag{33}\]

Based on the previous definitions, the derivative of Eq. (28) can be simplified as,

\[\dot{\mathbf{V}}=\frac{1}{4\lambda}(\mathbf{s}^{T}\dot{\mathbf{M}}\mathbf{s}+\dot{\mathbf{s}}^{T}\mathbf{M}\mathbf{s}+\mathbf{s}^{T}\mathbf{M}\dot{\mathbf{s}})+\frac{1}{2}\dot{\mathbf{Q}}^{T}\Gamma^{-1}\mathbf{Q}+\frac{1}{2}\mathbf{Q}^{T}\Gamma^{-1}\dot{\mathbf{Q}}\] \[=\frac{1}{2\lambda}\mathbf{s}^{T}(\mathbf{M}\dot{\mathbf{s}}+\mathbf{C}\mathbf{s})+\frac{1}{2}\dot{\mathbf{Q}}^{T}\Gamma^{-1}\mathbf{Q}+\frac{1}{2}\mathbf{Q}^{T}\Gamma^{-1}\dot{\mathbf{Q}}\] \[=\mathbf{s}^{T}(\mathbf{M}\dot{\mathbf{v}}_{\mathbf{r}}+\mathbf{C}\mathbf{v}_{\mathbf{r}}+\mathbf{D}\mathbf{v}+g-\mathbf{\tau})+\dot{\mathbf{Q}}^{T}\Gamma^{-1}\mathbf{Q}. \tag{34}\]

By substituting Eq. (19),

\[\dot{\mathbf{V}}=\mathbf{s}^{T}(\mathbf{M}\dot{\mathbf{v}}_{\mathbf{r}}+\mathbf{C}\mathbf{v}_{\mathbf{r}}+\mathbf{D}\mathbf{v}+g-\mathbf{\tau})+(\dot{\widetilde{\mathbf{\tau}}}_{r}-\dot{\widetilde{\mathbf{\tau}}}_{est})^{T}\Gamma^{-1}\mathbf{Q}\] \[=-\mathbf{s}^{T}(\mathbf{K}_{1}\mathbf{s}+\mathbf{K}_{2}|\mathbf{s}|^{r}\mathrm{sign}(\mathbf{s}))+(\dot{\widetilde{\mathbf{\tau}}}_{r})^{T}\Gamma^{-1}\mathbf{Q}.
\tag{35}\]

The dynamic item \(\widetilde{\mathbf{\tau}}_{r}\) is bounded due to the slow velocity of the underwater vehicle, and \(\mathbf{s}^{T}(\mathbf{K}_{1}\mathbf{s}+\mathbf{K}_{2}|\mathbf{s}|^{r}\mathrm{sign}(\mathbf{s}))\geq(\dot{\widetilde{\mathbf{\tau}}}_{r})^{T}\Gamma^{-1}\mathbf{Q}\). When \(\mathbf{K}_{1}\), \(\mathbf{K}_{2}\) and \(\Gamma\) are assigned large enough values at the design step, \(\dot{\mathbf{V}}\leq 0\) is achieved and \(\mathbf{V}\) is ensured to be bounded, leading to the conclusion that \(\mathbf{Q}\) is bounded. Next, design a new Lyapunov function as

\[\mathbf{V}_{2}=\frac{1}{4\lambda}\mathbf{s}^{T}\mathbf{M}\mathbf{s}, \tag{36}\]

whose derivative can be derived as,

\[\dot{\mathbf{V}}_{2}=\mathbf{s}^{T}(\mathbf{Q}-\mathbf{K}_{1}\mathbf{s}-\mathbf{K}_{2}|\mathbf{s}|^{r}\mathrm{sign}(\mathbf{s})), \tag{37}\]

where \(0<r<1\). Suppose \(||\mathbf{Q}||^{2}<a\); then

\[\dot{\mathbf{V}}_{2}\leq\frac{1}{2}||\mathbf{s}||^{2}+\frac{1}{2}a-\lambda_{min}(\mathbf{K}_{1})||\mathbf{s}||^{2}-\lambda_{min}(\mathbf{K}_{2})||\mathbf{s}||^{1+r}. \tag{38}\]

Choosing \(\mathbf{K}_{1}\) such that \(\lambda_{min}(\mathbf{K}_{1})>\frac{1}{2}+\beta\) for some \(\beta>0\) gives

\[\dot{\mathbf{V}}_{2}\leq-\beta||\mathbf{s}||^{2}-\lambda_{min}(\mathbf{K}_{2})||\mathbf{s}||^{1+r}+\frac{1}{2}a, \tag{39}\]

which implies that the Lyapunov function, and hence \(\mathbf{s}\), converges to a neighborhood of zero in finite time. Therefore the conditions of the Lyapunov theorem are satisfied, proving the stability of the designed SMC.
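The passage from Eq. (37) to Eq. (38) relies on the elementary bound \(\mathbf{s}^{T}\mathbf{Q}\leq\frac{1}{2}\|\mathbf{s}\|^{2}+\frac{1}{2}\|\mathbf{Q}\|^{2}\) (Cauchy-Schwarz followed by the AM-GM inequality). A quick numeric sanity check of this bound (illustrative only, not part of the controller):

```python
import random

def young_bound_holds(s, q):
    """Verify s.q <= ||s||^2/2 + ||q||^2/2, the bound used between
    Eq. (37) and Eq. (38) (Cauchy-Schwarz followed by AM-GM)."""
    dot = sum(a * b for a, b in zip(s, q))
    return dot <= 0.5 * sum(a * a for a in s) + 0.5 * sum(b * b for b in q) + 1e-12

rng = random.Random(42)
for _ in range(1000):
    s = [rng.uniform(-5.0, 5.0) for _ in range(4)]
    q = [rng.uniform(-5.0, 5.0) for _ in range(4)]
    assert young_bound_holds(s, q)
```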
2308.02825
Burning a binary tree and its generalization
Graph burning is a graph process that models the spread of social contagion. Initially, all the vertices of a graph $G$ are unburnt. At each step, an unburnt vertex is put on fire and the fire from burnt vertices of the previous step spreads to their adjacent unburnt vertices. This process continues till all the vertices are burnt. The burning number $b(G)$ of the graph $G$ is the minimum number of steps required to burn all the vertices in the graph. The burning number conjecture by Bonato et al. states that for a connected graph $G$ of order $n$, its burning number $b(G) \leq \lceil \sqrt{n} \rceil$. It is easy to observe that in order to burn a graph it is enough to burn its spanning tree. Hence it suffices to prove that for any tree $T$ of order $n$, its burning number $b(T) \leq \lceil \sqrt{n} \rceil$ where $T$ is the spanning tree of $G$. It was proved in 2018 that $b(T) \leq \lceil \sqrt{n + n_2 + 1/4} +1/2 \rceil$ for a tree $T$ where $n_2$ is the number of degree $2$ vertices in $T$. In this paper, we provide an algorithm to burn a tree and we improve the existing bound using this algorithm. We prove that $b(T)\leq \lceil \sqrt{n + n_2 + 8}\rceil -1$ which is an improved bound for $n\geq 50$. We also provide an algorithm to burn some subclasses of the binary tree and prove the burning number conjecture for the same.
Sandip Das, Sk Samim Islam, Ritam M Mitra, Sanchita Paul
2023-08-05T09:05:10Z
http://arxiv.org/abs/2308.02825v2
# Burning a binary tree and its generalization

###### Abstract

Graph burning is a graph process that models the spread of social contagion. Initially, all the vertices of a graph \(G\) are unburnt. At each step, an unburnt vertex is put on fire and the fire from burnt vertices of the previous step spreads to their adjacent unburnt vertices. This process continues till all the vertices are burnt. The burning number \(b(G)\) of the graph \(G\) is the minimum number of steps required to burn all the vertices in the graph. The burning number conjecture by Bonato et al. states that for a connected graph \(G\) of order \(n\), its burning number \(b(G)\leq\lceil\sqrt{n}\rceil\). It is easy to observe that in order to burn a graph it is enough to burn its spanning tree. Hence it suffices to prove that for any tree \(T\) of order \(n\), its burning number \(b(T)\leq\lceil\sqrt{n}\rceil\), where \(T\) is a spanning tree of \(G\). It was proved in 2018 that \(b(T)\leq\lceil\sqrt{n+n_{2}+1/4}+1/2\rceil\) for a tree \(T\), where \(n_{2}\) is the number of degree 2 vertices in \(T\). In this article, we give an algorithm to burn a tree and improve the existing bound using this algorithm. We prove that \(b(T)\leq\lceil\sqrt{n+n_{2}+8}\rceil-1\), which is an improved bound for \(n\geq 50\). Also, under certain restrictions on the number of degree 2 vertices, we improve upon the result of Bonato et al. (2021). We also provide an algorithm to burn a binary tree and prove the burning number conjecture for the same.

## 1 Introduction

Graph burning is a process that captures the spread of social contagion; it was introduced in [3]. Below, we describe the process of burning a simple graph \(G(V,E)\) where \(|V|=n\). Graph burning consists of discrete steps. After each step, each vertex is either burnt or unburnt; once a vertex is burnt, it remains burnt till the end. Initially, all the vertices are unburnt. In the first step we burn a vertex.
At each subsequent step, first, a new unburnt vertex is burnt; second, the fire spreads from each vertex burnt in the previous step to its neighbouring vertices and burns the unburnt ones. The process ends when all the vertices are burnt. The _burning number_, denoted by \(b(G)\), is the minimum number of steps taken for this process to end. The burning problem asks, given a graph \(G\) and an integer \(k\geq 2\), whether \(b(G)\leq k\). An intuitive way to look at this process is to cover the vertices of the graph \(G\) by \(k\) balls of radii \(0,1,\ldots,k-1\), placed at appropriate vertices, such that \(k\) is minimized. A ball of radius \(r\) placed at a vertex \(v\) can cover the vertices that are at distance at most \(r\) from \(v\). For example, it is straightforward to see that \(b(K_{n})=2\). However, even for a relatively simple graph such as the path \(P_{n}\) on \(n\) nodes, computing the burning number is more complex; in fact, \(b(P_{n})=\lceil\sqrt{n}\rceil\), as [3] proved. Suppose that in the process of burning a graph \(G\), we eventually burn the whole graph \(G\) in \(k\) steps, and for each \(i\), \(1\leq i\leq k\), we denote the node where we set the fire in the \(i\)th step by \(x_{i}\). We call such a node simply a source of fire. The sequence \((x_{1},x_{2},\ldots,x_{k})\) is called a burning sequence for \(G\). A burning sequence of minimum length is referred to as _optimal_ [4]. A lot of studies have already been done on the burning problem over several graph classes, such as spider graphs [5] and caterpillar graphs [6], among many others. The current article proves bounds on the burning number of binary trees. First, we present our findings for the perfect binary tree and the complete binary tree, respectively. Next, we develop a method to construct a maximal full binary tree that can be burnt in \(k\) steps. Finally, we give an algorithm to burn a full binary tree, which proves the conjecture given by [4] for this class.
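The burning process just described is straightforward to simulate. The sketch below (the helper name `burns` is ours) checks whether a candidate sequence of fire sources burns a graph within \(k\) steps, using the equivalent ball-cover view: the \(i\)th source must cover everything within distance \(k-i\) of it:

```python
from collections import deque

def burns(adj, sequence):
    """Check whether igniting sequence[i] at step i+1 burns every vertex by
    step k = len(sequence): equivalently, every vertex must lie within
    distance k-i of the i-th source x_i (1-indexed)."""
    k = len(sequence)
    burnt = set()
    for i, x in enumerate(sequence):
        radius = k - i - 1                 # ball of radius k-i-1 around x_{i+1}
        frontier, dist = deque([(x, 0)]), {x: 0}
        while frontier:                    # BFS up to the allowed radius
            v, d = frontier.popleft()
            burnt.add(v)
            if d < radius:
                for w in adj[v]:
                    if w not in dist:
                        dist[w] = d + 1
                        frontier.append((w, d + 1))
    return burnt == set(adj)

# b(P_9) = ceil(sqrt(9)) = 3: balls of radius 2, 1, 0 centred at
# vertices 2, 6 and 8 cover the whole path 0-1-...-8.
path9 = {v: [u for u in (v - 1, v + 1) if 0 <= u <= 8] for v in range(9)}
assert burns(path9, [2, 6, 8])
assert not burns(path9, [2, 6])    # two steps cannot burn P_9
```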
We give yet another algorithm that improves upon the conjectured bound for a binary tree. Consequently, the bound also improves upon the result given by [1] for general graphs. Also, under a restricted number of degree 2 vertices, it is an improvement upon the result given by [2]. The notations in the article hold their regular meaning unless specified otherwise. Given a positive integer \(k\), the \(k\)th closed neighbourhood of \(v\) is defined to be the set \(\{u\in V(G):d(u,v)\leq k\}\) and is denoted by \(N_{k}[v]\). Suppose that \((x_{1},x_{2},\ldots,x_{k})\) is a burning sequence for a given graph \(G\). For \(1\leq i\leq k\), the fire started at \(x_{i}\) will, by the end of the \(k\)th step, burn exactly the nodes within distance \(k-i\) from \(x_{i}\). On the other hand, every node \(v\in V(G)\) must either be a source of fire or be burnt from at least one of the sources of fire by the end of the \(k\)th step. In other words, any node of \(G\) must be an element of \(N_{k-i}[x_{i}]\) for some \(1\leq i\leq k\).

## 2 Perfect Binary Tree

A perfect binary tree is a special type of binary tree in which all the leaf nodes are at the same depth and all non-leaf nodes have two children. In simple terms, this means that all leaf nodes are at the maximum depth of the tree, and the tree is completely filled with no gaps.

**Theorem 2.1**.: _Let \(T\) be a perfect binary tree of height \(h\); then \(b(T)=h+1\)._

Proof.: Let \(c\) be the root of \(T\) and \(L\) be the rightmost branch of \(T\). As the tree \(T\) is of height \(h\), there are exactly \(k=h+1\) vertices on \(L\). Starting with \(c\), we name the vertices on \(L\) as \(x_{i}\), where \(1\leq i\leq k\). Take the burning sequence \((x_{1},x_{2},\ldots,x_{k})\). It is easy to observe that we can partition the tree \(T\) into \(k\) rooted trees \(T_{1},T_{2},\ldots,T_{k}\) having their roots at \(x_{i},1\leq i\leq k\), with heights at most \(k-1,k-2,\ldots,0\), such that \(d(x_{i},x_{j})\geq|i-j|\).
Thus \(k\) steps are sufficient to burn \(T\), i.e. \(b(T)\leq k\). Now we show that \(b(T)\) attains exactly this upper bound, i.e. \(k\) steps are necessary. Note that there are exactly \(2^{h}\) leaf nodes in \(T\). Let \((x_{1},x_{2},\ldots,x_{k})\) be a burning sequence of \(T\) with \(b(T)=k\leq h\). Then each \(x_{i}\) can burn at most \(2^{k-i}\) leaves after the \(k\)th step. Therefore the maximum number of leaves burnt after the \(k\)th step is \(\sum_{i=1}^{k}2^{k-i}=2^{k}-1\leq 2^{h}-1\). Hence at least \(2^{h}-(2^{h}-1)=1\) leaf is left unburnt after the \(k\)th step. Therefore \(b(T)=h+1\).

## 3 Complete Binary Tree

A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes in the last level are as far left as possible. The last level can have between \(1\) and \(2^{h}\) nodes.

**Proposition 3.1**.: _Let \(T\) be a complete binary tree of height \(h\) which does not have exactly one leaf in its last level. Then \(b(T)=h+1\)._

Proof.: Note that, as the radius of \(T\) is \(h\), \(b(T)\leq h+1\). We now show that \(T\) cannot be burnt in \(h\) steps. One can verify that there are exactly \(2^{h}-1\) leaves and \(2^{h-1}\) internal vertices at the \((h-1)\)th level. On the contrary, let \((x_{1},x_{2},\ldots,x_{h})\) be a burning sequence of \(T\), i.e., \(b(T)=h\). Note that \(x_{1}\) cannot be the root of \(T\). Therefore it can burn at most \((2^{h-1}+2^{h-2})\) vertices (leaves and nodes of the \((h-1)\)th level). Note that \(x_{i}\) is neither the root nor in the \(i\)th level from the root. Therefore the maximum number of leaves and \((h-1)\)th-level nodes burnt after the \(k\)th step is \(2^{h-1}+2(2^{h-2}+\ldots+1)=2^{h-1}+2^{h}-2\).
Therefore at least \((2^{h}-1+2^{h-1})-(2^{h-1}+2^{h}-2)=1\) vertex (a leaf or a node of the \((h-1)\)th level) is left unburnt after the \(k=h\)th step. Therefore \(b(T)=h+1\).

**Theorem 3.2**.: _Let \(T\) be a complete binary tree of height \(h\). Then \(b(T)=h\)._

Proof.: First, we show that \(h\) steps are sufficient to burn \(T\), i.e. \(b(T)\leq h\). Two situations may arise here. Let \(c\) be the root of \(T\) and \(T_{1},T_{2}\) be the two subtrees of \(T\) rooted at the two neighbours of \(c\). Note that, as \(T\) is a complete binary tree, at least one of \(T_{1},T_{2}\) (say \(T_{1}\)) is of height \(h-1\) and the other (say \(T_{2}\)) is of height \(h-2\) or \(h-1\). Let \(c_{1},c_{2}\) be the roots of \(T_{1},T_{2}\) and \(u_{1},u_{2},\ldots,u_{h-1}\) be the vertices on the rightmost branch of \(T_{2}\) satisfying \(d_{T}(c,u_{i})=i\). Let \(x_{i}\) be the sibling node of \(u_{i}\) (they share the same parent \(u_{i-1}\)); then \(d_{T}(c,x_{i})=i\). Let \(v_{1}=c_{1}\), \(v_{i}=x_{i}\) for \(2\leq i\leq h-2\), and \(v_{h}=u_{h-1}\). Consider the burning sequence \((v_{1},v_{2},\ldots,v_{h-1},v_{h})\). Every subtree rooted at \(v_{i},1\leq i\leq h-1\), is of height \(h-i\) or \(h-i-1\), and \(v_{h}\) is the rightmost leaf node of \(T\). Therefore, after \(h\) steps \(T\) must be burnt. Now we show that \(T\) cannot be burnt in \(h-1\) steps. Let \(T^{\prime}\) be a perfect binary subtree of \(T\) of height \(h-1\). Assume on the contrary that we burn \(T\) in \(h-1\) steps. Then \(T^{\prime}\) should also be burnt in \(h-1\) steps. But from Theorem 2.1 it follows that \(b(T^{\prime})=h\), a contradiction. Hence the result follows.

## 4 Full Binary Tree

A full binary tree is a binary tree in which every node has either \(0\) or \(2\) children.

**Theorem 4.1**.: _Let \(T\) be a full binary tree of height \(h\) which is not perfect; then \(b(T)\leq h\)._

Proof.: Let \(c\) be the root of the tree \(T\).
Let \(x_{1},x_{2}\) be the two neighbours of \(c\) in \(T\). We assume that the result is true for all such trees of height less than \(h\). Now consider the subtrees \(T_{1},T_{2}\) of \(T\) having \(x_{1},x_{2}\) as their roots. Clearly \(T_{1},T_{2}\) are two disjoint trees of height less than \(h\). Hence, by induction, each can be burnt in at most \(h-1\) steps. Suppose \(b(T_{1})=k_{1}\) and \(b(T_{2})=k_{2}\). Clearly, \(k=\max\{k_{1},k_{2}\}\leq h-1\). Without loss of generality, assume \(b(T_{2})<b(T_{1})\). Consider the burning sequence \((y_{1},y_{2},\ldots,y_{k_{1}})\) of \(T_{1}\). Setting fire at \(x_{2}\) in step \(1\) burns \(T_{2}\cup\{x_{2}\}\) within \(h\) steps. It is then clear that \((x_{2},y_{1},y_{2},\ldots,y_{k_{1}})\) is a burning sequence of \(T\). Therefore \(b(T)\leq k_{1}+1\leq(h-1)+1=h\).

Figure 1: Complete Binary Tree

It is to be noted that if \(d\) is the diameter and \(h\) is the height of a full binary tree \(T\), then \(h\leq d\leq 2h\). For convenience, we denote a full binary tree which is not perfect by the notation **FBTNP**.

### Maximum Burning

**Theorem 4.2**.: _Given any positive integer \(k\), there exists a maximal FBTNP of order \(3(2^{k}-1)-2k\) that can be burnt in \(k\) steps._

Proof.: We will create a burning sequence \((v_{1},v_{2},\ldots,v_{k})\) in such a manner that \(N_{k-1}[v_{1}]\cup N_{k-2}[v_{2}]\cup\ldots\cup N_{0}[v_{k}]=V(T)\) for some FBTNP tree \(T\) with \(|V(T)|=3(2^{k}-1)-2k\). Maximality is ensured once we avoid all double-burning situations in \(T\), i.e., if \(T\) can be constructed so that \(N_{k-i}[v_{i}]\cap N_{k-j}[v_{j}]=\emptyset\) for all \(i\neq j\). First we construct a perfect binary tree of height \(k-1\), say \(T(v_{1})\), rooted at the vertex \(v_{1}\). \(v_{1}\) is burnt in the first step. Note that, by Theorem 2.1, \(v_{1}\) burns exactly \(T(v_{1})\) in \(k\) steps. \(v_{1}\)'s two neighbours are the children of \(v_{1}\) in \(T(v_{1})\).
Let the third neighbour of \(v_{1}\) be \(v^{\prime}_{1}\), where \(v^{\prime}_{1}\notin V(T(v_{1}))\). Next we create a perfect binary tree of height \(k-2\) rooted at \(v^{\prime}_{1}\), say \(T(v^{\prime}_{1})\), disjoint from \(T(v_{1})\). Note that \(T(v_{1})\) and \(T(v^{\prime}_{1})\) are two separate perfect binary trees rooted at \(v_{1}\) and \(v^{\prime}_{1}\) respectively, connected by the edge \(v_{1}v^{\prime}_{1}\). Let \(T_{1}=T(v_{1})\cup T(v^{\prime}_{1})\) (\(T_{1}\) is not a binary tree but a subtree of \(T\)). It is easy to verify that \(|N_{k-1}[v_{1}]|=|V(T_{1})|=|V(T(v_{1}))|+|V(T(v^{\prime}_{1}))|=(2^{k}-1)+(2^{k-1}-1)=2^{k}+2^{k-1}-2\). Refer to figure 2a. We construct the subtrees \(T_{2},T_{3},\ldots,T_{k}\) sequentially in the same way. Note that \(V(T_{k})=\{v_{k}\}\) and \(|V(T_{k})|=1\). Thus, \(T_{i}=N_{k-i}[v_{i}]\) and \(|T_{i}|=|N_{k-i}[v_{i}]|=2^{(k-i)+1}+2^{k-i}-2\). We create \(T=T_{1}\cup T_{2}\cup T_{3}\cup\ldots\cup T_{k}\), where \(T\) is a full binary tree whose root is one of the nodes of \(T(v^{\prime}_{1})\). Refer to figure 2b. Hence the total number of vertices of \(T\) is \(\sum_{i=1}^{k}(2^{(k-i)+1}+2^{k-i}-2)=(2^{k+1}-2)+(2^{k}-1)-2k=3(2^{k}-1)-2k\).

### The Algorithm to burn a FBTNP

Let \(P_{d+1}=(u_{0},u_{1},\ldots,u_{d})\) be the diameter path and \(h\) be the height of \(T\). We consider the following cases, depending on which we develop a method to burn the tree \(T\) in \(k\) steps. Let \(k\) be the burning number of \(T\) and the burning sequence be \((v_{1},v_{2},\ldots,v_{k})\). Let \(T(u_{i})\) be the subtree rooted at \(u_{i}\), \(0\leq i\leq d\), formed by the branches of \(T\) attached to the vertices \(u,u_{1},\ldots,u_{i}\), where \(u=u_{0}\) or \(u_{d}\) is an endpoint of \(P_{d+1}\). Let \(p_{i}\) be the neighbour of \(u_{i}\) with \(p_{i}\neq u_{i-1},u_{i+1}\) (i.e. the first vertex in the branch from \(u_{i}\)).
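As a quick sanity check on the count in Theorem 4.2, the disjoint pieces \(T_{i}=N_{k-i}[v_{i}]\) with \(|T_{i}|=2^{(k-i)+1}+2^{k-i}-2\) do sum to \(3(2^{k}-1)-2k\):

```python
def maximal_fbtnp_order(k):
    """Total order of the maximal FBTNP of Theorem 4.2: sum of the disjoint
    ball sizes |T_i| = 2^(k-i+1) + 2^(k-i) - 2 for i = 1..k."""
    return sum(2 ** (k - i + 1) + 2 ** (k - i) - 2 for i in range(1, k + 1))

# Matches the closed form 3(2^k - 1) - 2k; e.g. k = 2 gives 5 vertices.
for k in range(1, 20):
    assert maximal_fbtnp_order(k) == 3 * (2 ** k - 1) - 2 * k
```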
Figure 2: Maximal Burning

If \(k\geq h\), then from Theorem 4.1 it is trivial to observe that \(T\) will burn in \(h\) steps. Otherwise, when \(k<h\), we apply the algorithm starting from the endpoint of \(P_{d+1}\) that is at maximum distance from the root of \(T\). If both endpoints of \(P_{d+1}\) are at the same distance from the root, then we can start the algorithm from either end.

**Step I:** Consider the diameter path \(P_{d+1}\). The following cases may arise:-

**Case I:** a) Let either \(T(u_{k-2})\) or \(T(u_{d-k+1})\) or both have order \(\geq 2k-1\). Then the order of \(T(u_{k-2})\cup\{p_{k-1}\}\) (or \(T(u_{d-k+1})\cup\{p_{d-k}\}\)) is \(\geq 2k\). Note that, for this to happen, the branch from a vertex \(u_{m}\) has height \(\geq 2\), for \(1\leq m\leq k-2\) (similarly the branch from a vertex \(u_{n}\) has height \(\geq 2\), for \(d-k+1\leq n\leq d\)). We may place \(v_{1}\) at \(u_{k-1}\) (or \(u_{d-k}\)) to burn \(\geq 2k\) many vertices.

**Step I.a.:** Without loss of generality, we place \(v_{1}\) at \(u_{k-1}\) in order to burn the vertices in \(T(u_{k-2})\cup\{p_{k-1}\}\).

**Step I.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k-1}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 3(a).

b) Let both \(T(u_{k-1})\cup\{p_{k}\}\) and \(T(u_{d-k})\cup\{p_{d-k-1}\}\) be of order \(\geq 2k\), and let the branch attached to \(u_{k}\) (or \(u_{d-k-1}\)) be of maximum height \(k-2\). Then we may place \(v_{1}\) at \(u_{k-1}\) (or \(u_{d-k}\)) to burn \(\geq 2k\) many vertices.

**Step II.a.:** Without loss of generality, we place \(v_{1}\) at \(u_{k-1}\).

**Step II.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 3(b).
**Case II:** Let both \(T(u_{k-1})\cup\{p_{k}\}\) and \(T(u_{d-k})\cup\{p_{d-k-1}\}\) be of order \(\geq 2k\), and let the branch attached to \(u_{k}\) (or \(u_{d-k-1}\)) be of height \(k-1\) or \(k\). Let \(p_{m}\) be the last vertex in the branch from \(u_{k}\), where \(m=k\) or \(k-1\). We place \(v_{1}\) at \(p_{1}\) and \(v_{2}\) at \(u_{k-2}\) (or \(v_{1}\) at \(p_{1}\) and \(v_{2}\) at \(u_{d-k+1}\)). Therefore the number of vertices burnt after two steps is \(\geq 4k-4\).

Figure 3: Cases in Full Binary Tree

**Step III.a.:** Without loss of generality, we place \(v_{1},v_{2}\) at the vertices \(p_{1},u_{k-2}\) respectively.

**Step III.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 3(c).

**Step IV:** Repeat **Step I**.

**Observation 4.3**.: _Let \(P_{d+1}=(u_{0},u_{1},\ldots,u_{d})\) be the diameter path before the algorithm starts. If it starts burning the tree from \(u_{0}\) (similarly for \(u_{d}\)), then the subgraph \(T(u_{2})\cup\{u_{2}\}\) (similarly \(T(u_{d-2})\cup\{u_{d-2}\}\)) will remain connected and unburnt after the \((k-2)\)th step._

### The Conjecture for FBTNP

**Theorem 4.4**.: _Let \(T\) be a full binary tree of order \(n\), which is not perfect. Then \(b(T)\leq\lceil\sqrt{n}\rceil\)._

Proof.: Let \(P_{d+1}=(u_{0},u_{1},\ldots,u_{d})\) be the diameter path of \(T\). Follow the given algorithm to burn \(T\) in \(k\) steps. We will use induction on \(|V(T)|\) to prove \(k\leq\lceil\sqrt{n}\rceil\).

**Induction Hypothesis:** Suppose the statement is true for all FBTNP trees having order less than or equal to \(t^{2}\), i.e. \(|V(T)|\leq t^{2}\). Consider a tree \(T\) of size \(m\) such that \(t^{2}<m\leq(t+1)^{2}\).

**Case I:** a) The number of vertices burnt is \(\geq 2(t+1)=2t+2\). After burning, the size of the modified tree \(T^{\prime}\) is \(m-2t-2\).
As \(m\leq(t+1)^{2}\), \(|V(T^{\prime})|\leq(t+1)^{2}-2t-2=t^{2}-1\). Thus, \(T^{\prime}\) can be burnt in \(t\) steps by the Induction Hypothesis.

b) As in a), the number of vertices burnt is \(\geq 2t+2\), so the modified tree \(T^{\prime}\) has at most \(t^{2}-1\) vertices and can be burnt in \(t\) steps by the Induction Hypothesis.

**Case II:** The number of vertices burnt is \(\geq 4(t+1)-4=4t\). After burning, the size of the modified tree \(T^{\prime}\) is \(m-4t\). As \(m\leq(t+1)^{2}\), \(|V(T^{\prime})|\leq(t+1)^{2}-4t=(t-1)^{2}\). Thus, \(T^{\prime}\) can be burnt in \(t-1\) steps by the Induction Hypothesis.

Thus, by induction, \(T\), having \(|V(T)|\leq(t+1)^{2}\), can be burnt in \(t+1\) steps. Hence \(b(T)\leq\lceil\sqrt{n}\,\rceil\).

### Improved Bound algorithm to burn a FBTNP

Let \(P_{d+1}=(u_{0},u_{1},\ldots,u_{d})\) be the diameter path and \(h\) be the height of \(T\). We consider the following cases, depending on which we develop a method to burn the tree \(T\) in \(k\) steps. Let \(k\) be the burning number of \(T\) and the burning sequence be \((v_{1},v_{2},\ldots,v_{k})\). Consider the diameter path \(P_{d+1}\). The following cases may arise:-

**Case I:** **a)** Let \(T(u_{k-1})\) have order \(\geq 2k+1\). Then the order of \(T(u_{k-1})\cup\{p_{k}\}\) is \(\geq 2k+2\). Note that, for this to happen, the branch from a vertex \(u_{i}\) has height \(\geq 2\), for \(1\leq i\leq k-1\).

**Step I.a.:** Without loss of generality, we place \(v_{1}\) at \(u_{k-1}\) in order to burn the vertices in \(T(u_{k-1})\cup\{p_{k}\}\), burning \(\geq 2k+2\) vertices.

**Step I.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 4a.
**b)** Let \(T(u_{k-1})\cup\{p_{1}\}\) be of order \(\geq 2k+2\), and let the branch attached to \(u_{k}\) be of height \(k\). Then we may place \(v_{1}\) at \(p_{1}\) and \(v_{2}\) at \(u_{k-2}\) to burn \(\geq 4k\) many vertices.

**Step II.a.:** Without loss of generality, we place \(v_{1}\) at \(p_{1}\) and \(v_{2}\) at \(u_{k-2}\).

**Step II.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 4b.

**c)** Let \(T(u_{k-1})\cup\{p_{1}\}\) be of order \(\geq 2k+2\), and let the branch attached to \(u_{k}\) be of height \(k-1\).

**Step III.a.:** We place \(v_{1}\) at \(u_{k-1}\) to burn \(\geq 4k-5\) vertices. Note that we do not include \(u_{k},p_{k-2}\) among these burnt vertices. We place \(v_{k-1}\) at \(p_{k-2}\) to burn \(T(p_{k-2})\). Also the modified tree will be burnt by \(\{v_{2},\ldots,v_{k-2}\}\cup\{v_{k}\}\) using algorithm 4.2.

**Step III.b.:** We update the diameter path \(P_{d+1}\) of \(T\). While updating we include the vertex \(u_{k}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\).

**Case II:** **a)** Let \(T(u_{k})\) have order \(\geq 2k+1\). Then \(T(u_{k})\cup\{p_{k+1}\}\) has order \(\geq 2k+2\). Note that, for this to happen, the branch from a vertex \(u_{i}\) has height \(=1\), for \(1\leq i\leq k\).

**Step I.a.:** Without loss of generality, we place \(v_{1}\) at \(u_{k-1}\) in order to burn the vertices in \(T(u_{k})\cup\{p_{k+1}\}\), burning \(\geq 2k+2\) vertices.

**Step I.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k+1}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 5a.
Note that, if the height of the branch from \(u_{k+1}\) is \(\leq k-3\), then the placement of \(v_{1}\) ensures that no vertex remains unburnt and the tree stays connected. Else we may have the following cases.

**b)** When the height of the branch from \(u_{k+1}\) is \(k+1\), \(T(u_{k})\cup T(p_{1})\) has order \(\geq 4k+2\).

**Step II.a.:** We place \(v_{1}\) at \(p_{2}\) and \(v_{2}\) at \(u_{k-2}\) to burn \(\geq 4k+2\) vertices.

**Step II.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k+1}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 5b.

Note that, if any branch from \(p_{i}\) has height \(k-i-1\), then we replace the path \((u_{0},\ldots,u_{k})\) of the diameter path \(P_{d+1}\) with the path \((p_{1},\ldots,p_{k+1})\). Thus it falls under **Case I.a.** and **Case I.b.**

**c)** When the height of the branch from \(u_{k}\) is \(k\), \(T(u_{k})\cup T(p_{1})\) has order \(\geq 4k\).

**Step III.a.:** We place \(v_{1}\) at \(p_{1}\) and \(v_{2}\) at \(u_{k-2}\).

**Step III.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k+1}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 5c.

Figure 4: Case 1

Note that, if any branch from \(p_{i}\) has height \(k-i-1\), then our placement suffices.

**d)** When the height of the branch from \(u_{k+1}\) is \(k-1\), \(T(u_{k})\cup T(p_{1})\) has order \(\geq 4k-2\geq 2k+2\).

**Step IV.a.:** We place \(v_{1}\) at \(u_{k-1}\). The branch from \(u_{k+1}\) burns till \(p_{k-3}\). We burn \(T(p_{k-3})\) using \((v_{k-1},v_{k})\).

**Step IV.b.:** We update the diameter path \(P_{d+1}\) of \(T\).
Note that we include the vertex \(u_{k+1}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 5d.

Note that, if any branch from \(p_{i}\) has height \(k-i-1\), then we place \(v_{2}\) at \(p_{1}\) and \(v_{1}\) at \(u_{k-1}\). Thus it burns \(\geq 4k\) vertices.

**e)** When the height of the branch from \(u_{k+1}\) is \(k-2\), \(T(u_{k})\cup T(p_{1})\) has order \(\geq 4k-4\geq 2k+2\).

**Step V.a.:** We place \(v_{1}\) at \(u_{k-1}\). The branch from \(u_{k+1}\) burns till \(p_{k-3}\). We burn \(T(p_{k-3})\) using \(v_{k-1}\).

**Step V.b.:** We update the diameter path \(P_{d+1}\) of \(T\). Note that we include the vertex \(u_{k+1}\) in the updated diameter path \(P_{d^{\prime}}\) so that the modified tree \(T^{\prime}\) retains all properties of the initial tree \(T\). Refer to figure 5e.

Note that, if any branch from \(p_{i}\) has height \(k-i-1\), then we place \(v_{1}\) at \(u_{k}\) to burn \(T(p_{1})\) and burn \(T(u_{k})\) up to \(u_{1}\). Thus we burn the remaining \(3\) vertices, i.e. \(T(u_{1})\), by placing \(v_{k-1}\).

**Theorem 4.5**.: _Let \(T\) be a FBTNP of order \(n\). Then \(b(T)\leq\lceil\sqrt{n+9}\rceil-1\)._

Proof.: Let \(P_{d+1}=(u_{0},u_{1},\ldots,u_{d})\) be the diameter path of \(T\). Follow the given algorithm to burn \(T\) in \(k\) steps. We will use induction on \(|V(T)|\) to prove \(k\leq\lceil\sqrt{n+9}\rceil-1\).

**Induction Hypothesis:** Suppose all FBTNP trees of order less than or equal to \((t+1)^{2}-9\), i.e. with \(|V(T)|\leq(t+1)^{2}-9\), can be burnt in \(t\) steps. Consider a tree \(T\) of size \(m\) such that \((t+1)^{2}-9<m\leq(t+2)^{2}-9\).

Figure 5: Case 2

**Case I:** a) The number of vertices burnt in the first step is \(\geq 2t+4\). After burning, the size of the modified tree \(T^{\prime}\) is \(m-(2t+4)\). Since \(m\leq(t+2)^{2}-9\), \(|V(T^{\prime})|\leq(t+2)^{2}-9-(2t+4)=(t+1)^{2}-10\).
Therefore, \(T^{\prime}\) can be burnt in \(t\) steps by the Induction Hypothesis.

b) The number of vertices burnt in the first and second steps is \(\geq 4t+4\). Hence the size of the modified tree \(T^{\prime}\) is \(m-(4t+4)\leq(t+2)^{2}-9-4t-4=t^{2}-9\). Therefore, \(T^{\prime}\) can be burnt in \(t-1\) steps by the Induction Hypothesis.

c) The number of vertices burnt in the first step is \(\geq 4t-1\). Note that \(|V(T(p_{k-2}))|=3\). Hence, the size of the modified tree is \(|V(T^{\prime})|=m-(4t-1)-3\leq(t+2)^{2}-9-4t-2=t^{2}-7<t^{2}\). By Theorem 4.4, we can burn \(T^{\prime}\) in \(t\) steps. Let the corresponding burning sequence be \((v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t})\). It is easy to observe that \((v_{2},v_{3},\ldots,v_{t+1})\equiv(v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t})\). For a binary tree of size \(t^{2}\), if we follow algorithm 4.2, it is easy to observe that after the \((t-2)\)th step of burning, the modified tree (say \(T_{1}\)) will be of the form shown in Figure 6a, which contains at most \(5\) vertices; these are burnt by \(v^{\prime}_{t-1},v^{\prime}_{t}\) respectively. Thus a tree \(T\) of size \(t^{2}-5\) can definitely be burnt by \((v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t-2})\). Thus \(T^{\prime}\) of order \(t^{2}-7\) can also be burnt by \((v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t-2})\).

**Case II:** a) The number of vertices burnt in the first step is \(\geq 2t+4\). After burning, the size of the modified tree \(T^{\prime}\) is \(m-(2t+4)\). Since \(m\leq(t+2)^{2}-9\), \(|V(T^{\prime})|\leq(t+2)^{2}-9-(2t+4)=(t+1)^{2}-10\). Therefore, \(T^{\prime}\) can be burnt in \(t\) steps by the Induction Hypothesis.

b) The number of vertices burnt in the first and second steps is \(\geq 4t+6\). Hence the size of the modified tree \(T^{\prime}\) is \(m-(4t+6)\leq(t+2)^{2}-9-4t-6=t^{2}-11\leq t^{2}-9\). Therefore, \(T^{\prime}\) can be burnt in \(t-1\) steps by the Induction Hypothesis.
c) The number of vertices burnt in the first and second steps is \(\geq 4t+4\). Hence the size of the modified tree \(T^{\prime}\) is \(m-(4t+4)\leq(t+2)^{2}-9-4t-4=t^{2}-9\). Therefore, \(T^{\prime}\) can be burnt in \(t-1\) steps by the Induction Hypothesis.

d) The number of vertices burnt in the first step is \(\geq 4t-1\). Note that \(|V(T(p_{k-3}))|=5\). Hence, the size of the modified tree is \(|V(T^{\prime})|=m-(4t-1)-5\leq(t+2)^{2}-9-4t-4=t^{2}-9<t^{2}\). By Theorem 4.4, we can burn \(T^{\prime}\) in \(t\) steps. Let the corresponding burning sequence be \((v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t})\). It is easy to observe that \((v_{2},v_{3},\ldots,v_{t+1})\equiv(v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t})\). For a binary tree of size \(t^{2}\), if we follow algorithm 4.2, it is easy to observe that after the \((t-2)\)th step of burning, the modified tree (say \(T_{1}\)) will be of the form shown in Figure 6a, which contains at most \(5\) vertices; these are burnt by \(v^{\prime}_{t-1},v^{\prime}_{t}\) respectively. Thus a tree \(T\) of size \(t^{2}-5\) can definitely be burnt by \((v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t-2})\). Thus \(T^{\prime}\) of order \(t^{2}-9\) can also be burnt by \((v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{t-2})\). Note that, when \(v_{1}\) and \(v_{2}\) are both placed, \(4t+4\) vertices are burnt and thus the proof follows from **Case II.b.**

e) The number of vertices burnt in the first step is \(\geq 4t-1\). Note that \(|V(T(p_{k-3}))|=3\).

Figure 6: Unburnt Subtrees

Hence, the size of the modified tree is \(|V(T^{\prime})|=m-(4t-1)-3\leq(t+2)^{2}-9-4t-2=t^{2}-7<t^{2}\). By Theorem 4.4, we can burn \(T^{\prime}\) in \(t\) steps. Let the corresponding burning sequence be \((v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{t}^{\prime})\). It is easy to observe that \((v_{2},v_{3},\ldots,v_{t+1})\equiv(v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{t}^{\prime})\).
For a binary tree of size \(t^{2}\), if we follow algorithm 4.2, it is easy to observe that after the \((t-2)\)th step of burning, the modified tree (say \(T_{1}\)) will be of the form shown in Figure 6b, which contains at most \(3\) vertices; these are burnt by \(v_{t-1}^{\prime}\). Thus a tree \(T\) of size \(t^{2}-3\) can definitely be burnt by \((v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{t-2}^{\prime})\cup\{v_{t}^{\prime}\}\). Thus \(T^{\prime}\) of order \(t^{2}-7\) can also be burnt by \((v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{t-2}^{\prime})\cup\{v_{t}^{\prime}\}\).

## 5 General Tree

We now extend our results to more general trees. A _k-ary tree_ is a rooted tree in which the degree of each node \(v_{i}\neq c\) satisfies \(3\leq d(v_{i})\leq k\), where \(c\) is the root of the tree. Note that a _k-ary tree_ is a general tree with exactly \(1\) node of degree \(2\).

**Theorem 5.1**.: _Let \(T\) be a tree of order \(n\) and \(n_{2}\) be the number of degree-\(2\) vertices. Then \(b(T)\leq\lceil\sqrt{n+n_{2}+8}\rceil-1\)._

Proof.: Let a general tree \(T\) of order \(n\) have \(n_{2}\) nodes of degree \(2\). We add a pendant vertex to each of \(n_{2}-1\) such nodes, leaving out one node as the root, thus transforming \(T\) into a _k-ary tree_ \(T^{\prime}\) of order \(n+n_{2}-1\). By Theorem 4.5 we can burn \(T^{\prime}\) in \(\lceil\sqrt{(n+n_{2}-1)+9}\rceil-1=\lceil\sqrt{n+n_{2}+8}\rceil-1\) steps.

The burning number of a connected graph \(G\) equals the burning number of a spanning tree of \(G\); see [4]. Hence, we derive the following results for a connected graph. Since \(n_{2}\leq n-2\), a tree of order \(n\) can be burnt in \(\lceil\sqrt{2n+6}\rceil-1\) steps, which is an improvement over [1]. Also, when \(n_{2}\leq n/3\), then \(b(T)\leq\lceil\sqrt{4n/3+8}\rceil-1\), which is an improvement over [2] under the given condition.

## 6 Conclusion

To sum up, the burning number conjecture has been proved for binary trees using an algorithmic approach.
For a FBTNP, an improved algorithm has been given, which in turn improves the bound of the original conjecture. We have also improved upon the bounds given by [1] for general graphs. After restricting the number of degree-\(2\) vertices in a graph, our bound is the best-known result under the given condition.
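The bound \(b(T)\leq\lceil\sqrt{n}\rceil\) can be sanity-checked on tiny instances by exhaustive search. The following Python sketch is illustrative only (it is not part of the paper; the 5-vertex adjacency list is a hypothetical FBTNP, and the search scales factorially, so it is feasible only for very small trees):

```python
from collections import deque
from itertools import permutations
import math

def bfs_dist(adj, s):
    """Graph distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def burning_number(adj):
    """Smallest k admitting a sequence (v_1, ..., v_k) with
    N_{k-1}[v_1] | ... | N_0[v_k] = V (exhaustive search)."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    V = set(adj)
    for k in range(1, len(adj) + 1):
        for seq in permutations(adj, k):
            covered = {u for i, v in enumerate(seq, 1) for u in V
                       if dist[v][u] <= k - i}
            if covered == V:
                return k

# hypothetical 5-vertex FBTNP: root 1 with children 2, 3; vertex 2 has children 4, 5
adj = {1: {2, 3}, 2: {1, 4, 5}, 3: {1}, 4: {2}, 5: {2}}
b = burning_number(adj)
print(b, math.ceil(math.sqrt(len(adj))))   # 2 3  -> the bound b(T) <= ceil(sqrt(n)) holds
```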
2306.03729
Does the metric play a fundamental role in the building of gravitational models?
The idea that General Relativity could be an effective model of a yet unknown theory of gravity has gained momentum among theoretical physicists. The polynomial affine model of gravity is an alternative model of affine gravity that possesses many desirable features to pursue a quantum theory of gravitation. In this paper we argue that such features are a consequence of the lack of a metric structure in the building of the model, even though an emergent metric could be defined. The model introduces additional degrees of freedom associated to the geometric properties of the space, which might shed light on the nature of the dark sector of the Universe. When the model is coupled to a scalar field, it is possible to define inflationary scenarios.
Oscar Castillo-Felisola, Jose Perdiguero
2023-06-06T14:47:09Z
http://arxiv.org/abs/2306.03729v1
# Does the metric play a fundamental role in the building of gravitational models?

###### Abstract

The idea that General Relativity could be an effective model of a yet unknown theory of gravity has gained momentum among theoretical physicists. The _polynomial affine model of gravity_ is an alternative model of affine gravity that possesses many desirable features to pursue a quantum theory of gravitation. In this paper we argue that such features are a consequence of the lack of a metric structure in the building of the model, even though an _emergent_ metric could be defined. The model introduces additional degrees of freedom associated to the geometric properties of the space, which might shed light on the nature of the dark sector of the Universe. When the model is coupled to a scalar field, it is possible to define inflationary scenarios.

**Keywords:** Purely affine gravity, Cosmological model, Alternative gravitational models, Affine connection

## 1 Introduction

Nowadays the gravitational interaction is understood as the effect of geometric properties of the spacetime. The conceptual breakthrough from the Newtonian interpretation of gravitational force to this modern approach is due to Albert Einstein, who obtained the field equations of General Relativity in an attempt to make Newtonian gravity compatible with the ideas of Special Relativity. The Einstein field equations for gravity are expressed as \[\mathcal{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\mathcal{R}+\Lambda g_{\mu\nu}=8\pi G_{N}T_{\mu\nu}, \tag{1}\] where \(g_{\mu\nu}\) is the metric of the spacetime, \(G_{N}\) and \(\Lambda\) are Newton's gravitational constant and the cosmological constant respectively, \(T_{\mu\nu}\) is the energy-momentum tensor of the matter populating the Universe, and finally the \(\mathcal{R}\) quantities are curvatures (which are calculable for a given metric). David Hilbert obtained the field equations in Eq.
(1) through the optimisation of an action functional, dubbed the Einstein-Hilbert action, \[S_{\text{EH}}=\int\mathrm{d}^{4}x\,\sqrt{|g|}\left(\frac{1}{2\kappa}\left(g^{\mu\nu}\mathcal{R}_{\mu\nu}-2\Lambda\right)+\mathcal{L}_{m}\right), \tag{2}\] with \(\kappa=8\pi G_{N}\), where \(g^{\mu\nu}\) is the _inverse_ of the metric in the sense that \(g^{\mu\nu}g_{\nu\lambda}=\delta^{\mu}_{\lambda}\), the symbol \(g\) denotes the determinant of the metric tensor field, and finally \(\mathcal{L}_{m}\) refers to the Lagrangian of the matter content, which is related to the energy-momentum tensor by \[T_{\mu\nu}=-\frac{2}{\sqrt{|g|}}\frac{\partial}{\partial g^{\mu\nu}}\sqrt{|g|}\mathcal{L}_{m}. \tag{3}\] The above formulation of gravity relies on the premise that the spacetime is modelled by a pseudo-Riemannian manifold whose metric is Lorentzian with signature 2. Consequently, the curvatures involved in Eqs. (1) and (2) are obtained from the Riemannian curvature defined by the Levi-Civita connection, which is completely determined by the metric (the explicit formulas can be found in every textbook on General Relativity). Despite its success in explaining a large number of observations [1, 2, 3, 4], from a theoretical perspective two of the major unappealing points of General Relativity are that it fails to be consistently quantised and that, in order to match the cosmological observations, it requires a large amount of matter/energy (about 96% of the total matter/energy content of the Universe) whose nature is almost completely unknown, grouped under the denomination of _dark sector_. These issues serve as motivation to explore alternative models of gravity.
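As a concrete illustration of Eq. (1), the following sympy sketch (ours, not from the paper) verifies that a de Sitter metric in flat slicing, \(g=\mathrm{diag}(-1,e^{2Ht},e^{2Ht},e^{2Ht})\), solves the vacuum field equations (\(T_{\mu\nu}=0\)) with \(\Lambda=3H^{2}\); the sign conventions below are one standard choice:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
H = sp.symbols('H', positive=True)
X = [t, x, y, z]
a = sp.exp(H * t)                      # de Sitter scale factor in flat slicing
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()
n = 4

# Christoffel symbols of the Levi-Civita connection of g
Gam = [[[sp.simplify(sum(ginv[i, l] * (sp.diff(g[l, j], X[k]) + sp.diff(g[l, k], X[j])
                                       - sp.diff(g[j, k], X[l])) / 2 for l in range(n)))
         for k in range(n)] for j in range(n)] for i in range(n)]

# Ricci tensor R_{jl} = R^i_{jil}
def ricci(j, l):
    e = sum(sp.diff(Gam[i][j][l], X[i]) - sp.diff(Gam[i][j][i], X[l])
            + sum(Gam[i][i][m] * Gam[m][j][l] - Gam[i][l][m] * Gam[m][j][i]
                  for m in range(n))
            for i in range(n))
    return sp.simplify(e)

Ric = sp.Matrix(n, n, ricci)
R = sp.simplify(sum(ginv[i, j] * Ric[i, j] for i in range(n) for j in range(n)))

# Eq. (1) in vacuum with Lambda = 3H^2: R_{mu nu} - (1/2) g R + Lambda g = 0
Lam = 3 * H**2
assert (Ric - sp.Rational(1, 2) * R * g + Lam * g).applyfunc(sp.simplify) == sp.zeros(n, n)
print(R)   # 12*H**2, i.e. R = 4*Lambda
```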
A customary way to propose generalisations of the Einstein-Hilbert formulation of gravity is to follow the assertion of the Lagrangian formulation of classical mechanics (classical field theory, and in particular effective theories): "All terms allowed by the symmetries of your system should be included in the action functional". This method has led us to consider models whose Lagrangians include higher powers of the curvature, such as Lanczos-Lovelock [5, 6, 7] or \(f(\mathcal{R})\)[8, 9] models, and other extensions [10, 11, 12]. There are other types of extensions, guided by the proposal by Palatini, in which the metric and the connection are considered as independent fields [13]. Formally speaking, the approach _a la Palatini_ is supported by the fact that on a manifold the affine and metric structures are not related in general. We could use an analogy with classical plane geometry, where the compass and ruler are the tools defining the geometry. Continuing with the analogy, the compass yields the notion of distance (i.e. it is the equivalent of the metric in differential geometry), while the ruler is used to define parallelism (i.e. it is analogous to the affine connection in differential geometry). Despite the fact that we could use a graduated ruler to build a geometry with a single instrument (this would be the analogue of Riemannian geometry), it is possible to relax the conditions and consider different pairs of instruments, including the possibility of using only one of them (such are the cases of _inversive_ and _projective_ geometry). A manifold endowed with an affine connection, \((\mathcal{M},\hat{\Gamma})\), is called an _affinely connected manifold_. In addition, if a metric structure is allowed on the manifold, \((\mathcal{M},\hat{\Gamma},g)\), it is said to be metric-affine.
Any pair \((\hat{\Gamma},g)\) is characterised by the curvature of the connection, \(\mathcal{R}\); by whether the connection possesses a skew-symmetric part (also known as the torsion tensor), \(\mathcal{T}\); and by the failure of metric compatibility, \(\nabla^{\hat{\Gamma}}g=\mathcal{Q}\). There are eight types of metric-affine manifolds, e.g. Weyl, Weyl-Cartan, Weitzenbock, Riemann-Cartan, Riemann, Minkowski and others (see Refs. [14, 15]). The commutative diagram in Eq. (4) illustrates the diversity of metric-affine manifolds, which can be obtained from the class of affinely connected ones. (4) In the literature, one can find a plethora of proposals in which the spacetime is modelled by one of these kinds of metric-affine manifolds. Just to mention a few of them: Weyl considered a spacetime with vectorial non-metricity [16, 17, 18, 19], Cartan considered a connection with nonvanishing torsion [20, 21, 22, 23], Einstein proposed a version of General Relativity with a flat connection with torsion (a Weitzenbock connection) [24, 25, 26], there are extensions of Lanczos-Lovelock gravity to Riemann-Cartan manifolds [27], and there are models with higher powers of curvature, torsion and/or non-metricity [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. Affine formulations of gravity, i.e. models built up on \((\mathcal{M},\hat{\Gamma})\),1 bring the gravitational interaction to the _same footing_ as gauge theories (since their fundamental field is a connection),2 and therefore are believed to play a major role in the development of a quantum theory of gravity. Moreover, the mismatch in the number of independent components (metric to connection) could generate novel gravitational degrees of freedom, which might provide _geometrical_ candidates of dark matter/energy.3 Hence, from a theoretical point of view, the analysis of affine models of gravity is very interesting.
Historically, the first affine models were built by Einstein, Eddington and Schrodinger [43, 44, 45, 46], but more recent models were considered by Kijowski [47, 48], Poplawski [49, 50, 51], Krasnov [52, 53, 54, 55], and ourselves [56, 57].

Footnote 2: The gauge symmetry is a key ingredient in the quantisation and renormalisation properties of the other fundamental interactions.

Footnote 3: In a \(D\)-dimensional space(time), the number of components of the affine connection is \(D^{3}\), which surpasses the number of components of a standard metric (\(\frac{D(D+1)}{2}\)) or even of _generalised_ metrics (\(D^{2}\)).

## 2 Formulation of Polynomial Affine Gravity

The polynomial affine model of gravity is built up with powers of the affine connection, while the role of the metric field is dismissed by keeping the action independent of it (similar to a Schwarz type topological theory [58]) [56, 57]. The action for this model is the most general functional built up with the _irreducible_ components of the affine connection and a volume form. Mathematically, a general affine connection \(\hat{\Gamma}_{\mu}{}^{\lambda}{}_{\nu}\) decomposes as follows,4

Footnote 4: Without invoking a metric, it is not possible to separate the Levi-Civita and non-metricity contributions from the symmetric connection.

\[\hat{\Gamma}_{\mu}{}^{\lambda}{}_{\nu}=\hat{\Gamma}_{(\mu}{}^{\lambda}{}_{\nu)}+\hat{\Gamma}_{[\mu}{}^{\lambda}{}_{\nu]}=\Gamma_{\mu}{}^{\lambda}{}_{\nu}+\mathcal{B}_{\mu}{}^{\lambda}{}_{\nu}+\mathcal{A}_{[\mu}\delta_{\nu]}^{\lambda}, \tag{5}\]

where \(\Gamma_{\mu}{}^{\lambda}{}_{\nu}\) denotes the symmetric part of the connection, while \(\mathcal{B}_{\mu}{}^{\lambda}{}_{\nu}\) and \(\mathcal{A}_{\mu}\) are associated with the traceless and trace components of the torsion.5 Since the symmetric connection is the sole _irreducible_ piece that is not a tensor, it enters into the action functional through the covariant derivative or Chern-Simons topological terms.
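The decomposition in Eq. (5) can be checked numerically. The numpy sketch below (ours, not from the paper; the antisymmetrisation convention \(\mathcal{A}_{[\mu}\delta^{\lambda}_{\nu]}=\tfrac{1}{2}(\mathcal{A}_{\mu}\delta^{\lambda}_{\nu}-\mathcal{A}_{\nu}\delta^{\lambda}_{\mu})\) is an assumption) splits a random connection into its symmetric part, the traceless torsion \(\mathcal{B}\), and the torsion trace \(\mathcal{A}\), and verifies the reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
G = rng.normal(size=(D, D, D))        # generic connection, axes ordered as (mu, lambda, nu)
I = np.eye(D)

S = 0.5 * (G + G.transpose(2, 1, 0))  # symmetric part: Gamma_mu^lambda_nu
T = 0.5 * (G - G.transpose(2, 1, 0))  # torsion: antisymmetric part in (mu, nu)

# torsion trace A_mu and its pure-trace contribution A_[mu delta^lambda_nu]
A = 2.0 / (D - 1) * np.einsum('mll->m', T)
trace_part = 0.5 * (np.einsum('m,ln->mln', A, I) - np.einsum('n,lm->mln', A, I))

B = T - trace_part                    # traceless part of the torsion

assert np.allclose(np.einsum('mll->m', B), 0.0)   # B_mu^lambda_lambda = 0
assert np.allclose(G, S + B + trace_part)         # Eq. (5) reconstructed
print("decomposition verified")
```

The factor \(2/(D-1)\) follows from tracing the pure-trace term with the chosen antisymmetrisation weight.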
Furthermore, we shall consider a _canonical_ volume form (i.e. an everywhere nonvanishing \(m\)-form on an \(m\)-dimensional affine manifold) defined by \(\mathrm{d}V^{\mu_{1}\cdots\mu_{m}}=\mathrm{d}x^{\mu_{1}}\wedge\cdots\wedge\mathrm{d}x^{\mu_{m}}\).

Footnote 5: The symmetric connection might be split further into irreducible components, but such separation is not necessary for the subsequent analysis.

Hence, with the aid of an _index structural analysis_,6 the most general actions have been built up in three and four dimensions (see Refs. [56, 57, 59, 60, 61]), up to topological and boundary terms.7 In the following we shall restrict ourselves to the four-dimensional case, whose action is

\[S_{(4)}=\int\mathrm{d}V^{\alpha\beta\gamma\delta}\,\bigg[B_{1}\mathcal{R}_{\mu\nu}{}^{\mu}{}_{\rho}\mathcal{B}_{\alpha}{}^{\nu}{}_{\beta}\mathcal{B}_{\gamma}{}^{\rho}{}_{\delta}+B_{2}\mathcal{R}_{\alpha\beta}{}^{\mu}{}_{\rho}\mathcal{B}_{\gamma}{}^{\nu}{}_{\delta}\mathcal{B}_{\mu}{}^{\rho}{}_{\nu}+B_{3}\mathcal{R}_{\mu\nu}{}^{\mu}{}_{\alpha}\mathcal{B}_{\beta}{}^{\nu}{}_{\gamma}\mathcal{A}_{\delta}+B_{4}\mathcal{R}_{\alpha\beta}{}^{\sigma}{}_{\rho}\mathcal{B}_{\gamma}{}^{\rho}{}_{\delta}\mathcal{A}_{\sigma}+B_{5}\mathcal{R}_{\alpha\beta}{}^{\rho}{}_{\rho}\mathcal{B}_{\gamma}{}^{\sigma}{}_{\delta}\mathcal{A}_{\sigma}+C_{1}\mathcal{R}_{\mu\alpha}{}^{\mu}{}_{\nu}\nabla_{\beta}\mathcal{B}_{\gamma}{}^{\nu}{}_{\delta}+C_{2}\mathcal{R}_{\alpha\beta}{}^{\rho}{}_{\rho}\nabla_{\sigma}\mathcal{B}_{\gamma}{}^{\sigma}{}_{\delta}+D_{1}\mathcal{B}_{\nu}{}^{\mu}{}_{\lambda}\mathcal{B}_{\mu}{}^{\nu}{}_{\alpha}\nabla_{\beta}\mathcal{B}_{\gamma}{}^{\lambda}{}_{\delta}+D_{2}\mathcal{B}_{\alpha}{}^{\mu}{}_{\beta}\mathcal{B}_{\mu}{}^{\lambda}{}_{\nu}\nabla_{\lambda}\mathcal{B}_{\gamma}{}^{\nu}{}_{\delta}+D_{3}\mathcal{B}_{\alpha}{}^{\mu}{}_{\nu}\mathcal{B}_{\beta}{}^{\lambda}{}_{\gamma}\nabla_{\lambda}\mathcal{B}_{\mu}{}^{\nu}{}_{\delta}+D_{4}\mathcal{B}_{\alpha}{}^{\lambda}{}_{\beta}\mathcal{B}_{\gamma}{}^{\sigma}{}_{\delta}\nabla_{\lambda}\mathcal{A}_{\sigma}+D_{5}\mathcal{B}_{\alpha}{}^{\lambda}{}_{\beta}\mathcal{A}_{\sigma}\nabla_{\lambda}\mathcal{B}_{\gamma}{}^{\sigma}{}_{\delta}+D_{6}\mathcal{B}_{\alpha}{}^{\lambda}{}_{\beta}\mathcal{A}_{\gamma}\nabla_{\lambda}\mathcal{A}_{\delta}+D_{7}\mathcal{B}_{\alpha}{}^{\lambda}{}_{\beta}\mathcal{A}_{\lambda}\nabla_{\gamma}\mathcal{A}_{\delta}+E_{1}\nabla_{\rho}\mathcal{B}_{\alpha}{}^{\rho}{}_{\beta}\nabla_{\sigma}\mathcal{B}_{\gamma}{}^{\sigma}{}_{\delta}+E_{2}\nabla_{\rho}\mathcal{B}_{\alpha}{}^{\rho}{}_{\beta}\nabla_{\gamma}\mathcal{A}_{\delta}+F_{1}\mathcal{B}_{\alpha}{}^{\mu}{}_{\beta}\mathcal{B}_{\gamma}{}^{\sigma}{}_{\delta}\mathcal{B}_{\mu}{}^{\lambda}{}_{\rho}\mathcal{B}_{\sigma}{}^{\rho}{}_{\lambda}+F_{2}\mathcal{B}_{\alpha}{}^{\mu}{}_{\beta}\mathcal{B}_{\gamma}{}^{\nu}{}_{\lambda}\mathcal{B}_{\delta}{}^{\lambda}{}_{\rho}\mathcal{B}_{\mu}{}^{\rho}{}_{\nu}+\cdots\bigg], \tag{6}\]

where the remaining terms have been omitted here (see the discussion on Sec. 3 of Ref. [57]).
In four dimensions, the effective field equations in the torsion-less sector are \[\nabla_{[\lambda}\mathcal{R}_{\mu]\nu}=0,\text{ or equivalently }\nabla_{\lambda}\mathcal{R}_{\mu\nu}{}^{\lambda}{}_{\rho}=0. \tag{7}\] The first expression in Eq. (7) is a Codazzi-like condition, and therefore the Ricci tensor is said to be a Codazzi tensor [77, 78]. The second expression in Eq. (7) is obtained from the former after applying the differential Bianchi identity, and the curvature is then said to be harmonic [79, 80, 81]. In addition, the equations in (7) are well-known generalisations of the Einstein field equations, which appear as part of the dynamics of the Stephenson-Kilmister-Yang model of gravity [82, 83, 84], a Yang-Mills formulation of gravity. Although the original formulation of the Stephenson-Kilmister-Yang model has structural incompatibilities with General Relativity, these are introduced by the field equation for the metric field, and hence from the point of view of polynomial affine gravity the problem is bypassed. The cosmological solutions to Eq. (7) are obtained by proposing an ansatz for the connection [59, 60, 85], and the solutions can be classified into three types: (R.i) solutions whose Ricci tensor vanishes; (R.ii) solutions whose Ricci tensor is nontrivial and parallel; and (R.iii) solutions whose Ricci tensor is nontrivial and non-parallel, but is a Codazzi tensor. Since the symmetric part of the Ricci tensor (when non-degenerate) can be interpreted as a metric tensor field [86],9 it is clear that the cosmological solutions of type (R.ii) are equivalent to Einstein manifolds, i.e. solutions of the Einstein field equations in vacuum, while the solutions of type (R.iii) extend the gravitational sector of General Relativity by allowing non-metricity. Footnote 9: It is worth mentioning that when the torsion is nonvanishing, there is another tensor (derived from the connection) which can play the role of a metric. 
We refer to such a geometrical object as the Poplawski metric tensor, in honour of the person who discovered its relevance [51]. The consequences of the existence of such an _emergent_ metric are vast. Mainly, it introduces a notion of distance into the model, bringing it closer to the physical world, since it allows a _global_ notion of distance between events.10 In addition, the metric structure provides a notion of norm of vectors. In particular, the norm of the vectors generating self-parallel curves distinguishes the orbits corresponding to massless or massive test particles. Footnote 10: In affinely connected spaces without a metric, it is possible to define distances along self-parallel curves; however, it is not possible to compare distances measured along different self-parallel curves (see for example Ref. [46]). This quality validates the use of the term _global_ in the previous sentence. Intriguingly, the signature of the emergent metrics is not fixed, permitting changes of signature (in cosmological scenarios, as a function of time). Even though these jumps in the signature require the degeneracy of the _metric_ on some hyper-surface of the manifold, it remains to be analysed whether such a degeneracy is spurious (due, for example, to the choice of coordinate system). However, this flexibility of the emergent metric plays a fundamental role if one would like to analyse the structure of the saddle points of the path integral of gravity, in the spirit of Witten's proposal [87]. 
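The equivalence between type (R.ii) solutions and Einstein manifolds, mentioned above, follows from a short argument; here is a minimal sketch, where \(\Lambda\neq 0\) is a constant fixing the normalisation of the emergent metric: \[g_{\mu\nu}\equiv\frac{1}{\Lambda}\,\mathcal{R}_{(\mu\nu)},\qquad\nabla_{\lambda}\mathcal{R}_{\mu\nu}=0\;\Rightarrow\;\nabla_{\lambda}g_{\mu\nu}=0.\] Since the connection is symmetric and parallelises \(g\), it coincides with the Levi-Civita connection of \(g\), and the defining relation itself reads \(\mathcal{R}_{\mu\nu}=\Lambda g_{\mu\nu}\), which is precisely the Einstein-manifold condition.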
## 3 Coupling a scalar field Furthermore, in four dimensions the polynomial affine model of gravity can be coupled to a scalar field \(\phi\)[57], through the geometrical object defined as \[\mathrm{g}^{\mu\nu}=(\alpha\nabla_{\lambda}\mathcal{B}_{\rho}{}^{\mu}{}_{\sigma }+\beta\mathcal{A}_{\lambda}\mathcal{B}_{\rho}{}^{\mu}{}_{\sigma})\,\mathrm{d}V ^{\nu\lambda\rho\sigma}+\gamma\mathcal{B}_{\kappa}{}^{\mu}{}_{\lambda} \mathcal{B}_{\rho}{}^{\nu}{}_{\sigma}\mathrm{d}V^{\kappa\lambda\rho\sigma}. \tag{8}\] Using the above expression we can define the kinetic term \[S_{\phi}=-\int\mathrm{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi, \tag{9}\] which contributes to the field equations of the model even in the torsion-free sector. Furthermore, we can introduce a non-minimal coupling by a re-scaling of the volume form,11 Footnote 11: This scaling is allowed since any non-degenerated 4-form is a valid volume form. In addition, the scaling by a function of the scalar field, \(\phi\), could be interpreted as analogous to the deformation of the metric that allows one to define non-linear sigma models, i.e. \(g_{\mu\nu}\to g_{\mu\nu}(\phi)\). \[\mathrm{d}V^{\mu\nu\lambda\rho}\mapsto\frac{\mathrm{d}V^{\mu\nu\lambda\rho}}{ \mathcal{V}(\phi)}. \tag{10}\] The function \(\mathcal{V}(\phi)\) could be related to a _self-interacting_ scalar field potential, turning this framework into a natural ground for inflationary scenarios [88, 89, 48]. The effective field equations in the torsion-free sector coupled with a scalar field are given by \[\nabla_{[\lambda}\mathcal{S}_{\mu]\nu}=0, \tag{11}\] where \(\mathcal{S}\) is a symmetric tensor defined by \[\mathcal{S}_{\mu\nu}=\frac{\mathcal{R}_{\mu\nu}-C\partial_{\mu}\phi\partial_{ \nu}\phi}{\mathcal{V}(\phi)}, \tag{12}\] where the constant \(C\) measures the ratio between the original coupling constants. 
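As a quick consistency check (not spelled out in the text), the scalar-field equations reduce to the pure-gravity ones, Eq. (7), when the scalar field is frozen: \[\phi=\text{const.}\quad\Rightarrow\quad\mathcal{S}_{\mu\nu}=\frac{\mathcal{R}_{\mu\nu}}{\mathcal{V}(\phi)}\propto\mathcal{R}_{\mu\nu}\quad\Rightarrow\quad\nabla_{[\lambda}\mathcal{S}_{\mu]\nu}=0\iff\nabla_{[\lambda}\mathcal{R}_{\mu]\nu}=0,\] since \(\mathcal{V}(\phi)\) is then a constant overall factor.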
Just as before, we can classify the solutions into three categories: (S.i) solutions whose \(\mathcal{S}\) tensor vanishes; (S.ii) solutions whose \(\mathcal{S}\) tensor is covariantly constant; and (S.iii) solutions whose \(\mathcal{S}\) tensor is a Codazzi tensor. As a remark, when the \(\mathcal{S}\) tensor is non-degenerate it endows the manifold with an emergent metric tensor; however, this metric depends on the scalar field \(\phi\), as in non-linear sigma models. Note that if the Ricci tensor degenerates in a _spatial_ component, such degeneracy would be inherited by the \(\mathcal{S}\) tensor. ## 4 Conclusion Summarising, polynomial affine gravity seems to be a viable model of gravitational interactions, even though its formulation does not require a metric structure. However, it is possible to induce a metric structure using the affine connection, allowing a _fair_ comparison with the cosmological observations. Moreover, the absence of a metric in the foundations of the model endows it with desirable features, such as its _rigidity_. We would like to emphasise that other alternative models of gravity do not have restrictions on the number of terms that can be added to the action, and the arguments used to justify why some terms should be ignored rest only on the criteria of the researchers, rather than being driven by the theory itself. Also, the dark sector of the Universe could be _naturally_ explained by extra degrees of freedom, which lie in the non-Riemannian part of the connection. The cosmological constant in polynomial affine gravity is introduced as an integration constant, changing the paradigm of its interpretation, similarly to what happens in unimodular gravity and other affine models (e.g. Einstein-Eddington-Schrodinger). In addition, the laxity of the emergent metric might be important, in the light of the proposal by Witten [87], in a future programme to quantise the model. 
Finally, given that there is a _natural_ way to couple scalar fields (and enrich them with a potential), it becomes interesting to investigate inflationary scenarios within the context of polynomial affine gravity. Two interesting cases to consider are the power-law potential--for its simplicity, and for its power structure akin to that of the gravitational sector--and the Starobinsky potential--given its success. Acknowledgments. The work of OCF is sponsored by the "Centro Cientifico y Tecnologico de Valparaiso" (CCTVal), funded by the Chilean Government through the Centers of Excellence Base Financing Program of Agencia Nacional de Investigacion y Desarrollo (ANID), by grant ANID PIA/APOYO AFB220004.
2302.09095
The Messy Nature of Fiber Spectra: Star-Quasar Pairs Masquerading as Dual Type 1 AGNs
Theoretical studies predict that the most significant growth of supermassive black holes occurs in late-stage mergers, coinciding with the manifestation of dual active galactic nuclei (AGNs), and both major and minor mergers are expected to be important for dual AGN growth. In fact, dual AGNs in minor mergers should be signposts for efficient minor merger-induced SMBH growth for both the more and less massive progenitor. We identified two candidate dual AGNs residing in apparent minor mergers with mass ratios of $\sim$1:7 and $\sim$1:30. SDSS fiber spectra show broad and narrow emission lines in the primary nuclei of each merger while only a narrow [O III] emission line and a broad and prominent H$\alpha$/[N II] complex is observed in the secondary nuclei. The FWHMs of the broad H$\alpha$ lines in the primary and secondary nuclei are inconsistent in each merger, suggesting that each nucleus in each merger hosts a Type 1 AGN. However, spatially-resolved LBT optical spectroscopy reveal rest-frame stellar absorption features, indicating the secondary sources are foreground stars and that the previously detected broad lines are likely the result of fiber spillover effects induced by the atmospheric seeing at the time of the SDSS observations. This study demonstrates for the first time that optical spectroscopic searches for Type 1/Type 1 pairs similarly suffer from fiber spillover effects as has been observed previously for Seyfert 2 dual AGN candidates. The presence of foreground stars may not have been clear if an instrument with more limited wavelength range or limited sensitivity had been used.
Ryan W. Pfeifle, Barry Rothberg, Kimberly A. Weaver, Remington O. Sexton, Jenna M. Cann, Nathan J. Secrest, Michael A. Reefe, Thomas Bohn
2023-02-17T19:00:11Z
http://arxiv.org/abs/2302.09095v1
# The Messy Nature of Fiber Spectra: Star-Quasar Pairs Masquerading as Dual Type 1 AGNs ###### Abstract Theoretical studies predict that the most significant growth of supermassive black holes occurs in late-stage mergers, coinciding with the manifestation of dual active galactic nuclei (AGNs), and both major and minor mergers are expected to be important for dual AGN growth. In fact, dual AGNs in minor mergers should be signposts for efficient minor merger-induced SMBH growth for both the more and less massive progenitor. We identified two candidate dual AGNs residing in apparent minor mergers with mass ratios of \(\sim\)1:7 and \(\sim\)1:30. SDSS fiber spectra show broad and narrow emission lines in the primary nuclei of each merger while only a narrow [O iii] emission line and a broad and prominent H\(\alpha\)/[N ii] complex is observed in the secondary nuclei. The FWHMs of the broad H\(\alpha\) lines in the primary and secondary nuclei are inconsistent in each merger, suggesting that each nucleus in each merger hosts a Type 1 AGN. However, spatially-resolved LBT optical spectroscopy reveal rest-frame stellar absorption features, indicating the secondary sources are foreground stars and that the previously detected broad lines are likely the result of fiber spillover effects induced by the atmospheric seeing at the time of the SDSS observations. This study demonstrates for the first time that optical spectroscopic searches for Type 1/Type 1 pairs similarly suffer from fiber spillover effects as has been observed previously for Seyfert 2 dual AGN candidates. The presence of foreground stars may not have been clear if an instrument with more limited wavelength range or limited sensitivity had been used. Active Galactic Nuclei - Active Galaxies 
## 1 Introduction Mergers of galaxies (e.g., Toomre & Toomre, 1972; Rothberg & Joseph, 2004) are a ubiquitous phenomenon in the universe and represent a natural consequence of our current cosmological paradigm of "bottom-up" mass accumulation and galaxy evolution. During the merger process, tidal torques drive inflows of gas into the galaxy nuclei (e.g., Barnes & Hernquist, 1996) which can fuel both star formation (e.g., Barnes & Hernquist, 1991; Mihos & Hernquist, 1996) and the growth of the central supermassive black holes (SMBHs, e.g., Hopkins et al., 2006, 2008), and this avenue for mass redistribution and transformation into stellar and/or SMBH mass has been postulated as one way the known scaling relations between the masses of the central SMBHs and the velocity dispersions (M-\(\sigma\), e.g., Ferrarese & Merritt, 2000; Gebhardt et al., 2000) or the luminosities (M-L, e.g., Gultekin et al., 2009) of the host spheroids are established. It is generally accepted that the peak of active galactic nucleus (AGN) accretion activity - and the point at which a merger should host a dual AGN - should occur at nuclear separations of \(<10\) kpc based upon numerical simulations of major mergers (mass ratios \(<\)1:3, e.g., Van Wassenhove et al., 2012; Capelo et al., 2015; Blecha et al., 2018), and simulations appear to show this is also the case for minor mergers (mass ratios \(>\)1:3) (Callegari et al., 2009, 2011; Capelo et al., 2015), the latter point of which is particularly interesting because minor mergers outnumber major mergers by a factor of \(\geq 3\)-4 at late epochs (e.g., Kaviraj, 2014, and references therein). 
Numerical and hydrodynamic simulations offer a relatively consistent physical picture of merger-induced SMBH growth and evolution in minor mergers, comprising three key features: (1) substantial growth of the SMBH in the less massive progenitor and the triggering of dual AGNs when the nuclear separations are \(<10\) kpc (Callegari et al., 2009, 2011; Van Wassenhove et al., 2012; Capelo et al., 2015); (2) the drastic evolution of the merger mass ratio (Callegari et al., 2011; Capelo et al., 2015); and (3) the eventual formation of massive bound binaries (Callegari et al., 2009, 2011; Khan et al., 2012; Capelo et al., 2015). Although the average duty cycle for a dual AGN in a 1:10 merger is far shorter than in gas-rich major mergers (Callegari et al., 2011; Van Wassenhove et al., 2012; Capelo et al., 2015, 2017), the SMBH hosted in the minor companion should grow 2- to 10-fold in mass during the evolution of the merger (Callegari et al., 2011; Khan et al., 2012; Capelo et al., 2015)! This suggests that AGN growth in minor mergers could be an important SMBH growth channel and a pathway to synchronized AGN growth, and indeed a handful of dual AGNs have been observed in late-stage minor mergers: Was 49b (Bothun et al., 1989; Moran et al., 1992; Secrest et al., 2017), J0924+0510 (Liu et al., 2018), IRAS 05589+2828 (Koss et al., 2012), and J1126+2944 (a 460:1 mass ratio merger; Comerford et al., 2015), all of which present as optical Type 2 dual AGNs.1 However, there exists no confirmed case of a kiloparsec-scale dual AGN minor merger in which both nuclei harbor optically Type 1 AGNs.2 Dual Type 1/Type 1 AGNs have been observed at higher redshifts, but only in major (i.e. nearly equal mass) mergers (e.g., Silverman et al., 2020). 
Footnote 1: Was 49b exhibits very broad H\(\alpha\) (\(\sim 6400\) km s\({}^{-1}\); Secrest et al., 2017), but this broad emission has a low equivalent width and is known to be highly polarized (e.g., Tran, 1995), indicating that it is likely scattered broad line emission. Footnote 2: Foord et al. (2020) recently identified J2356-1016, a minor merger with at least one AGN (Pfeifle et al., 2019a), as a dual AGN candidate based on Chandra X-ray imaging and SDSS optical spectra. Their analysis of the SDSS fiber spectra revealed tentative evidence for broad lines in the optical spectra of both nuclei, though follow-up observations are required to confirm these results. In a separate study of AGNs in mergers, we came across SDSS J142129.75+474724.5 (hereafter J1421+4747, \(z=0.073\)), which presents clear tidal features and consists of a primary galaxy with a bright central nucleus and an off-nuclear, dimmer secondary source (see Figure 1), with a projected pair separation of 7.2 kpc (5\({}^{\prime\prime}\)) and velocity offset of \(|\Delta\,v|\approx 180\) km s\({}^{-1}\). SDSS spectroscopic fiber measurements are available for both the primary nucleus and the offset, secondary source, revealing broad optical emission lines in both sources, highly suggestive of a dual Type 1/Type 1 AGN system. The bright, primary nucleus displays prominent, broad emission lines and a few narrow emission lines and is classified as a Seyfert 1.5 (SBS 1419+480, e.g. Baumgartner et al., 2013). The dimmer, secondary source exhibits a strong host galaxy continuum, a narrow [O iii] emission line, and a very prominent and broad H\(\alpha\)/[N ii] emission line complex. Mass measurements from the SDSS and Chen et al. 
(2018) indicate bulge masses of \(\log(M/M_{\odot})\sim 10.8\) and \(\sim 9.3\) for the primary and secondary sources, yielding a bulge mass ratio of \(\sim 1\):29, which--if true--would make J1421+4747 a dual Type 1/Type 1 AGN in a minor merger with one of the largest mass ratios of any known dual AGNs. Although we stumbled upon J1421+4747 serendipitously, we used the SDSS specObj table for DR16 with a similar selection strategy to try and uncover further cases of Type 1/Type 1 dual AGNs in the local universe. We retrieved all groups of spectra within 1 arcmin and 1000 km s\({}^{-1}\) of each other that have "BROAD" in their SUBCLASS. We removed groups of spectra that corresponded to the same photo-objid (BESTOBJID), and then kept only those with projected separations less than 10 kpc. This left 60 pairs of spectra which were manually inspected. We removed those for which one or both members of the pair are classified as "GALAXY BROADLINE", since these are apparently absorption-dominated stellar spectra with no obvious broad emission lines. Finally, we were left with 13 systems, each with a pair of spectra that appear to both show broad emission features. This selection strategy recovered J1421+4747, the known dual AGN Was 49 (e.g., Secrest et al., 2017), and the little-known dual AGN J1558+2723 (the G3-G5 complex from Eckert et al., 2017). One other system represented a convincing Type 1/Type 1 candidate, SDSS J171322.58+325627.9 (hereafter J1713+3256, \(z=0.102\)), which - like J1421+4747 - showed broad and narrow emission lines in the primary nucleus and a broad H\(\alpha\)/[N ii] complex and very narrow [O iii] narrow emission line in the secondary source; the SDSS measurements of the sources in J1713+3256 indicated a mass ratio of \(\sim 1\):7, suggesting it was also a dual Type 1/Type 1 AGN in a minor merger. 
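The angular-separation and velocity-offset cuts described above can be sketched as follows. This is an illustrative reimplementation, not the authors' actual SDSS query; the dictionary keys (`ra`, `dec`, `z`) and the externally supplied `kpc_per_arcsec` scale are assumptions for the sketch.

```python
import math

C_KMS = 299_792.458  # speed of light [km/s]

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation on the sky [arcsec] via the haversine formula.

    Inputs are coordinates in decimal degrees.
    """
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def is_candidate_pair(obj1, obj2, kpc_per_arcsec,
                      max_kpc=10.0, max_dv_kms=1000.0):
    """Apply the projected-separation (<10 kpc) and velocity (<1000 km/s) cuts.

    `kpc_per_arcsec` is the proper transverse scale at the pair's redshift.
    """
    theta = ang_sep_arcsec(obj1["ra"], obj1["dec"], obj2["ra"], obj2["dec"])
    dv = abs(C_KMS * (obj2["z"] - obj1["z"]) / (1.0 + obj1["z"]))
    return theta * kpc_per_arcsec <= max_kpc and dv <= max_dv_kms
```

For example, two nuclei separated by 5 arcsec at a scale of ~1.34 kpc/arcsec (roughly the value at z = 0.07 in the adopted cosmology) pass the 10 kpc cut at a projected separation of ~6.7 kpc.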
The remaining nine systems selected from SDSS were not convincing Type 1/Type 1 candidates3; four of these systems have been shown to suffer from fiber spillover effects (e.g., Husemann et al., 2020) and are likely single AGNs, while the remaining five systems have fiber positions that overlap and the resulting spectra are therefore likely to include substantial contributions from both nuclei. However, given the \(\lesssim 5\arcsec\) separation of the nuclei in each system and the fact that the spectra were obtained using \(3\arcsec\) or \(2\arcsec\) diameter fibers, it is possible that the broad line observed in both secondary nuclei is spillover light from the central nuclei (as a result of inadequate atmospheric seeing), as has been shown to be an issue for narrow line emission in Husemann et al. (2020). To investigate the true nature of these unique systems, we have reanalyzed the SDSS spectrum of each nucleus in each merger, and we have obtained and analyzed new, spatially resolved long slit optical spectra of J1421+4747 and J1713+3256 from the Large Binocular Telescope observatory (LBT) using the Multi-Object Double Spectrograph (MODS-2). We organize this work as follows: we describe the SDSS and LBT MODS-2 observations of each nucleus in Section 2 and we discuss our analysis of these spectra in Section 3. We present our spectroscopic results from the SDSS and LBT spectra in Section 4, discuss these results in the context of the literature in Section 5, and we provide our conclusions in Section 6. We provide details on all 13 'Type 1/Type 1' systems selected from SDSS in Appendix A. Throughout this work we assume the following cosmology: \(\rm H_{0}=70\,km\,s^{-1}\,Mpc^{-1}\), \(\rm\Omega_{M}=0.3\), and \(\rm\Omega_{\Lambda}=0.7\); in this cosmology, \(5\arcsec\) corresponds to a projected separation of 6.7 kpc at \(z=0.07\). 
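The quoted angular-to-physical conversion can be checked directly from the stated cosmology. Below is a minimal, dependency-free numerical sketch (assuming flat ΛCDM with the parameters given above; a library such as `astropy.cosmology` would give the same result):

```python
import math

H0 = 70.0           # Hubble constant [km/s/Mpc]
OM, OL = 0.3, 0.7   # flat Lambda-CDM density parameters
C_KMS = 299_792.458 # speed of light [km/s]

def kpc_per_arcsec(z, steps=10_000):
    """Proper transverse scale [kpc/arcsec] at redshift z in a flat universe.

    Comoving distance via trapezoidal integration of c / H(z');
    angular-diameter distance is D_A = D_C / (1 + z).
    """
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zp = i * dz
        w = 0.5 if i in (0, steps) else 1.0
        integral += w / math.sqrt(OM * (1 + zp) ** 3 + OL)
    d_c = (C_KMS / H0) * integral * dz   # comoving distance [Mpc]
    d_a = d_c / (1 + z)                  # angular-diameter distance [Mpc]
    arcsec_in_rad = math.pi / (180 * 3600)
    return d_a * arcsec_in_rad * 1000.0  # Mpc/rad -> kpc/arcsec

# 5 arcsec at z = 0.07 -> ~6.7 kpc, matching the value quoted in the text
```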
## 2 Observations and Data Processing ### SDSS Spectroscopy The primary nucleus in J1421+4747 was observed with the Baryon Oscillation Spectroscopic Survey (BOSS) spectrograph (Smee et al., 2013; Dawson et al., 2013) on UT 17 March 2013 using a \(2\arcsec\) diameter fiber (fiber ID 946, plate ID 6751) with median seeing \(\sim 1\farcs 5\) (SEEING50 entry from the 'Plate' data table). The secondary nucleus in J1421+4747 was observed as a part of the SDSS Legacy Survey on UT 31 March 2005 using a \(3\arcsec\) diameter fiber (fiber ID 55, plate ID 1672) with median seeing \(\sim 1\farcs 8\). We show the fiber positions on the archival HST and SDSS imaging in Figure 1. Both the primary and secondary nuclei in J1713+3256 were observed as a part of the SDSS Legacy Survey. The primary nucleus was observed on 19 May 2002 UT using a \(3\arcsec\) diameter fiber (fiber ID 619, plate ID 976), while the secondary nucleus was observed on 05 May 2008 UT using a \(3\arcsec\) diameter fiber (fiber ID 284, plate ID 2973). Both the primary and secondary source in J1713+3256 were observed with a median seeing of \(\sim 1\farcs 8\). We show the fiber positions on the SDSS imaging in Figure 2. ### LBT Optical Spectroscopy Optical spectroscopy of J1421+4747 and J1713+3256 (Program ID: LD-2022A-003) was obtained with the Large Binocular Telescope (LBT) on Mt. Graham, Arizona, USA on MJD 59473 (UT 16 September 2021) and MJD 59671 (UT 02 April 2022). LBT is a binocular telescope with two 8.4 meter primary mirrors residing on a single altitude-azimuth mount. Each side contains a prime focus camera, four bent Gregorian f/15 ports, and a direct f/15 Gregorian port. All observations used both optical Multi-Object Double Spectrographs, which are mounted on the direct Gregorian f/15 ports behind each primary mirror (Pogge et al., 2010). Each MODS contains a red and blue channel for spectroscopy, and both were used (Dual Grating mode) to provide coverage from \(0.32\,\mu{\rm m}-1.05\,\mu{\rm m}\). 
A \(1\arcsec\) wide longslit (R \(\sim 900\)-\(1500\)) was used, and each MODS was placed at a position angle such that both nuclei were aligned in the slit for each set of observations. Two exposures of 1800 seconds were taken in each channel; however, the 2nd exposure for J1421+4747 was read out early due to the object reaching the elevation limits of the telescope. The total integration times were 3100 seconds for J1421+4747 and 3600 seconds for J1713+3256, and the mean airmasses of the observations were 1.99 and 1.02, respectively. Observations of the spectro-photometric standards Feige 110 and BD+33 2642 were used to flux calibrate the data and remove the instrumental signatures from the data for J1421+4747 and J1713+3256, respectively. The MODS data were reduced first with modsCCDRed (Pogge, 2019) version 2.04 to remove bias and flat-field the data using a slitless pixel flat.

Figure 1: The HST optical morphology of SDSS J1421+4747. (Left): The main figure displays the HST F814W image using a log scaling and a perceptually uniform color map. The scale bar in the lower left corner indicates a distance of \(5\arcsec\) or \(6.9\) kpc at \(z=0.073\). The dashed white circles indicate the positions (and radii) of the SDSS fibers. The sources are separated by \(7.2\,{\rm kpc}\) (\(5\farcs 04\), based on the fiber positions) with a velocity offset of \(|\Delta\,v|\approx 180\,{\rm km\ s^{-1}}\). The primary nucleus was observed with the BOSS spectrograph using a \(2\arcsec\) diameter fiber while the secondary nucleus was observed as a part of the SDSS Legacy Survey using a \(3\arcsec\) diameter fiber. The lower right panel displays the lower resolution SDSS r-band image with the same scaling, color scheme, and zoom as the main panel, and the dashed black circles again indicate the SDSS fiber positions and sizes. The upper right panel shows a zoom-in view (\(3\arcsec\times 3\arcsec\)) of the secondary source, indicated by the dotted square in the main panel.
Next, custom IRAF scripts were used to: rectify the tilt in both X and Y using a trace from the spectrophotometric star and wavelength calibration from arc-lamp lines; correct the final wavelength calibration using known strong auroral skylines in the blue ([OI] \(\lambda\)=557.7338 nm) and red ([OI] \(\lambda\)=630.03 nm) channels; extract a one-dimensional spectrum from each channel; flux calibrate the data using the spectro-photometric standard star; remove telluric features from the red channels using the normalized spectro-photometric standard spectrum; and combine the red and blue channels of each MODS (including re-sampling both channels to a common value of 0.85 \(\AA\) per pixel corresponding to an instrumental resolution of 5.8 \(\AA\), and correcting the data to a heliocentric velocity). Two different 1D apertures were extracted for each nucleus in each target. A 3\({}^{\prime\prime}\) diameter aperture was extracted, centered on the second "nucleus" for both targets, to match and compare with the spectra from the SDSS fiber. A 1\({}^{\prime\prime}\).25 diameter aperture was extracted for both "nuclei" in J1421+4747 and a 0\({}^{\prime\prime}\).65 diameter aperture was extracted for both "nuclei" in J1713+3256. These aperture sizes were selected based on the mean seeing over the course of the observations, as determined from telemetry from the off-axis guider and wavefront sensor for each MODS. After reviewing the final combined blue+red spectrum from each MODS, specifically a visual comparison of the rest-frame stellar absorption lines in the second "nucleus" of each target, it was decided to analyze only the data obtained with MODS-2 for both targets in order to maximize the S/N. Combining the data from the two MODS resulted in reduced S/N in the final spectrum of the second object in the slit. 
This is due to a known technical issue in which the sensitivity of MODS-1 has decreased by a factor of 1.6 since the 2011 commissioning.4,5 Footnote 4: [https://scienceops.lbto.org/mods/preparing-to-observe/sensitivity/](https://scienceops.lbto.org/mods/preparing-to-observe/sensitivity/) Footnote 5: The cause of this degradation has yet to be localized to a single source, and may be some combination of reduced sensitivity of various optical components in MODS-1 and reduced reflectivity at optical wavelengths from the Adaptive Secondary Mirror coating. ## 3 Data Analysis To fit the spectra from SDSS and LBT, we used the open-source spectral analysis program Bayesian AGN Decomposition Analysis for SDSS Spectra (BADASS; Sexton et al., 2021).6 Footnote 6: [https://github.com/remingtonsexton/BADASS3](https://github.com/remingtonsexton/BADASS3) BADASS uses the affine-invariant Markov-Chain Monte Carlo sampler emcee for robust parameter and uncertainty estimation, and it can fit simultaneously for a variety of spectral features, including: individual narrow, broad, and/or absorption line features, the stellar line-of-sight velocity distribution (LOSVD) through the use of the penalized PiXel-Fitting (pPXF) method from Cappellari (2017), broad and narrow Fe ii emission features, the AGN power law continuum, and blue-wing outflow emission components.
For the primary nucleus in J1421+4747, we also modeled additional blue-shifted components as simple Gaussians; we refer to these components as outflows throughout the remainder of this work, but it is important to note that the complex shape of the lines could be due to outflows in the narrow components, complex kinematics in the Figure 2: SDSS Image and Optical Spectra for J1713+3256. (Left): The SDSS r-band image of J1713+3256 shows a larger, primary galaxy along with a smaller off-nuclear knot. The nuclei are separated by 7.9 kpc (4\(\farcs\)32) with a velocity offset of \(|\Delta v|\sim 46\) km s\({}^{-1}\). The scale bar indicates an angular size of 5\({}^{\prime\prime}\), or 9.2 kpc at \(z=0.1\). The dashed white circles indicate the positions (and radii) of SDSS fibers. Each nucleus was observed using a 3\({}^{\prime\prime}\) diameter fiber as part of the SDSS Legacy program. broad line component, the specific viewing angles, or a combination of the these effects. For primary nuclei in J1421+4747 and J1713+3256, the S/N of the spectra allowed us to fit for the stellar LOSVD and the AGN continua. Given the lower S/N of the continuum in each of the secondary nuclei, we elected to fit only for the broad and narrow emission lines, the intrinsic AGN power law continuum, and the broad and narrow Fe ii optical lines and we did not fit for the stellar LOSVD or for the host stellar population template. In a few cases, such as the H\(\beta\) emission line in the spectrum of the primary nucleus of J1421+4747, simple Gaussians do not well reproduce the shape of the broad, narrow, and outflow components of the line; BADASS does allow the user to choose other line profiles--such as Voigt or Gauss-Hermite profiles--but as we show in Section 4, simple Gaussian profiles are sufficient for this study. To reduce computation times, we fit the [O iii] and H\(\alpha\)/[N ii] regions separately (4400-5500A and 6200-6900A, respectively). 
## 4 Results In fitting the SDSS spectra of these candidate dual AGNs, we were chiefly interested in confirming the presence of broad H\(\alpha\) emission lines within the secondary nuclei, and we therefore measure the full-width at half maximum (FWHM) of the broad lines to compare the kinematics of the lines in the primary and secondary nuclei. In the case of J1421+4747, we used the line testing feature in BADASS to find that a broad H\(\alpha\) line was needed to reproduce the observed line with \(>99.9\%\) confidence based on the BADASS A/B likelihood test (significance \(>3\sigma\)). We found FWHMs of the broad H\(\alpha\) emission lines of \(6073^{+24}_{-46}\) km s\({}^{-1}\) and \(4290^{+119}_{-118}\) km s\({}^{-1}\) in the primary and secondary nuclei in J1421+4747, where these FWHM values differ at the \(3\sigma\) level7, suggesting two kinematically distinct, and thus physically distinct, broad line emission regions (i.e. two distinct BLRs). For J1713+3256, we found that the broad emission line in the secondary source was detected with only \(\sim 48\%\) confidence and a significance of only \(\sim 0.5\sigma\) based on the A/B likelihood test (suggesting a misclassification by the SDSS pipeline, but the S/N of the spectrum warranted follow-up observations); the FWHM values of the broad H\(\alpha\) emission lines in the primary (\(4649^{+55}_{-84}\) km s\({}^{-1}\)) and secondary (\(3560^{+318}_{-190}\) km s\({}^{-1}\)) nuclei differ at the \(\sim 2.7\sigma\) level, suggesting that J1713+3256 also hosts two kinematically distinct broad line regions (if the broad line in the secondary nucleus is real). 
The FWHM values derived for these two systems are tabulated in Table 1; given the inconsistent widths of the broad H\(\alpha\) emission lines between the primary and secondary nuclei within each system, the SDSS spectra would suggest we are observing distinct broad line regions associated with each source in each merger, providing evidence for the presence of dual Type 1 AGNs.

Footnote 7: However, these uncertainties are derived from posteriors corresponding to the assumed Gaussian line model, so these uncertainties may be underestimated; see Section 3.2 from Secrest et al. (2017).

Figure 3: Rest-frame SDSS fiber spectra fit with BADASS. Top to bottom: the H\(\alpha\)\(\lambda\)6563/[N ii]\(\lambda\)6584 spectral region for J1421+4747 primary, J1421+4747 secondary, J1713+3256 primary, J1713+3256 secondary. The various spectral components are listed in the legends in the top right corner of each plot; the final total spectral fit is given as the red line tracing the SDSS spectrum. The [S ii]\(\lambda\lambda\)6717,6733 doublet is also visible in the primary sources in J1421+4747 and J1713+3256.

Going beyond the broad H\(\alpha\) emission line observed in the SDSS spectrum of each nucleus, we also examined the optical spectroscopic emission line ratios (Baldwin et al., 1981; Kewley et al., 2001; Kauffmann et al., 2003; Kewley et al., 2006) derived from each spectrum; we quote only upper or lower limits for the [O iii]/H\(\beta\), [S ii]/H\(\alpha\), and [O i]/H\(\alpha\) emission line ratios in the secondary nuclei due to the lack of detections (or strong detections) of the H\(\beta\), [S ii], and [O i] lines in the secondary spectra. In each case, the observed line ratios (including the upper/lower limits) fall within the Seyfert region of each BPT diagram, suggesting once again the presence of multiple AGN in each system.
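The Seyfert placements quoted here can be spot-checked against the standard demarcation curves of the [N ii]/H\(\alpha\) BPT diagram. A minimal sketch using the Kewley et al. (2001) maximum-starburst line and the Kauffmann et al. (2003) empirical star-forming line (the [S ii] and [O i] diagnostics of Kewley et al. 2006 are not reproduced here):

```python
def kewley01(log_nii_ha):
    """Kewley et al. (2001) theoretical maximum-starburst line, [N II]/Ha diagram."""
    return 0.61 / (log_nii_ha - 0.47) + 1.19

def kauffmann03(log_nii_ha):
    """Kauffmann et al. (2003) empirical pure star-formation line."""
    return 0.61 / (log_nii_ha - 0.05) + 1.3

def classify_nii_bpt(log_nii_ha, log_oiii_hb):
    """Classify a source on the [O III]/Hb vs. [N II]/Ha BPT diagram."""
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann03(log_nii_ha):
        return "star-forming"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley01(log_nii_ha):
        return "composite"
    return "AGN"

# Primary-nucleus line ratios (log [N II]/Ha, log [O III]/Hb) from Table 1
j1421_primary = classify_nii_bpt(-0.10, 0.72)
j1713_primary = classify_nii_bpt(-0.07, 0.90)
```

Feeding in the primary-nucleus ratios from Table 1 places both sources above the Kewley et al. (2001) curve, consistent with the Seyfert placements described in the text.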
We also searched the spectra of the secondary sources for evidence of rest-frame stellar absorption features to rule out a scenario in which each pair comprises a foreground star and background AGN; we identified potential rest-frame Na D absorption lines (rest-frame \(\lambda\)5890 and \(\lambda\)5896) in both of the secondary sources, but given the S/N of the spectra and the lack of additional absorption features seen in the SDSS spectra, it was unclear if these absorption lines were genuine. Follow-up observations were needed to elucidate this issue, and for the time being we disfavored the foreground star-background AGN scenario. At first glance, these results would suggest evidence for true Type 1 dual AGNs, but the true nature of these systems was only unveiled through the use of higher spatial resolution LBT spectroscopic observations.8 We show the 2D spectra for J1421+4747 and J1713+3256 in Figure 4 around the H\(\alpha\)/[N ii] and [O iii] emission line regions; strong emission lines in the primary nuclei are present, while the secondary sources show only a strong continuum without obvious signs of emission lines. We extracted the 1D spectra of the secondary sources using a 1\(\farcs\)25 diameter aperture in the case of J1421+4747 (Figure 5) and a 0\(\farcs\)65 diameter aperture in the case of J1713+3256 (Figure 6) and we used BADASS to fit the 1D spectra and test the significance of broad and narrow line components. In J1421+4747, we again identified a broad H\(\alpha\) line (99.0% confidence based on the A/B likelihood test), although the H\(\alpha\)/[N ii] emission line complex is offset by roughly 30Å redward of what was observed in the SDSS spectrum. More puzzling is the fact that the FWHM of the broad H\(\alpha\) line was found to be 8064\({}^{+445}_{-443}\) km s\({}^{-1}\), nearly twice that found when analyzing the SDSS spectrum.
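For scale, a wavelength shift like the \(\sim\)30Å redward offset noted above can be translated into a velocity with the non-relativistic Doppler relation \(\Delta v=c\,\Delta\lambda/\lambda\). A back-of-the-envelope sketch (the exact shift depends on the wavelength calibration and line centroiding):

```python
C_KMS = 299792.458  # speed of light, km/s

def velocity_offset_kms(delta_lambda, lam):
    """Non-relativistic Doppler velocity implied by a wavelength shift.
    Both arguments in the same units (e.g., Angstroms)."""
    return C_KMS * delta_lambda / lam

# ~30 A redward shift of the H-alpha complex (rest wavelength 6563 A)
dv_halpha = velocity_offset_kms(30.0, 6563.0)

# ~6 A blueward offset of the Ca triplet lines near 8542 A (see footnote 9)
dv_ca = velocity_offset_kms(6.0, 8542.0)
```

A \(\sim\)30Å shift at H\(\alpha\) corresponds to roughly 1400 km s\({}^{-1}\), whereas the \(\sim\)6Å Ca-triplet offset corresponds to only \(\sim\)200 km s\({}^{-1}\), comparable to the quoted instrumental resolution.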
Footnote 8: The spectral resolution of our LBT spectra was lower than that of the SDSS spectra due to our use of a 1\(\arcsec\) slit, but this does not affect our results. Spatial, rather than spectral, resolution was the critical requirement for this work.

While the broad H\(\alpha\) detection on its own may have offered strong evidence that this system was a true dual AGN (but ignoring the unusually large discrepancy between the SDSS and LBT FWHM values), the 1D spectrum also revealed a rest-frame Ca triplet absorption line system (rest-frame 8500Å, 8544Å, and 8664Å)9, indicating that the continuum most likely arises from a foreground star and that the broad H\(\alpha\) line and H\(\alpha\)/[N ii] complex are likely spillover from the primary nucleus. In the case of J1713+3256, the 1D spectrum lacks any characteristic signatures of Type 1 or Type 2 AGNs, and instead displays characteristic rest-frame stellar absorption features: a rest-frame Ca triplet absorption line system and rest-frame Ca K+H absorption lines (rest-frame 3934Å and 3969Å). Furthermore, we confirmed the presence of rest-frame Na D absorption lines (rest-frame \(\lambda\)5890 and \(\lambda\)5896) in the 1D spectra of the secondary sources in both J1421+4747 and J1713+3256. Thus, the LBT data offer strong evidence that the secondary sources are in fact foreground stars and that each of these systems unfortunately represents the chance alignment of a foreground star and background AGN that only presented as a dual AGN candidate as a result of inadequate atmospheric seeing at the time of the original observations. To confirm this, we cross-matched the secondary sources with Gaia DR3 (Gaia Collaboration et al., 2022), finding that both exhibit highly significant proper motions: 11.8 mas yr\({}^{-1}\) for J1421+4747 and 10.3 mas yr\({}^{-1}\) for J1713+3256.
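The Gaia proper motions are decisive: if either secondary source were at the distance of its background AGN (\(z\approx 0.1\)), a motion of \(\sim\)10 mas yr\({}^{-1}\) would imply a hugely superluminal transverse velocity, whereas for a Galactic star it is unremarkable. A rough sketch (assuming a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.3\), which may differ slightly from the cosmology adopted in this work; the 1 kpc stellar distance is purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458
H0 = 70.0                    # km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7  # assumed flat LCDM

def angular_diameter_distance_mpc(z):
    """Flat LCDM angular-diameter distance (assumed cosmology)."""
    integrand = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    comoving_mpc = (C_KMS / H0) * quad(integrand, 0.0, z)[0]
    return comoving_mpc / (1.0 + z)

def transverse_velocity_kms(mu_mas_per_yr, distance_pc):
    """v_t [km/s] = 4.740 * mu [arcsec/yr] * d [pc]."""
    return 4.740 * (mu_mas_per_yr / 1.0e3) * distance_pc

d_a_pc = angular_diameter_distance_mpc(0.1) * 1.0e6
kpc_per_arcsec = d_a_pc / 206265.0 / 1.0e3

v_if_extragalactic = transverse_velocity_kms(11.8, d_a_pc)  # J1421+4747 secondary
v_if_nearby_star = transverse_velocity_kms(11.8, 1.0e3)     # star at ~1 kpc (assumed)
```

The same distance also reproduces the \(\sim\)1.8 kpc arcsec\({}^{-1}\) scale implied by the 9.2 kpc scale bar at \(z=0.1\) quoted for Figure 2, while the extragalactic case implies a transverse velocity tens of times the speed of light.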
While apparent proper motions have recently been linked to AGN multiplicity as well as source extent in low-\(z\) systems (Souchay et al., 2022; Makarov and Secrest, 2022), the spectroscopic and astrometric evidence taken together strongly favors the secondary sources being foreground stars.

Footnote 9: These lines are offset blueward by \(\sim 6\)Å from their rest-frame wavelengths, but this offset may not be significant given the instrumental resolution of 5.8Å.

But what of the previously identified broad line in the secondary source of J1713+3256? The LBT 2D spectra gave us an initial clue: there is clear extended emission originating from the primary sources in both J1421+4747 and J1713+3256 that overlaps the continua of the secondary sources; it is this very extended emission that likely gives rise to the H\(\alpha\)/[N ii] complex still observed in the secondary source in J1421+4747, while the observation of J1713+3256 was taken under better seeing conditions and does not suffer as drastically. In an attempt to reproduce what is seen in the SDSS fiber spectra of J1713+3256, we re-extracted the 1D spectra of the secondary sources using a 3\(\arcsec\) diameter aperture (matched in size to the SDSS fibers) and refit the spectrum using BADASS.

Figure 4: LBT 2D optical spectra for (top) J1421+4747 and (bottom) J1713+3256. (Left) The spectral region around the [O iii]\(\lambda\)5007,\(\lambda\)4959 doublet. (Right) The spectral region around the H\(\alpha\)/[N ii] emission line region. Strong emission lines are observed in the spectra of the primary nuclei, while fainter and fairly featureless continua are observed in the secondary sources. Extended emission is observed and originates from the primary nuclei.
The broad H\(\alpha\) and narrow H\(\alpha\)/[N ii] emission lines still remain undetected; this is likely due to the fact that the original SDSS spectra were taken in far worse observing conditions (median seeing of \(\sim 1\farcs 8\) for both the primary and secondary nuclei of J1713+3256), while the LBT data were taken in nearly pristine conditions (average seeing \(0\farcs 5\pm 0\farcs 03\) for J1713+3256). A possible explanation for the discrepant broad H\(\alpha\) FWHM values - when comparing the two nuclei in SDSS (as well as when comparing the SDSS FWHM to that measured with LBT in the case of J1421+4747) - could lie in (1) the size of the extraction apertures and (2) the complex spectral shape and the spatial distribution of the H\(\alpha\) emission (as seen in the 2D spectra): in the case of SDSS, the poorer seeing conditions led to the smearing of the primary nuclear emission, allowing it to be observed in the secondary fiber, but the 3\(\arcsec\) diameter fibers are also effectively averaging emission from a large portion of the galaxy (offset from the nucleus), complicating the measured FWHM and resulting in values that were discrepant with the primary source. With LBT, we are likely better resolving the complex emission from several kinematically distinct regions within the background galaxy but with less contamination from the primary nucleus; however, the LBT measurement of the secondary source still comprises a combination of the foreground star and spatially resolved emission from areas outside of the galaxy's nucleus. It may not be surprising, then, that the SDSS and LBT broad H\(\alpha\) FWHMs do not match. As for the optical classifications based upon the BPT narrow emission line ratios, these AGN signatures are undoubtedly the result of the fiber spillover contamination (e.g., Husemann et al., 2020).
However, other phenomena such as extended narrow line regions (e.g., Hainline et al., 2014) or cross-ionization (specifically when two galaxies are involved, e.g., Keel et al., 2019) could also mimic dual AGN signatures.

## 5 Discussion

### The Necessity of Spatially Resolved Spectroscopy

Though an analysis of the SDSS optical fiber spectra alone would suggest the presence of dual Type 1/Type 1 AGNs in these presumed minor mergers, our follow-up LBT MODS-2 spectroscopic observations have now clearly shown that these systems comprise foreground stars and background AGNs caught in projection. A foreground star scenario was originally disfavored in both cases due to the absence of unambiguous rest-frame stellar absorption features (candidate Na D absorption lines were spotted in both secondary sources, but follow-up observations were needed to investigate this); a background AGN scenario was ruled out by the lack of lines significantly redshifted relative to the primary galaxy in each system. The measured FWHM values of the broad H\(\alpha\) components were inconsistent when comparing the primary nuclei to the secondary nuclei, but the SDSS optical spectra alone could not definitively exclude the possibility that the fiber spectra of the secondary nuclei were contaminated by light from the primary nuclei; spatially resolved spectroscopy was required to falsify the original result. This work has clearly demonstrated - for the first time for dual AGN candidates - that fiber spillover for broad line AGNs can be a significant contaminant in spectroscopic campaigns searching for closely separated dual AGNs, as suggested by Husemann et al. (2020) in a study on fiber spillover in narrow line dual AGN candidates. Furthermore, the results of this study emphasize the need for optimal atmospheric seeing conditions when obtaining spectroscopic observations of closely separated sources in order to avoid significant fiber spillover. Husemann et al.
(2020) developed a simple model using a 2D Moffat function to quantify the effect of fiber spillover via the flux ratio of two sources (\(f_{primary}/f_{secondary}\)) as a function of the separation between the two sources, the seeing, and the \(\beta\) parameter (Figure 2 in Husemann et al., 2020).

Figure 5: LBT optical 1D spectra of the secondary source in J1421+4747. Top: the H\(\alpha\) spectral region for J1421+4747 secondary, which shows the BADASS fit to the continuum and observed emission lines, including a broad H\(\alpha\) line and a narrow H\(\alpha\)/[N ii] emission line complex. Middle and bottom: the detected rest-frame Ca triplet and Na D absorption line systems, indicating that the secondary source is in fact a foreground star and that the observed emission lines in the H\(\alpha\) region in both the SDSS and LBT data arise from spillover from the primary nucleus.

This toy model offers an excellent check against fiber spillover for broad line AGNs as examined here: the SDSS flux ratios for the narrow [O iii] fluxes and the broad H\(\alpha\) fluxes in J1421+4747 were \(\log_{10}(F_{\rm H\alpha,P}/F_{\rm H\alpha,S})=2.253\) and \(\log_{10}(F_{\rm[OIII],P}/F_{\rm[OIII],S})=2.131\), and for J1713+3256 we found \(\log_{10}(F_{\rm H\alpha,P}/F_{\rm H\alpha,S})=1.421\) and \(\log_{10}(F_{\rm[OIII],P}/F_{\rm[OIII],S})=1.092\); taking into account the fiber separations (\(\sim\) 4-5\(\arcsec\)) in these systems, J1421+4747 and J1713+3256 both occupy the exact parameter space where fiber spillover effects are expected. These observations serve as a stark reminder that selection techniques relying upon spectroscopic fiber measurements must be carefully scrutinized in order to avoid source confusion and false positives.
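A Moffat-based spillover check of this kind can be reproduced at the order-of-magnitude level with a brute-force integral. The sketch below is our own simplified implementation, not the Husemann et al. (2020) code: the Moffat \(\beta=2.5\) and the 4.3\(\arcsec\) separation are assumed values chosen to be representative of the \(\sim\)4-5\(\arcsec\) fiber separations quoted above.

```python
import numpy as np

def moffat_fiber_fraction(seeing_fwhm, beta, fiber_diam, offset, n=801):
    """Fraction of a 2D Moffat PSF's flux landing in a circular fiber whose
    center is `offset` from the source (all sizes in arcsec).
    Brute-force grid integration over the fiber aperture."""
    alpha = seeing_fwhm / (2.0 * np.sqrt(2.0 ** (1.0 / beta) - 1.0))
    half = fiber_diam / 2.0
    x = np.linspace(offset - half, offset + half, n)
    y = np.linspace(-half, half, n)
    xx, yy = np.meshgrid(x, y)
    psf = (beta - 1.0) / (np.pi * alpha**2) \
        * (1.0 + (xx**2 + yy**2) / alpha**2) ** (-beta)
    in_fiber = (xx - offset) ** 2 + yy**2 <= half**2
    dx, dy = x[1] - x[0], y[1] - y[0]
    return float(np.sum(psf[in_fiber]) * dx * dy)

# Spillover of a primary nucleus into the secondary's 3" fiber at ~4.3" separation
leak_sdss = moffat_fiber_fraction(1.8, 2.5, 3.0, 4.3)  # SDSS-like seeing
leak_lbt = moffat_fiber_fraction(0.5, 2.5, 3.0, 4.3)   # LBT-like seeing

# Spilled primary flux relative to the flux measured in the secondary fiber,
# using log10(F_Ha,P / F_Ha,S) = 2.253 for J1421+4747 from the text
spill_vs_measured = 10.0 ** 2.253 * leak_sdss
```

With \(\sim\)1\(\farcs\)8 seeing, the spilled primary flux is comparable to the total flux actually recorded in the secondary fiber, while sub-arcsecond seeing suppresses the leak by more than an order of magnitude; this is consistent with the contaminated SDSS spectra and the clean LBT non-detections.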
Taken together with our discussion in Appendix A of the other 'Type 1' candidates selected from SDSS (most of which are, or are likely to be, the result of fiber spillover), this work emphasizes that fiber spillover of single AGN emission in poorer atmospheric seeing conditions is more commonplace than dual Type 1/Type 1 AGN systems. It should be noted that we obtained our optical spectra during higher quality observing conditions than those in Husemann et al. (2020), who also used LBT MODS spectroscopy with 1\(\arcsec\) wide slits (see Section 2.2 above, as well as Table 1 in Husemann et al., 2020, for a comparison of the seeing conditions between their work and this work); our work confirms the effectiveness of their toy model for fiber spillover.

### Dual AGNs in Minor Mergers: A Perspective

For simulated prograde-prograde, gas rich minor mergers with mass ratios of 1:4, 1:6, and 1:10, the growth of the SMBH in the minor companion results from approximately two phases: (1) tidally induced gas inflows--as a result of interactions with the more massive galaxy--lead to the build up of SMBH mass and stellar mass (via star formation) in the core of the less massive galaxy until ram pressure stripping abruptly halts this growth phase, and (2) the rapid fueling of the secondary SMBH once it and its associated stellar core have circularized within the disk of the primary and begin sweeping up vast reservoirs of gas (Callegari et al., 2009, 2011; Van Wassenhove et al., 2012; Capelo et al., 2015). While the fueling of the more massive SMBH is expected to be stochastic during earlier pericenter passages, its growth pathway transitions from secular to merger-induced as the orbit of the secondary galaxy shrinks and it circularizes within the disk of the massive companion (Callegari et al., 2009, 2011; Capelo et al., 2015).
Thus, it is in the late-stage merger phase where we should expect to find correlated growth between the two SMBHs (Van Wassenhove et al., 2012), and indeed most dual AGNs in minor mergers to date have been found in late-stage mergers (Koss et al., 2012; Comerford et al., 2015; Secrest et al., 2017; Liu et al., 2018), with the exception of Mrk 268 and NGC 1052/NGC 1042 (where the nuclei are separated by \(\sim\)44 kpc and \(\sim\)84 kpc; Koss et al., 2012). The SMBH mass ratios in these minor mergers can experience a dramatic evolution across the merger sequence: by the time the SMBHs have formed a bound binary, an initial 1:10 mass ratio has been shown to evolve to 1:6, while 1:6 and 1:4 mass ratios have been shown to evolve to 1:2 or larger (Callegari et al., 2011; Capelo et al., 2015). Astoundingly, 1:10 mass ratio mergers can even briefly evolve to 1:3 mergers during the merger phase (but prior to the bound binary phase; Callegari et al., 2011; Capelo et al., 2015); these striking results demonstrate not only that minor mergers can be incredibly efficient avenues for the build up and evolution of the secondary SMBH but that they also effectively erase the initial SMBH mass ratio.
This latter point would suggest that some fraction of SMBH pairs with mass ratios in the range 1:1-1:3 may not have originated from major mergers at all, but rather could have arisen from minor mergers with significantly smaller initial SMBH mass ratios; the same point can be made for dual AGNs in minor mergers in which the SMBH mass ratios are still minor (e.g., \(>1\):3), i.e. the initial SMBH mass ratios may have been far more disproportionate at the start of the merger sequence. This is particularly interesting for two cases: (1) post-mergers and (2) late-stage mergers prior to nuclear coalescence. In the case of post-mergers, the initial stellar mass ratios will presumably have been erased after the stellar nuclei merge and the system relaxes; here, one may not be able to trace the merger history for the SMBH pair at the center, and thus remnants of minor mergers could be mistaken for relaxed remnants of major mergers10. For late-stage mergers with separations \(0.1\,\mathrm{kpc}<\mathrm{r}_{\mathrm{p}}<10\,\mathrm{kpc}\) (where the nuclei have not yet coalesced), the mass ratios of the SMBHs may not trace the mass ratios of the stellar nuclei if the stellar mass growth has not kept pace with the SMBH growth (e.g., Callegari et al., 2011), producing systems where the secondary SMBHs appear overmassive relative to their hosts. Such a scenario could at least partially explain the supermassive secondary SMBH in Was 49b (Secrest et al., 2017), which has a mass of \(\sim 10^{8}\)\(M_{\odot}\) despite residing in a host with an apparent stellar mass of only \(\sim 6\times 10^{9}\)\(M_{\odot}\) in a \(\sim 1\):7 to \(\sim 1\):15 minor merger with the larger Was 49a. It would be interesting to compare the stellar mass ratios with the mass ratios of the SMBHs (if these values could be robustly determined) for all dual AGNs in minor mergers to gather evidence of whether the secondary SMBHs are commonly found to be overmassive like in the case of Was 49b, though such a comparison is beyond the scope of this work.

Footnote 10: Trainwreck merger remnants may offer the only clear, but indirect, clue about the progenitor mass ratio.

Table 1: SDSS Fitting Results

| Target | Nucleus | H\(\alpha_{\rm BR,FWHM}\) (km s\({}^{-1}\)) | log([O iii]/H\(\beta\)) | log([N ii]/H\(\alpha\)) | log([S ii]/H\(\alpha\)) | log([O i]/H\(\alpha\)) |
| --- | --- | --- | --- | --- | --- | --- |
| J1421+4747 | 1 | \(6073^{+24}_{-46}\) | \(0.72^{+0.02}_{-0.03}\) | \(-0.1^{+0.03}_{-0.04}\) | \(-0.61^{+0.03}_{-0.05}\) | \(-0.9^{+0.03}_{-0.05}\) |
| J1421+4747 | 2 | \(4290^{+119}_{-118}\) | \(<0.73\) | \(0.05^{+0.14}_{-0.09}\) | \(>-0.56\) | \(>-0.55\) |
| J1713+3256 | 1 | \(4649^{+55}_{-84}\) | \(0.9^{+0.04}_{-0.03}\) | \(-0.07^{+0.02}_{-0.01}\) | \(-0.28^{+0.02}_{-0.02}\) | \(-1.03^{+0.05}_{-0.04}\) |
| J1713+3256 | 2 | \(3560^{+131}_{-190}\) | \(<0.99\) | \(0.04^{+0.1}_{-0.09}\) | \(>-0.05\) | \(>-0.77\) |

Note. – Col 1-2: merger designation and nucleus designation. Col 3: FWHM of the broad H\(\alpha\) emission line, in units of km s\({}^{-1}\). Col 4-7: [O iii]\(\lambda\)5007/H\(\beta\), [N ii]/H\(\alpha\), [S ii]\(\lambda\lambda\)6717,6733/H\(\alpha\), and [O i]\(\lambda\)6302/H\(\alpha\) emission line ratios derived from the SDSS spectra using BADASS. Upper limits are given for the [O iii]\(\lambda\)5007/H\(\beta\) line ratios of the secondary sources since H\(\beta\) was not observed in these cases. Lower limits are given for the [S ii]\(\lambda\lambda\)6717,6733/H\(\alpha\) and [O i]\(\lambda\)6302/H\(\alpha\) emission line ratios in the secondary sources due to a lack of strong [S ii] or [O i] emission lines. All error bounds are quoted at 1\(\sigma\).
Numerical simulations suggest, however, that dual AGN observability timescales (above luminosity thresholds of \(10^{43}\) erg s\({}^{-1}\)) in minor mergers are short relative to major mergers (\(\sim\)20-70 Myr vs. \(\sim\)100-160 Myr; Van Wassenhove et al., 2012; Capelo et al., 2017), and these timescales drop by up to a factor of \(\sim 4\) or more when taking into account realistic observability constraints on imaging (separations ranging from 1-10 kpc) and spectroscopic (velocity differences \(\lesssim 150\,\mathrm{km\,s^{-1}}\)) surveys (see Van Wassenhove et al., 2012; Capelo et al., 2017). Dual AGNs in minor mergers may therefore not necessarily be observed as frequently as those in major mergers at any one point in time, but at the very least dual AGNs in minor mergers should be signposts for incredibly efficient merger-induced SMBH growth. Moreover, given (1) how much more frequently minor mergers occur relative to major mergers, and (2) the amount of growth the SMBHs undergo, particularly the SMBH in the less massive progenitor, it would seem plausible that - averaged across cosmic time - minor mergers could represent the dominant formation and/or growth pathway for dual AGNs, despite the shorter duty cycle for the AGNs. While beyond the scope of this work, it may be possible to explore this hypothesis using current cosmological simulations, and it remains important to exhaustively analyze these types of dual AGNs when detected so that we may better understand this under-sampled population. The fact also remains that minor mergers should lead to efficient pairings of SMBHs (Callegari et al., 2009, 2011; Khan et al., 2012; Capelo et al., 2015); at higher redshifts where mergers occur more frequently and gas fractions are higher (the simulations of Callegari et al., 2009, 2011; Van Wassenhove et al., 2012; Capelo et al., 2015, 2017, are in fact designed to emulate higher redshift minor mergers), the coalescence of these binaries would be detectable in the future by the Laser Interferometer Space Antenna (LISA, Amaro-Seoane et al., 2017). Intriguingly, simulations predict long timescales for the largest mass ratios (i.e., 1:10; Callegari et al., 2011; Capelo et al., 2015); it is conceivable that such timescales at earlier epochs would increase the chances of minor mergers overtaking one another, leading to the formation of bound SMBH triplets or multiplets (see Pfeifle et al., 2019; Liu et al., 2019, for a local redshift example of a kpc-scale triple AGN in a late-stage merger). This is particularly important, because the gravitational interactions between the SMBHs within a bound SMBH multiplet can shorten the timescales for coalescence of the inner binary (e.g., Ryu et al., 2018), result in ejected SMBHs via gravitational slingshots (e.g., Hoffman and Loeb, 2007; Bonetti et al., 2018, 2019), and drive high orbital eccentricities that would affect the gravitational waveforms observable with LISA (e.g., Bonetti et al., 2019).

Figure 6: LBT optical 1D spectra of the secondary source in J1713+3256. Top: the H\(\alpha\) spectral region for J1713+3256 secondary, which shows the BADASS fit to a relatively featureless continuum. Top-middle: the detected rest-frame Ca triplet absorption line system along with the BADASS spectral fit. Bottom-middle: the detected rest-frame Ca K+H absorption lines along with the BADASS spectral fit. Bottom: the detected rest-frame Na D absorption lines along with the BADASS spectral fit. The middle two panels and the bottom panel indicate that the secondary source is a foreground star and not an AGN, unlike the conclusions made based on the SDSS spectra alone.
### The Lack of Type 1/Type 1 Dual AGNs at Local \(z\)

We know of Type 1/Type 1 AGN pairs and/or quasar pairs at higher redshift (beyond the local universe, \(z>0.1\)) in both earlier-stage mergers (e.g., Brotherton et al., 1999; Schneider et al., 2000; Gregg et al., 2002; Hennawi et al., 2006, 2010; Shalyapin et al., 2018; More et al., 2016; McGreer et al., 2016; Green et al., 2010, but some of these may be co-spatial rather than merger-induced) and late-stage mergers (e.g., Silverman et al., 2020), yet apparently none have been found in early- or late-stage mergers in the local universe. This raises the question of whether we are simply missing this potential population in the local universe. Given the prevalence of high absorbing columns in many confirmed dual AGNs in the local universe (e.g., Komossa et al., 2003; Bianchi et al., 2008; Pfeifle et al., 2019), as expected based on recent hydrodynamic simulations (e.g., Capelo et al., 2017; Blecha et al., 2018), one may expect to detect Type 2/Type 2 and Type 1/Type 2 pairs far more frequently than Type 1/Type 1 pairs, in spite of the fact that one might naively expect Type 1/Type 1 pairs to be more easily identified and confirmed via broad emission lines. Unfortunately, spectroscopic surveys such as the SDSS already suffer from biases against observing close pairs due to the fiber collision limit on the spectroscopic plates, and therefore one reason we may not have already identified a significant fraction of the Type 1/Type 1 pairs presumed to exist in the local universe is simply a lack of spectroscopic completeness for close pairs of AGNs. Couple spectroscopic incompleteness with the expectation that Type 1/Type 1 pairs should be intrinsically infrequent relative to other Seyfert pairings, and we can begin to see why such pairs have yet to be discovered locally.
Higher redshift surveys typically rely upon photometric selection first, which does not suffer from a fiber collision bias, and so these surveys can more easily identify closely separated AGNs, but with two major caveats: (1) the imaging must be of sufficiently high angular resolution (like Subaru/Hyper Suprime-Cam, e.g., Silverman et al., 2020), and (2) such a selection strategy (necessarily) biases one against weaker AGNs at higher redshifts and biases toward identifying the brightest pairs of AGNs and/or the least obscured AGNs. No equivalent search has been performed for dual AGNs in the local universe; all _systematic_ searches for dual AGNs in the optical band in the local universe have required spectroscopic redshifts and/or spectroscopic emission lines as a primary selection criterion, rather than arising from follow-up observations. Such a task would be no small feat, however: the angular resolution of facilities such as SDSS would preclude the unambiguous detection of photometric pairs in the latest-stage mergers, and a blind photometric search may be temporally prohibitive, thus enforcing a bias towards seeking out only the brightest photometric pairs at somewhat larger separations than the commonly studied late-stage mergers.

## 6 Conclusion

Dual AGNs are predicted to be less commonly found in minor mergers than in major mergers, yet minor mergers can lead to substantial growth of the SMBH in the less massive companion and efficient pairings of bound SMBHs following the merger evolution. Here we studied two systems, J1421+4747 and J1713+3256, that appeared to host the first local redshift dual Type 1 AGNs in apparent minor mergers based upon SDSS spectroscopic measurements of both sources in each system.
However, follow-up LBT spectroscopic observations have shed a final light on these systems, and we summarize this work here:

* Fitting the SDSS spectra with BADASS, we found evidence for broad H\(\alpha\) lines in the primary and secondary nuclei of both J1421+4747 and J1713+3256. We found FWHM values for the broad H\(\alpha\) lines in the primary and secondary nuclei of 6073\({}^{+24}_{-46}\) km s\({}^{-1}\) and 4290\({}^{+119}_{-118}\) km s\({}^{-1}\) in J1421+4747 and 4649\({}^{+55}_{-84}\) km s\({}^{-1}\) and 3560\({}^{+318}_{-190}\) km s\({}^{-1}\) in J1713+3256, which are inconsistent at the \(>\)2-3\(\sigma\) level, suggesting two distinct kinematic regions and hence two broad line regions in each apparent merger. In addition, each nucleus presented Seyfert-like optical spectroscopic narrow line ratios.

* Our new LBT optical spectroscopic observations have revealed, however, that while these systems consist of broad line AGNs in the primary nuclei, the secondary sources are actually foreground stars with characteristic rest-frame stellar absorption features (Ca triplet and Na D absorption lines in J1421+4747; Ca triplet, Ca K+H, and Na D absorption lines in J1713+3256) in the spatially resolved spectra. The previously identified broad and narrow emission lines observed in the SDSS spectra (and the LBT spectra, in the case of J1421+4747) of the secondary sources were likely the result of spillover light from the primary source, induced by the atmospheric seeing conditions during the previous SDSS observations.

* Spatially resolved spectroscopic follow-up observations represent an incredibly important check for dual AGN studies that rely upon spectroscopic fiber measurements, particularly when objects are selected using fiber spectra at small separations, as was the case in this work.
Sensitivity and wavelength coverage also played a critical role in this work: the presence of foreground stars may not have been clear if instruments with more limited wavelength ranges or lower sensitivity had been used here. Two of the three lines (the Ca triplet and Ca K+H absorption lines) that clinched the nature of the secondary objects reside at the extreme red and blue ends of optical spectroscopy, while high sensitivity was needed to identify both the Na D lines and the Ca triplets. Confirmation of one or both of the systems studied here was an exciting prospect at the onset of this work, as it would have immediately increased the known number of dual AGNs hosted in minor mergers by \(\sim\)30%-40% while at the same time identifying dual AGNs with remarkably different optical spectroscopic characteristics--i.e. optically unobscured spectra, implying a lack of line-of-sight obscuration--than the currently known dual AGNs hosted by minor mergers and the known dual AGN population at large. Though this work did not identify new dual Type 1/Type 1 AGNs in minor mergers, when placed into the context of the theoretical works in the literature, it does expose some inadequacies in our understanding of dual AGNs (or AGNs in general) in minor mergers. For example, it is not clear whether minor merger simulations that have accounted for gas absorption (e.g., Capelo et al., 2017) find that minor mergers should produce preferentially obscured nuclei like simulations of major mergers (e.g., Capelo et al., 2017; Blecha et al., 2018). Hydrodynamic and radiative transfer simulations performed in Blecha et al.
(2018) predict that dual AGNs should be heavily obscured in major mergers and emitting strongly in the mid-IR as the intrinsic AGN continuum is reprocessed by the dust, but little focus was placed on minor mergers in that work; only a single 1:4.5 minor merger was studied in detail, and very little AGN activity (single or dual AGN) was found to arise, in contrast with previous works (e.g., Callegari et al., 2009, 2011; Van Wassenhove et al., 2012; Capelo et al., 2015, 2017). Given the limited attention paid to minor mergers in simulations where both dust and gas attenuation are taken into account, there still remains a fundamental gap in our understanding of dual AGN mid-IR colors, fueling habits, and line-of-sight nuclear column densities in not just the more commonly examined minor mergers of 1:4-1:10 mass ratios, but also with regard to more severely unequal mass mergers with mass ratios of 1:15, 1:25, and beyond 1:30. Suites of hydrodynamic simulations that can account for line-of-sight absorption and dust attenuation down to tens of parsecs in resolution for a variety of minor mergers may have an important impact on our understanding of selection techniques and biases when seeking out these minor merger dual AGNs.

We would like to thank the anonymous referee for their prompt and thoughtful review that helped to improve this work. R.W.P. and J.M.C. acknowledge support for this work through appointments to the NASA Postdoctoral Program at Goddard Space Flight Center, administered by ORAU through a contract with NASA. R.W.P would like to thank N. Latouf for support during this work. The authors would also like to thank Jennifer Power for carefully executing the LBT/MODS observations, and R.T. Gatto for useful discussions and assistance in the observations. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science.
The SDSS-III web site is [http://www.sdss3.org/](http://www.sdss3.org/). SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona Board of Regents; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, The Leibniz Institute for Astrophysics Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia. 
Facilities: WISE, SDSS, HST, LBT. Software: APLPy (Robitaille & Bressert, 2012), pandas (McKinney, 2010), NumPy (Oliphant, 2006; van der Walt et al., 2011; Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), DS9 (Joye & Mandel, 2003), BADASS (Sexton et al., 2021), Astropy (Astropy Collaboration et al., 2013, 2018) ## Appendix A Other similar targets drawn from SDSS Here we provide the full list of 13 systems drawn from SDSS that show similar dual broad line classifications, based on our selection methodology described in Section 1, which includes J1421+4747 and J1713+3256 (examined here), along with brief notes on some of the objects, most of which are likely the result of fiber spillover (as was observed in this work) and/or substantial contributions from both sources as a result of the fiber positions. Hou et al. (2019) recently examined Chandra X-ray imaging of J0805+2818 and found that only one of the nuclei is detected. Husemann et al. (2020) recently examined J0805+2818, J0858+1822, J0947+6339, J1609+2830 in a study examining fiber spillover contamination in closely-separated dual AGN candidates selected using SDSS spectroscopic fiber measurements. In all cases but J1609+2830, they found that fiber spillover was the likely culprit for the double optical AGN signal. The X-ray results from Hou et al. (2019) and optical spectroscopic results from Husemann et al. (2020) are consistent with J0805+2818 hosting only a single AGN. J1609+2830 was recently observed with Keck, but the K-band observations show broad Pa\(\alpha\) and broad H\({}_{2}\) emission lines in only the eastern nucleus. The weaker, western nucleus shows only H\({}_{2}\) emission and does not show any Pa\(\alpha\) emission (Bohn, private communication). This is not to say that the Keck observation of J1609+2830 rules out a dual AGN scenario, but taken together with the results of Husemann et al. (2020), it is more likely that this system contains only a single AGN. 
J1421+4747 and J1713+3256, discussed in this paper, are unfortunately star-quasar pairs caught in chance projections. J1429+4447 does not show an obvious companion nucleus in the SDSS imaging. We checked the available HST data, but again we do not find any evidence for a secondary nucleus, so this object is very likely a single AGN. As far as we have found, J0322+0054, J1005+3414, and J1659+2446 have not been followed-up in any dual AGN study in the literature. J0222-0857 was nearly included in a study aimed at investigating AGN cross ionization (Keel et al., 2019), but the target was unfortunately not observed during their follow-up long slit observations. The primary and secondary optical spectra in J0222-0857, J0322+0054, J1005+3414, J1429+4447, and J1659+2446 were obtained from fiber positions that unfortunately overlap in each case, and therefore both sets of spectra in each of these systems are expected to include substantial contributions from both sources.
2304.05521
Q-lump scattering
Q-lumps are spinning planar topological solitons with stationary solutions that satisfy first-order Bogomolny equations. Q-lump scattering has previously been studied only in the charge two sector, by approximating time evolution by motion in the moduli space of stationary solutions. In this paper, higher charge scattering is studied via motion on families of 4-dimensional submanifolds of moduli space, obtained by imposing cyclic symmetries. The results are shown to be in good agreement with field theory simulations, which are then applied to study more complicated Q-lump scattering processes, including examples where the moduli space approximation is not applicable. A variety of exotic scattering events are presented.
Paul Sutcliffe
2023-04-11T22:03:48Z
http://arxiv.org/abs/2304.05521v1
# Q-lump scattering ###### Abstract Q-lumps are spinning planar topological solitons with stationary solutions that satisfy first-order Bogomolny equations. Q-lump scattering has previously been studied only in the charge two sector, by approximating time evolution by motion in the moduli space of stationary solutions. In this paper, higher charge scattering is studied via motion on families of 4-dimensional submanifolds of moduli space, obtained by imposing cyclic symmetries. The results are shown to be in good agreement with field theory simulations, which are then applied to study more complicated Q-lump scattering processes, including examples where the moduli space approximation is not applicable. A variety of exotic scattering events are presented. ## 1 Introduction Q-lumps are spinning topological soliton solutions of the \(O(3)\)\(\sigma\)-model with an easy axis anisotropy in 2-dimensional space [1]. This theory has no static soliton solutions, but there are stationary Q-lump solutions in which an internal phase rotates at a constant angular frequency, given by the mass parameter of the theory. Q-lumps can have any size and they satisfy first-order Bogomolny equations that are solved explicitly in terms of based rational maps. For each positive integer \(N\), this gives a \(4N\)-dimensional moduli space of stationary Q-lumps with topological charge \(N\). This reflects the fact that there are no forces between stationary Q-lumps, with the parameters of moduli space being interpreted as a position, size and internal phase for each Q-lump, in the asymptotic region in which they are all well-separated. However, for \(N=1\) all Q-lumps have infinite energy, because the field decays too slowly for the energy integral to converge. 
For \(N>1\) the dimension of the moduli space of finite energy stationary Q-lumps is reduced to \(4N-2\), due to a constraint that may be viewed in the asymptotic region as imposing conditions that relate some of the parameters of the constituent Q-lumps to each other. Q-lump scattering has previously been studied only in the charge two sector [1] by applying the moduli space approximation [2], where dynamics is restricted to the moduli space of finite energy stationary solutions, equipped with a metric and a potential induced from the field theory Lagrangian. This allowed the investigation of the scattering of a pair of unit charge Q-lumps, but the Q-lumps are restricted to have equal sizes and are phase locked with an internal relative phase of \(\pi\). Nonetheless, a rich structure was found for Q-lump scattering in the charge two sector, with features in common with both topological solitons in \(\sigma\)-models [3] and Q-balls [4], but with aspects not found in either of these soliton systems. In the present paper, higher charge scattering is investigated using the moduli space approximation on families of 4-dimensional submanifolds of finite energy stationary Q-lumps, obtained by imposing cyclic symmetries. This allows the first study of Q-lump scattering with relative phases that are not locked at \(\pi\), and also includes examples where Q-lumps have different sizes. Field theory simulations have never been performed for Q-lump scattering. This situation is remedied here, where field theory computations are shown to be in good agreement with the results of moduli space dynamics. Field theory simulations are then applied to higher charge scattering, where the moduli space approach would be cumbersome due to the reasonably large dimensions of the moduli spaces involved. In particular, the scattering of a pair of charge two Q-lumps is found to yield exotic dynamics in the charge four sector. 
Finally, field theory simulations are applied to the scattering of Q-lumps where the moduli space approximation is not applicable, because the fields have infinite energy. A pair of unit charge Q-lumps with different sizes, or with a relative phase not locked at \(\pi\), is a prototypical example. It is found that the dynamics is similar to the finite energy case, indicating that considerations of finite energy are not particularly relevant in the study of local Q-lump dynamics. ## 2 Q-lumps The theory of interest for Q-lumps is the relativistic \(O(3)\)\(\sigma\)-model in (2+1)-dimensions, modified by the addition of a symmetry breaking mass term [1]. The field \(\boldsymbol{\phi}=(\phi_{1},\phi_{2},\phi_{3})\) is a 3-component unit vector with the Lagrangian density \[\mathcal{L}=\frac{1}{2}\partial_{\mu}\boldsymbol{\phi}\cdot\partial^{\mu} \boldsymbol{\phi}-\frac{m^{2}}{2}(\phi_{1}^{2}+\phi_{2}^{2}), \tag{2.1}\] where the index \(\mu\in\{0,1,2\}\) runs over the time and space coordinates. The mass term in (2.1) is also familiar as an easy axis anisotropy term from the continuum description of a ferromagnet, although the dynamics in that case is different, being first-order in time rather than the second-order relativistic system considered here. The positive constant \(m,\) giving the mass of the \(\phi_{1}\) and \(\phi_{2}\) fields, is taken to be \(m=\frac{1}{20},\) but this value is not particularly significant as different values of \(m\) can be related by a rescaling of the spacetime coordinates. The required boundary condition is \(\boldsymbol{\phi}\rightarrow(0,0,1)\) as \(x^{2}+y^{2}\rightarrow\infty,\) thereby providing a compactification of space from \(\mathbb{R}^{2}\) to \(S^{2}\) by the addition of the point at infinity. 
At any given time, \(\boldsymbol{\phi}\) is therefore a map between two-spheres and has an associated integer-valued topological charge due to the homotopy group formula \(\pi_{2}(S^{2})=\mathbb{Z}.\) This topological charge, \(N\), can be calculated as the degree of the mapping and is given by the integral \[N=-\frac{1}{4\pi}\int\boldsymbol{\phi}\cdot(\partial_{x}\boldsymbol{\phi} \times\partial_{y}\boldsymbol{\phi})\,dxdy. \tag{2.2}\] It is helpful to introduce the \(\mathbb{CP}^{1}\) formulation of the model by using stereographic projection to define the Riemann sphere coordinate \(W=(\phi_{1}+i\phi_{2})/(1+\phi_{3})\). In this formulation the Lagrangian density (2.1) becomes \[\mathcal{L}=\frac{2}{(1+|W|^{2})^{2}}(\partial_{\mu}W\partial^{\mu}\overline{ W}-m^{2}|W|^{2}), \tag{2.3}\] with a variation that yields the nonlinear field equation \[(1+|W|^{2})\partial_{\mu}\partial^{\mu}W-2\overline{W}\partial_{\mu}W \partial^{\mu}W+m^{2}W(1-|W|^{2})=0. \tag{2.4}\] It is easy to check that stationary solutions of (2.4) can be obtained by solving the first-order Bogomolny equations \[\partial_{y}W=\pm i\partial_{x}W,\quad\text{ and }\quad\partial_{t}W=\pm imW, \tag{2.5}\] where the signs in the above can be chosen independently, but both are taken to be positive in the following. In the topological charge \(N\) sector there is a \(4N\)-dimensional moduli space \(\mathcal{M}_{N}\) of stationary solutions of the Bogomolny equations (2.5), called Q-lumps [1], given by \[W=\frac{\alpha_{N-1}z^{N-1}+\cdots+\alpha_{1}z+\alpha_{0}}{z^{N}+\beta_{N-1}z^ {N-1}+\cdots+\beta_{1}z+\beta_{0}}e^{imt}, \tag{2.6}\] where \(z=x+iy.\) The complex constants \(\alpha_{i},\beta_{i},\) for \(i=0,..,N-1,\) are coordinates on \({\cal M}_{N},\) and are subject only to the constraint that the numerator and the denominator in the based rational map that appears in (2.6) have no common roots. 
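As a quick numerical sanity check (a sketch added here, not part of the paper), one can verify that the stationary ansatz \(W=f(z)e^{imt}\) with holomorphic \(f\), i.e. a solution of the Bogomolny equations (2.5), does satisfy the full second-order field equation (2.4). Finite-difference derivatives of the profile \(W=e^{imt}/z^{2}\) give a residual at the level of the truncation error; only NumPy is assumed:

```python
import numpy as np

m, h = 0.05, 1e-3  # mass parameter and finite-difference step

def W(x, y, t):
    # stationary charge-two profile f(z) = 1/z^2 times the internal rotation e^{imt}
    z = x + 1j * y
    return np.exp(1j * m * t) / z**2

def residual(x, y, t):
    # residual of (2.4): (1+|W|^2) d_mu d^mu W - 2 Wbar (d_mu W d^mu W) + m^2 W (1-|W|^2)
    w = W(x, y, t)
    wt = (W(x, y, t + h) - W(x, y, t - h)) / (2 * h)
    wx = (W(x + h, y, t) - W(x - h, y, t)) / (2 * h)
    wy = (W(x, y + h, t) - W(x, y - h, t)) / (2 * h)
    wtt = (W(x, y, t + h) - 2 * w + W(x, y, t - h)) / h**2
    wxx = (W(x + h, y, t) - 2 * w + W(x - h, y, t)) / h**2
    wyy = (W(x, y + h, t) - 2 * w + W(x, y - h, t)) / h**2
    box = wtt - wxx - wyy          # d_mu d^mu W with signature (+,-,-)
    quad = wt**2 - wx**2 - wy**2   # d_mu W d^mu W
    aw2 = abs(w)**2
    return (1 + aw2) * box - 2 * np.conj(w) * quad + m**2 * w * (1 - aw2)

r = residual(1.1, 0.6, 0.3)
print(abs(r))  # small: limited only by the finite-difference truncation error
```

The point evaluated is arbitrary; the residual vanishes identically in exact arithmetic because \(\nabla^{2}f=0\) for holomorphic \(f\) and \((\partial_{x}W)^{2}+(\partial_{y}W)^{2}=0\) cancels against \((\partial_{t}W)^{2}=-m^{2}W^{2}\).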
The rational map is based because the degree of the numerator is less than the degree of the denominator, due to the boundary condition that \(\mathbf{\phi}\rightarrow(0,0,1)\) as \(x^{2}+y^{2}\rightarrow\infty,\) which requires that \(W\to 0\) as \(|z|\rightarrow\infty.\) The energy \[E=\int\frac{2}{(1+|W|^{2})^{2}}(|\partial_{t}W|^{2}+|\partial_{x}W|^{2}+| \partial_{y}W|^{2}+m^{2}|W|^{2})\,dxdy, \tag{2.7}\] of the Q-lump solution (2.6) is infinite if \(\alpha_{N-1}\neq 0,\) because the field does not decay sufficiently rapidly for the integral to converge. In particular, this means that there are no finite energy Q-lumps with \(N=1\). For \(N>1\) there is a \((4N-2)\)-dimensional moduli space \(\widetilde{\cal M}_{N}\) of finite energy stationary Q-lumps given by setting \(\alpha_{N-1}=0.\) This reflects the fact that there are no forces between the stationary Q-lumps given by these solutions, although there are some constraints relating the parameters of the individual Q-lump constituents. ## 3 Moduli space dynamics The dynamics of Q-lumps can be investigated by applying the moduli space approximation [2] that is a cornerstone in the study of topological soliton dynamics, having been applied to investigate the dynamics of a wide variety of soliton systems. A slightly unusual feature in the application to Q-lumps is that the dynamics is restricted to motion on a moduli space of stationary soliton solutions, whereas the typical application is to approximate soliton evolution via restricting the motion to a moduli space of static soliton solutions. However, the basic principle remains the same and is implemented as follows. Let \(q_{i},\) for \(i=1,...,4N-2,\) be real coordinates on the moduli space \(\widetilde{\cal M}_{N},\) given by the real and imaginary parts of \(\alpha_{0},...,\alpha_{N-2},\beta_{0},...,\beta_{N-1}\) that appear in (2.6). 
Allowing these coordinates to be time-dependent, \({\bf q}(t),\) it is convenient to absorb the overall factor \(e^{imt}\) for the stationary solutions into the time dependence of the moduli space coordinates. Explicitly, \(W\) is approximated by the restricted form \[W(z;{\bf q}(t))=\frac{(q_{2N-3}+iq_{2N-2})z^{N-2}+\cdots+(q_{3}+iq_{4})z+q_{1} +iq_{2}}{z^{N}+(q_{4N-3}+iq_{4N-2})z^{N-1}+\cdots+(q_{2N+1}+iq_{2N+2})z+q_{2N-1 }+iq_{2N}}. \tag{3.1}\] Substituting this form into the Lagrangian density (2.3) and performing the integration over space yields the Lagrangian \[L=g_{ij}\dot{q}_{i}\dot{q}_{j}-V-4\pi N, \tag{3.2}\] where \(\dot{q}_{i}=dq_{i}/dt\) and the metric is \[g_{ij}({\bf q})=\int\frac{1}{(1+|W|^{2})^{2}}\bigg{(}\frac{\partial W}{ \partial q_{i}}\frac{\partial\overline{W}}{\partial q_{j}}+\frac{\partial W} {\partial q_{j}}\frac{\partial\overline{W}}{\partial q_{i}}\bigg{)}\,dxdy, \tag{3.3}\] with the potential \[V({\bf q})=2m^{2}\int\frac{|W|^{2}}{(1+|W|^{2})^{2}}\,dxdy. \tag{3.4}\] The equations of motion that follow from (3.2) are \[2g_{ki}\ddot{q}_{i}+\biggl{(}2\frac{\partial g_{ki}}{\partial q_{j}}-\frac{ \partial g_{ij}}{\partial q_{k}}\biggr{)}\dot{q}_{i}\dot{q}_{j}+\frac{\partial V }{\partial q_{k}}=0, \tag{3.5}\] where geodesic motion is modified by the force due to the potential. As there are no finite energy Q-lumps with \(N=1\), the simplest case to consider is \(N=2\), where motion takes place on the 6-dimensional manifold \(\widetilde{\cal M}_{2}\), with the associated field \[W=\frac{q_{1}+iq_{2}}{z^{2}+(q_{5}+iq_{6})z+q_{3}+iq_{4}}. \tag{3.6}\] The centre of mass may be fixed at the origin by setting \(q_{5}=q_{6}=0\), to yield a 4-dimensional submanifold \(\widetilde{\cal M}_{2}^{0}\) of \(\widetilde{\cal M}_{2}\), with coordinates \(q_{1},q_{2},q_{3},q_{4}.\) The moduli space dynamics on the manifold \(\widetilde{\cal M}_{2}^{0}\) was studied some time ago in great detail by Leese [1]. 
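The integrals (3.3) and (3.4) are straightforward to evaluate numerically, and at special points they can be checked against closed forms. For the axially symmetric point \(q_{1}=a>0\), \(q_{2}=q_{3}=q_{4}=0\) of (3.6), i.e. \(W=a/z^{2}\), elementary integration (worked out here as a check, not quoted from [1]) gives \(V=\pi^{2}m^{2}a/2\), linear in the scale \(a\), while the charge density \(|W'|^{2}/(\pi(1+|W|^{2})^{2})\) integrates to \(N=2\) and the field energy (2.7) of the stationary solution is \(4\pi N+2V=8\pi+\pi^{2}m^{2}a\). A midpoint-rule quadrature, assuming NumPy:

```python
import numpy as np

m, a = 0.05, 1.0
L, n = 20.0, 800                   # half-width of the box and points per axis
h = 2 * L / n
c = -L + h * (np.arange(n) + 0.5)  # cell centres (avoids the point z = 0)
x, y = np.meshgrid(c, c)
z = x + 1j * y

w = a / z**2                       # spatial profile of W = (a/z^2) e^{imt}
dw = -2 * a / z**3                 # dW/dz
aw2, adw2 = np.abs(w)**2, np.abs(dw)**2
den = (1 + aw2)**2

# potential (3.4)
V = 2 * m**2 * (aw2 / den).sum() * h**2
# degree: for a holomorphic map the charge density is |W'|^2 / (pi (1+|W|^2)^2)
N = (adw2 / den).sum() * h**2 / np.pi
# energy (2.7), using |d_t W|^2 = m^2 |W|^2 and |d_x W|^2 + |d_y W|^2 = 2 |W'|^2
E = (2 / den * (2 * adw2 + 2 * m**2 * aw2)).sum() * h**2

print(V, N, E)  # compare with pi^2 m^2 a / 2, 2, and 8 pi + pi^2 m^2 a
```

The linear growth of \(V\) with the overall scale is the restoring force behind the size oscillations seen in the scattering simulations below.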
The remainder of this section provides a brief review of this work, to set the scene for the following sections where new results are presented. For \(N=2\) the integrals required to calculate the metric (3.3) and the potential (3.4) can be evaluated in terms of elliptic integrals, although this will not be exploited here as the integrals will be computed numerically, in order to use the same methods that will be applied later for higher charges. The ordinary differential equations in (3.5) are solved numerically using a variable stepsize Runge-Kutta method, with the initial conditions provided as follows. Setting \[q_{1}=2\Lambda(A\cos\Theta-B\sin\Theta),\ q_{2}=2\Lambda(A\sin\Theta+B\cos \Theta),\ q_{3}=B^{2}-A^{2},\ q_{4}=-2AB, \tag{3.7}\] allows the field (3.6) to be rewritten as \[W=\biggl{(}\frac{\Lambda}{z-A-iB}-\frac{\Lambda}{z+A+iB}\biggr{)}e^{i\Theta}, \tag{3.8}\] providing an interpretation of the parameters in terms of a pair of well-separated unit charge Q-lumps with equal size \(\Lambda\) and positions in the \((x,y)\)-plane given by \((A,B)\) and \((-A,-B)\), valid for \(A^{2}+B^{2}\gg\Lambda^{2}.\) Note that the pair of Q-lumps have a relative phase angle of \(\pi\), because of the minus sign between the two terms in (3.8). The fact that both Q-lumps have the same size, and their relative phase is locked at \(\pi\), is a consequence of the finite energy constraint in this case. The scattering of a pair of Q-lumps, with an initial motion parallel to the \(x\)-axis, can therefore be studied using the initial conditions \[\Theta(0)=0,\ \dot{\Theta}(0)=m,\ \Lambda(0)=\lambda,\ \dot{\Lambda}(0)=0,\ A(0)=a,\ \dot{A}(0)=v,\ B(0)=b,\ \dot{B}(0)=0, \tag{3.9}\] where \(\lambda\) is the initial common size, \(\pm(a,b)\) are the initial positions and \(v\) is the initial common speed of each Q-lump. Without loss of generality, the initial size may be set to \(\lambda=1\). 
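The change of variables (3.7) is easy to check: substituting it into (3.6) (with \(q_{5}=q_{6}=0\)) reproduces (3.8) exactly for all \(z\), not just asymptotically, since \(q_{1}+iq_{2}=2\Lambda e^{i\Theta}(A+iB)\) and \(q_{3}+iq_{4}=-(A+iB)^{2}\) (this repackaging is spelled out here as a check). A numerical spot check, assuming NumPy:

```python
import numpy as np

Lam, Th, A, B = 1.3, 0.7, 2.0, -1.5
c = A + 1j * B

# moduli coordinates from (3.7)
q1 = 2 * Lam * (A * np.cos(Th) - B * np.sin(Th))
q2 = 2 * Lam * (A * np.sin(Th) + B * np.cos(Th))
q3 = B**2 - A**2
q4 = -2 * A * B

# sample points away from the poles at z = +/- c
z = np.array([0.3 + 0.2j, -1.0 + 0.7j, 2.5 - 2.0j, 0.1 - 1.1j])

W_rational = (q1 + 1j * q2) / (z**2 + q3 + 1j * q4)          # form (3.6)
W_pair = (Lam / (z - c) - Lam / (z + c)) * np.exp(1j * Th)   # form (3.8)

print(np.max(np.abs(W_rational - W_pair)))  # agreement to machine precision
```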
The initial separation in the \(x\)-direction is fixed by choosing \(a=-10\), so that the Q-lumps are reasonably well-separated at the start of the motion. To present the resulting evolution, a variant of the multiple exposure plotting technique introduced in [1] will be applied. This involves plotting, in a single image, the curves given by the level set \(\phi_{3}=0\) (or equivalently \(|W|=1\)) at equally spaced time intervals. The curves are coloured blue in the early stages of the motion and transition to red as time increases. Fig.1(a) displays a typical head-on scattering process (\(b=0\)) with initial speed \(v=0.2\). The Q-lump that begins at the left ends up at the top of the image, allowing the identification of a scattering angle between the initial and final directions of motion of around \(2\pi/3.\) This agrees with the results found in [1], where the scattering angle as a function of initial speed \(v\) is found to lie in the interval \((\pi/2,\pi)\), with a value of around \(2\pi/3\) for a wide range of \(v\). The scattering angle is a monotonically decreasing function of \(v\), tending towards \(\pi/2\) in the fast scattering limit, where the interaction time is much shorter than the period of the internal rotation. This is expected, as the traditional \(\pi/2\) scattering of a range of topological solitons, including lumps in the pure \(\sigma\)-model, should be recovered in this limit. The collision induces an oscillation of the size of the Q-lumps that is clearly visible in the multiple exposure plot. The amplitude of this oscillation decreases if the initial scattering speed \(v\) is decreased. Scattering with a positive impact parameter \(b=3\) is displayed in Fig.1(b). The results for \(b>0\) are not surprising, with the scattering angle being a monotonically decreasing function of \(b\), tending to zero in the large \(b\) limit. 
Scattering is not symmetric under the replacement \(b\to-b\), as illustrated in Fig.1(c) by the example with \(b=-3\). This lack of symmetry is due to the fact that the internal phase rotation is anti-clockwise and therefore selects a preferred sign for comparison with the angular momentum generated by the collision. The combination \(b\to-b\) together with \(m\to-m\) is required to generate a symmetric scattering event. For \(b<0\) the scattering angle is a complicated function of both \(b\) and \(v\), with regions of highly sensitive dependence, as the Q-lumps can orbit around each other many times before they eventually escape, or merge to lose their individual identities before separating [1]. ## 4 Cyclic scattering For \(N>2\) the study of generic finite energy Q-lump scattering would be rather cumbersome using the moduli space approximation, because \(\widetilde{\cal M}_{N}\) has dimension \(4N-2\). Fixing a centre of mass reduces the dimension to \(4(N-1)\), but this is still a little unwieldy. To avoid this difficulty, in this section the dynamics on moduli space will be investigated on various 4-dimensional submanifolds of \(\widetilde{\cal M}_{N}\), obtained by imposing cyclic symmetries. Rather than thinking of fixing the centre of mass to obtain the 4-dimensional submanifold \(\widetilde{\cal M}_{2}^{0}\) of \(\widetilde{\cal M}_{2}\), an alternative point of view to obtain this submanifold is to impose the cyclic \(C_{2}\) symmetry \(W(-z)=W(z)\), given by a spatial rotation around the origin by \(\pi\). The fact that both Q-lumps have the same size, and their relative phase is locked at \(\pi\), is then a clear consequence of the \(C_{2}\) symmetry that exchanges the pair of Q-lumps. The generalization to study Q-lump scattering with charge \(N\) and cyclic \(C_{N}\) symmetry is therefore rather natural. 
Let \(\omega_{N}=e^{2\pi i/N}\) denote the \(N^{\rm th}\) root of unity and define \(\Sigma_{N}^{j}\), for \(j=0,...,N-2\), to be the 4-dimensional submanifold of \(\widetilde{\cal M}_{N}\) obtained by imposing the cyclic \(C_{N}\) symmetry \(W(\omega_{N}z)=\omega_{N}^{j}W(z)\). The corresponding field is given by \[W=\frac{z^{j}(q_{1}+iq_{2})}{z^{N}+q_{3}+iq_{4}}, \tag{4.1}\] where a convenient relabelling of the indices of \({\bf q}\) has been applied so that the coordinates on this submanifold are \(q_{1},q_{2},q_{3},q_{4}\). Note that the submanifold \(\Sigma_{N}^{0}\) contains the axially symmetric Q-lump with charge \(N\), given by \(W=(q_{1}+iq_{2})/z^{N}\), but this is not contained in any of the other submanifolds, \(\Sigma_{N}^{j}\) with \(j>0\). In the asymptotic region of well-separated unit charge Q-lumps, the field (4.1) may be rewritten as \[W=e^{i\Theta}\Lambda\sum_{k=0}^{N-1}\frac{\omega_{N}^{(j+1)k}}{z-(A+iB)\omega _{N}^{k}}, \tag{4.2}\] revealing \(N\) unit charge Q-lumps of equal size \(\Lambda\) on the vertices of a regular \(N\)-gon, with a frozen relative phase of \(2(j+1)\pi/N\) between neighbouring Q-lumps. The scattering discussed in the previous section corresponds to the case \(N=2\), where there is only one submanifold, \(\Sigma_{2}^{0}\), and the relative phase is frozen at \(\pi\). These submanifolds therefore allow the first studies of Q-lump dynamics where relative phases are not frozen at \(\pi\). The asymptotic formula (4.2) is a clear generalization of the \(N=2\) formula (3.8), and the same initial conditions (3.9) can be used for the parameter values. Fig.2(a) illustrates the scattering of three Q-lumps with \(C_{3}\) symmetry and relative phases of \(2\pi/3\) between the Q-lumps, that is, dynamics on the submanifold \(\Sigma_{3}^{0}.\) The scattering angle lies in the interval \((2\pi/3,\pi)\), and tends towards the lower limit of this interval as the initial speed increases. 
This is the expected generalization of the scattering on \(\Sigma_{2}^{0}\), for the following reason. In the case of cyclic \(C_{N}\) scattering of \(N\) lumps in the pure \(\sigma\)-model, the similar scattering process results in lumps on the vertices of an outgoing regular \(N\)-gon that is the dual of the incoming \(N\)-gon [5]. This has been termed \(\pi/N\) scattering, where the scattering angle refers to the rotation of the polygon, because the lumps lose their individual identities during the scattering process so a scattering angle for any individual lump is ill-defined. However, in the case of Q-lumps, the individual Q-lumps remain distinct enough to define the scattering angle of a single Q-lump, as earlier, and translating the polygon scattering into this definition yields a scattering angle of \(\pi(N-1)/N.\) Hence the expected lower limit of \(2\pi/3\) in the above case of \(\Sigma_{3}^{0},\) where \(N=3.\) Fig.2(b) displays the equivalent scattering process to Fig.2(a), with the same parameter values, but now on the submanifold \(\Sigma_{3}^{1},\) so the only change is that the relative phase between the Q-lumps is now \(4\pi/3\) rather than \(2\pi/3\). The change in relative phase has reduced the scattering angle, which is now slightly less than \(2\pi/3,\) and is therefore outside the interval found for scattering on \(\Sigma_{3}^{0}.\) There is also a slight reduction in the amplitude of the size oscillations of the outgoing Q-lumps. Note that the configuration formed at the point of closest approach is very different on \(\Sigma_{3}^{0}\) and \(\Sigma_{3}^{1},\) reflecting the fact that only the former submanifold contains the axially symmetric charge three Q-lump. Scatterings on the submanifolds \(\Sigma_{3}^{0}\) and \(\Sigma_{3}^{1},\) with both positive and negative impact parameters, can be found in Fig.3. 
These results show similar features to the charge two case, with the scattering angle again reduced on \(\Sigma_{3}^{1}\) compared to \(\Sigma_{3}^{0}.\) Charge four scattering with \(C_{4}\) symmetry provides access to three different sets of relative phases. Examples on the submanifolds \(\Sigma^{0}_{4},\Sigma^{1}_{4},\Sigma^{2}_{4}\) are displayed in Fig.4, where the same parameter values are used in each case. This provides further evidence that increasing the relative phase between neighbouring Q-lumps reduces both the scattering angle and the amplitude of the size oscillation of the outgoing Q-lumps. So far, all the examples studied have involved Q-lumps that all have the same size. It is possible to gain access to scattering events that involve Q-lumps with different sizes by considering another family of 4-dimensional submanifolds, this time by imposing cyclic \(C_{N-1}\) symmetry in the charge \(N\) sector. For \(N>2\), define the 4-dimensional submanifold, \(\Xi_{N}\), of \(\widetilde{\mathcal{M}}_{N}\) by imposing the cyclic \(C_{N-1}\) symmetry \(W(\omega_{N-1}z)=\omega_{N-1}^{N-2}W(z)\). In this case the field takes the form \[W=\frac{q_{1}+iq_{2}}{z^{N}+z(q_{3}+iq_{4})}, \tag{4.3}\] that includes the axially symmetric Q-lump with charge \(N\). In the asymptotic region of well-separated Q-lumps, the field may be rewritten as \[W=e^{i\Theta}\Lambda\bigg{(}\frac{1-N}{z}+\sum_{k=0}^{N-2}\frac{1}{z-(A+iB) \omega_{N-1}^{k}}\bigg{)}, \tag{4.4}\] revealing \(N-1\) unit charge Q-lumps with size \(\Lambda\) on the vertices of a regular \((N-1)\)-gon and a unit charge Q-lump at the origin with size \((N-1)\Lambda\). All the Q-lumps on the \((N-1)\)-gon have the same phase, but there is a phase difference of \(\pi\) between any of these Q-lumps and the Q-lump at the origin. Fig.5 presents examples on the submanifolds \(\Xi_{3}\) and \(\Xi_{4}\). 
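Both asymptotic forms (4.2) and (4.4) rest on exact partial-fraction identities, which also make the correspondence with the rational-map coordinates explicit: with \(c=A+iB\), one has \(\sum_{k=0}^{N-1}\omega_{N}^{(j+1)k}/(z-c\,\omega_{N}^{k})=Nc^{N-1-j}z^{j}/(z^{N}-c^{N})\), i.e. the form (4.1) with \(q_{1}+iq_{2}=e^{i\Theta}\Lambda Nc^{N-1-j}\) and \(q_{3}+iq_{4}=-c^{N}\), and similarly \((1-N)/z+\sum_{k=0}^{N-2}1/(z-c\,\omega_{N-1}^{k})=(N-1)c^{N-1}/(z^{N}-z\,c^{N-1})\), matching (4.3) with \(q_{3}+iq_{4}=-c^{N-1}\). These explicit correspondences are assembled here as a check, not quoted from the paper; a quick numerical verification with NumPy:

```python
import numpy as np

def sigma_sum(z, c, N, j):
    # left-hand side of (4.2) without the overall e^{i Theta} Lambda factor
    w = np.exp(2j * np.pi / N)
    return sum(w**((j + 1) * k) / (z - c * w**k) for k in range(N))

def sigma_map(z, c, N, j):
    # the same field in the rational-map form (4.1)
    return N * c**(N - 1 - j) * z**j / (z**N - c**N)

def xi_sum(z, c, N):
    # left-hand side of (4.4) without the overall e^{i Theta} Lambda factor
    w = np.exp(2j * np.pi / (N - 1))
    return (1 - N) / z + sum(1 / (z - c * w**k) for k in range(N - 1))

def xi_map(z, c, N):
    # the same field in the rational-map form (4.3)
    return (N - 1) * c**(N - 1) / (z**N - z * c**(N - 1))

z, c = 0.8 - 1.7j, 2.5 + 0.4j
err_sigma = max(abs(sigma_sum(z, c, N, j) - sigma_map(z, c, N, j))
                for N in (2, 3, 4, 5) for j in range(N - 1))
err_xi = max(abs(xi_sum(z, c, N) - xi_map(z, c, N)) for N in (3, 4, 5, 6))
print(err_sigma, err_xi)  # both at machine-precision level
```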
The larger Q-lump remains at the origin, with a visible distortion of the axial symmetry to \(C_{N-1}\) symmetry, and its size oscillation is synchronized with the oscillation of the smaller Q-lumps, as it must to keep the ratio of the sizes equal to \(N-1\). A comparison of Fig.5(b) and Fig.2(a) reveals that the presence of the central Q-lump increases the scattering angle and reduces the amplitude of the oscillation in size. As mentioned earlier, a general feature of Q-lump scattering is that an increase in the speed of the incoming Q-lumps produces an increase in the amplitude of the size oscillation of the outgoing Q-lumps. An interesting example to investigate the consequences of varying the initial speed is to take the \(\Sigma_{4}^{0}\) scattering shown in Fig.4(a) and vary the initial speed from \(v=0.1\). Reducing the initial speed to \(v=0.05\) results in the scattering presented in Fig.6(a), with a clear reduction in the amplitude of the size oscillation, as there is less kinetic energy to transfer to this mode. A much more interesting phenomenon is found by increasing the speed to \(v=0.2\), as displayed in Fig.6(b). The scattering process is now more complicated, making it a little difficult to decipher the information contained within the multiple exposure plot. However, it is clear that the scattering is now qualitatively different from the previous examples. An analysis of this scattering event will be presented later, with a series of energy density plots from full field simulations being easier to interpret than this single multiple exposure plot. Figure 5: Moduli space dynamics with \(N-1\) unit charge Q-lumps scattering on a larger unit charge Q-lump at the origin, using \(\lambda=1\), \(a=-10\), \(b=0\), \(v=0.1\) for (a) \(N=3\) with \(C_{2}\) symmetry on \(\Xi_{3}\); (b) \(N=4\) with \(C_{3}\) symmetry on \(\Xi_{4}\). 
Figure 6: Moduli space dynamics for \(N=4\) with \(C_{4}\) symmetry on the submanifold \(\Sigma_{4}^{0}\) using \(\lambda=1\), \(a=-10\), \(b=0\), with (a) \(v=0.05\); (b) \(v=0.2\). ## 5 Field theory dynamics The \(\mathbf{\phi}\) field formulation is used to perform full field simulations of Q-lump dynamics. The field equation that follows from the variation of (2.1) is \[\partial_{\mu}\partial^{\mu}\mathbf{\phi}+(\partial_{\mu}\mathbf{\phi}\cdot\partial^{\mu }\mathbf{\phi})\mathbf{\phi}+m^{2}\phi_{3}(\phi_{3}\mathbf{\phi}-\mathbf{e}_{3})=0, \tag{5.1}\] where \(\mathbf{e}_{3}=(0,0,1).\) This nonlinear partial differential equation is solved numerically on a square lattice consisting of \(1501\times 1501\) lattice points, with lattice spacings \(\Delta x=\Delta y=0.04\), to give a spatial simulation region of \([-30,30]\times[-30,30]\). Spatial derivatives are computed using a fourth-order finite difference approximation and time evolution is performed using a fourth-order Runge-Kutta scheme with a fixed timestep \(\Delta t=0.01\). The boundary condition at the edge of the simulation lattice is taken to be compatible with the internal rotation of a Q-lump, in either direction, by imposing the evolution equation \(\partial_{t}\partial_{t}W=-m^{2}W\). In terms of the \(\mathbf{\phi}\) field this becomes \[\partial_{t}\partial_{t}\phi_{1} -2\frac{\partial_{t}\phi_{1}\partial_{t}\phi_{3}}{1+\phi_{3}}+ \phi_{1}\bigg{(}\frac{|\partial_{t}\mathbf{\phi}|^{2}}{1+\phi_{3}}+m^{2}\phi_{3} \bigg{)}=0 \tag{5.2}\] \[\partial_{t}\partial_{t}\phi_{2} -2\frac{\partial_{t}\phi_{2}\partial_{t}\phi_{3}}{1+\phi_{3}}+ \phi_{2}\bigg{(}\frac{|\partial_{t}\mathbf{\phi}|^{2}}{1+\phi_{3}}+m^{2}\phi_{3} \bigg{)}=0 \tag{5.3}\] \[\partial_{t}\partial_{t}\phi_{3} -2\frac{\partial_{t}\phi_{3}\partial_{t}\phi_{3}}{1+\phi_{3}}+| \partial_{t}\mathbf{\phi}|^{2}-m^{2}(1-\phi_{3}^{2})=0. 
\tag{5.4}\] To check the results of the moduli space approximation, the initial conditions for the field theory simulations, \(\mathbf{\phi}|_{t=0}\) and \(\partial_{t}\mathbf{\phi}|_{t=0}\), are taken to be the same as in the moduli space dynamics. It is found that the examples presented in the previous sections are in excellent agreement with the field theory simulations, thereby providing a good cross-check on both methods. Representative examples are provided in Fig.7, which reproduces the charge two scattering on \(\Sigma_{2}^{0}\) found in Fig.1(a), and Fig.8, which reproduces the cyclic charge four scattering on \(\Sigma_{4}^{1}\) from Fig.4(b). In these figures the energy density at various times is visualized using a heat map, with the colour bar provided for values in the interval \([0,1]\) and values greater than one shown as white. For clarity, only the region \([-20,20]\times[-20,20]\) of the full simulation domain \([-30,30]\times[-30,30]\) is shown. These two simulations are also available as short movies, see the supplementary data files m01.mp4 and m02.mp4. Figure 7: Energy density plots from field theory dynamics that reproduces the \(N=2\) scattering on \(\Sigma_{2}^{0}\) shown in Fig.1(a). To aid comparison with the results from moduli space dynamics, these simulations are also presented as multiple exposure plots, see Fig.9(a) (to be compared with Fig.1(a)) and Fig.9(b) (to be compared with Fig.4(b)). This makes the excellent agreement obvious. It is now time to return to the exotic scattering event presented at the end of the previous section in Fig.6(b), to investigate this process via full field simulations. The resulting energy density plots are displayed in Fig.10, for times that include and go beyond the multiple exposure plot from moduli space dynamics that is shown in Fig.6(b). This simulation is also available as the movie m03.mp4 in the supplementary data. 
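The two numerical ingredients of the scheme described above, a fourth-order central finite-difference stencil for spatial derivatives and a classical fourth-order Runge-Kutta time step, can be isolated and verified on known solutions. A generic sketch (not the authors' code), assuming NumPy and a periodic grid:

```python
import numpy as np

def d2_fourth_order(f, h):
    # fourth-order central stencil: (-f[i-2] + 16 f[i-1] - 30 f[i] + 16 f[i+1] - f[i+2]) / (12 h^2)
    return (-np.roll(f, 2) + 16 * np.roll(f, 1) - 30 * f
            + 16 * np.roll(f, -1) - np.roll(f, -2)) / (12 * h**2)

def rk4_step(F, u, dt):
    # classical fourth-order Runge-Kutta update for du/dt = F(u)
    k1 = F(u)
    k2 = F(u + 0.5 * dt * k1)
    k3 = F(u + 0.5 * dt * k2)
    k4 = F(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# stencil check: the second derivative of sin(x) on a periodic grid is -sin(x)
n = 200
h = 2 * np.pi / n
x = h * np.arange(n)
stencil_err = np.max(np.abs(d2_fourth_order(np.sin(x), h) + np.sin(x)))

# RK4 check: u'' = -u as a first-order system, integrated with dt = 0.01
F = lambda u: np.array([u[1], -u[0]])
steps, dt = 628, 0.01
u = np.array([1.0, 0.0])
for _ in range(steps):
    u = rk4_step(F, u, dt)
# compare with the exact solution (cos t, -sin t) at t = steps * dt = 6.28
print(stencil_err, u)
```

Both errors scale as the fourth power of the grid spacing or timestep, which is why the production lattice spacing \(\Delta x=0.04\) and timestep \(\Delta t=0.01\) suffice for the simulations reported here.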
Initially the scattering is similar to that found at lower speeds, but a difference at this higher speed can already be seen at \(t=74\), where the expansion of the size of the Q-lumps is comparable to their increase in separation. Q-lumps are well-separated if their separation is large compared to their size, but if their size grows sufficiently rapidly then they are not well-separated even as they move apart. This is the situation found at \(t=94\), where the configuration resembles the ring of an axially symmetric charge four Q-lump with a slight square perturbation, rather than four distinct Q-lumps.

Figure 8: Energy density plots from field theory dynamics that reproduce the cyclic \(N=4\) scattering on \(\Sigma_{4}^{1}\) shown in Fig.4(b).

Figure 9: Field theory simulations represented as multiple exposure plots for comparison with moduli space dynamics, (a) compare with Fig.1(a); (b) compare with Fig.4(b).

The ring reaches a maximum size at \(t=144\) and then the size oscillation enters the phase of size reduction that shrinks the ring, as the small distinct Q-lumps reappear at \(t=238\). The Q-lumps now scatter again, in a similar manner to the early stage of the scattering, but this time the size oscillation is not sufficient to prevent well-separated Q-lumps from emerging and escaping. This exotic double scattering process is made possible at high speeds by a sufficient transfer of kinetic energy into the size oscillation mode. The scattering on \(\Sigma_{4}^{1}\), shown in Fig.8 for \(v=0.1\), also transforms to a similar complicated scattering if the initial speed is increased to \(v=0.2\), but such a transformation is not found on the submanifold \(\Sigma_{4}^{2}\) at the same speed \(v=0.2\). This is consistent with the results presented in Fig.4 for \(v=0.1\), where the amplitude of the size oscillation generated on \(\Sigma_{4}^{2}\) is much less than on \(\Sigma_{4}^{0}\) or \(\Sigma_{4}^{1}\).
Clearly, relative phases play an important role in the transfer of energy between different modes. The double scattering process, where soliton kinetic energy is transferred to another mode and then back to soliton kinetic energy, is reminiscent of the scattering of kinks and anti-kinks in some (1+1)-dimensional systems, which leads to resonant scattering and a fractal structure in the dependence of the final state on the initial collision speed [6]. It might be interesting to investigate whether similar phenomena are possible in this (2+1)-dimensional soliton system.

Figure 10: Energy density plots from field theory dynamics associated with the cyclic charge four scattering of \(\Sigma_{4}^{0}\), with an initial speed \(v=0.2\).

Some scattering events in the charge four sector will now be considered, where the initial conditions consist of a pair of well-separated axially symmetric charge two Q-lumps. One of the differences in studying the scattering of a pair of charge two Q-lumps, rather than a pair of charge one Q-lumps, is that the finite energy constraint now allows the pair of Q-lumps to have different sizes and any value of the relative phase. Recall that a pair of charge one Q-lumps must have equal size and be phase locked with a relative phase of \(\pi\), in order to have finite energy. In principle, this charge four scattering could be investigated using moduli space dynamics, but the moduli space \(\widetilde{\mathcal{M}}_{4}\) is 14-dimensional, so even after fixing the centre of mass, the motion on a 12-dimensional space is still a little cumbersome. Therefore, the investigation will proceed via field theory simulations.
The initial conditions can be taken from the moduli space approximation, namely the initial fields \(\boldsymbol{\phi}|_{t=0}\) and \(\partial_{t}\boldsymbol{\phi}|_{t=0}\) are taken to agree with those obtained from the field \[W=\bigg{(}\frac{\lambda_{1}^{2}}{(z-a-vt-ib)^{2}}+\frac{e^{i\chi}\lambda_{2}^{2}}{(z+a+vt+ib)^{2}}\bigg{)}e^{imt}. \tag{5.5}\] This describes a pair of axially symmetric charge two Q-lumps at positions \((x,y)=\pm(a,b)\), with initial sizes \(\lambda_{1}\) and \(\lambda_{2}\), and an initial relative phase \(\chi\). The Q-lumps are initially moving parallel to the \(x\)-axis, in opposite directions, with equal speed \(v\). To compare with the earlier scattering of a pair of charge one Q-lumps, the first simulations will consider equal sizes and a relative phase \(\chi=\pi\). An example of a head-on scattering (\(b=0\)) with \(\lambda_{1}=\lambda_{2}=3\), and parameters \(a=-10\) and \(v=0.1\), is presented in Fig.11.

Figure 11: Energy density plots from field theory dynamics for the scattering of a pair of charge two Q-lumps with parameters \(\lambda_{1}=\lambda_{2}=3,\ a=-10,\ b=0,\ \chi=\pi,\ v=0.1\).

The energy density plots in Fig.11, and the corresponding movie m04.mp4, show the formation of a merged configuration at \(t=84\), followed by the fission into four distinct Q-lumps. Two of the Q-lumps remain close to the origin, while the other two carry most of the kinetic energy as they move away. This scattering is presented as a multiple exposure plot in Fig.12(a). The result of introducing a non-zero impact parameter, \(b\), is presented in Fig.12(b) for \(b=5\), and in Fig.12(c) for \(b=-4\). This mirrors the behaviour found earlier for scattering in the charge two sector. For the positive impact parameter the Q-lumps are deflected away from each other, with the scattering producing very little deformation to the Q-lumps.
For the negative impact parameter the Q-lumps are attracted towards each other and the scattering induces a more significant deformation of the Q-lumps, but not enough to fission the charge two Q-lumps into individual charge one Q-lumps. The deformation of the outgoing Q-lumps is more clearly visible in the energy density plots in Fig.13 and the associated movie m05.mp4.

Figure 12: Field theory simulations of the scattering of a pair of charge two Q-lumps with \(\lambda_{1}=\lambda_{2}=3,\ a=-10,\ \chi=\pi,\ v=0.1\), (a) \(b=0\); (b) \(b=5\); (c) \(b=-4\).

Figure 13: Energy density plots from field theory dynamics for the scattering of a pair of charge two Q-lumps with parameters \(\lambda_{1}=\lambda_{2}=3,\ a=-10,\ b=-4,\ \chi=\pi,\ v=0.1\).

The initial condition (5.5) with \(\lambda_{1}=\lambda_{2}\) and \(\chi=\pi\) satisfies \(W(-z)=-W(z)\), and hence all the scatterings investigated so far in the charge four sector display a cyclic \(C_{2}\) symmetry. If \(\lambda_{1}=\lambda_{2}\) and \(\chi=0\), then again there is a \(C_{2}\) symmetry, but this time realized as \(W(-z)=W(z)\). The result of changing the phase from \(\pi\) to zero can be illustrated by repeating the simulation presented in Fig.12(c) and Fig.13, but with the new phase. This produces the scattering displayed in Fig.14(a) as a multiple exposure plot, and in Fig.15 as energy density heat maps, with m06.mp4 the corresponding movie. Comparing Fig.13 and Fig.15 reveals that the change in phase makes a considerable difference to the intermediate configuration that is formed, and subsequently to the outgoing Q-lumps. The deformation to each charge two Q-lump is now much stronger, with the amplitude of oscillation large enough that individual Q-lumps of different sizes are visible at some points in the oscillation, although the perturbation is not strong enough to yield fission, in which the Q-lumps would remain well-separated for all subsequent times.
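The field-theory initial data in this section are generated from a \(W\) field like (5.5) by mapping back to \(\boldsymbol{\phi}\) via inverse stereographic projection. A sketch, assuming the common convention \(W=(\phi_{1}+i\phi_{2})/(1+\phi_{3})\) (which is consistent with the \(1+\phi_{3}\) denominators appearing in (5.2)-(5.3)); the parameter defaults are illustrative, not prescriptive:

```python
import numpy as np

def w_field(z, t, lam1=3.0, lam2=3.0, a=-10.0, b=0.0, chi=np.pi, v=0.1, m=1.0):
    """The field (5.5): a pair of axially symmetric charge two Q-lumps."""
    z1 = z - (a + v*t) - 1j*b
    z2 = z + (a + v*t) + 1j*b
    return (lam1**2/z1**2 + np.exp(1j*chi)*lam2**2/z2**2)*np.exp(1j*m*t)

def phi_from_w(W):
    """Inverse stereographic projection; assumes W = (phi1 + i phi2)/(1 + phi3)."""
    d = 1.0 + np.abs(W)**2
    return np.stack([2*W.real/d, 2*W.imag/d, (1 - np.abs(W)**2)/d])
```

By construction \(|\boldsymbol{\phi}|=1\) everywhere, and \(W\to 0\) at large \(|z|\) gives \(\boldsymbol{\phi}\to\mathbf{e}_{3}\), matching the vacuum in (5.1).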
Figure 15: Energy density plots from field theory dynamics for the scattering of a pair of charge two Q-lumps with parameters \(\lambda_{1}=\lambda_{2}=3,\ a=-10,\ b=-4,\ \chi=0,\ v=0.1\).

The \(C_{2}\) symmetry in the charge four scattering can be broken either by changing the initial relative phase so that \(\chi\notin\{0,\pi\}\), or by choosing different initial sizes for the Q-lumps. An example of the first possibility is presented in Fig.14(b), where the relative phase is taken to be \(\chi=\pi/2\), and the second possibility is realized in Fig.14(c), where \(\lambda_{1}=2\) and \(\lambda_{2}=4\), with \(\chi=\pi\). These plots illustrate that, generically, a head-on collision results in fission that produces individual Q-lumps with a variety of speeds, scattering angles and sizes.

There is a general issue for Q-lumps, regarding the relevance of restricting dynamics to only finite energy solutions, because of the following reasoning. Any stationary Q-lump solution with infinite energy in the charge \(N\) sector can be associated with a finite energy stationary Q-lump solution in the charge \(N+1\) sector, via the addition of an extra Q-lump that can be placed arbitrarily far from any of the other existing Q-lumps. Explicitly, the process may be represented by the formula \[W=\frac{\alpha_{N-1}z^{N-1}+\cdots+\alpha_{1}z+\alpha_{0}}{z^{N}+\beta_{N-1}z^{N-1}+\cdots+\beta_{1}z+\beta_{0}}e^{imt}-\frac{\alpha_{N-1}}{z-\mu}e^{imt}, \tag{5.6}\] for a finite energy Q-lump with charge \(N+1\). Here \(\mu\) is a positive real parameter, that can be made arbitrarily large to move the extra Q-lump far from any of the existing Q-lumps that are contained within the infinite energy solution given by just the first term of (5.6).
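The mechanism behind (5.6) can be checked numerically: the first term alone has the slowly decaying tail \(\alpha_{N-1}/z\) at large \(|z|\) (the source of the infinite energy), while the subtracted extra-lump term cancels that leading tail, leaving \(O(1/z^{2})\) decay. A minimal sketch for \(N=2\), with arbitrary illustrative coefficients, evaluated along the real axis for simplicity:

```python
# Illustrative N = 2 check of the tail cancellation in (5.6).
a0, a1 = 2.0, 3.0        # alpha_0, alpha_1 (arbitrary values)
b0, b1 = 1.0, -4.0       # beta_0, beta_1
mu = 50.0                # position of the extra Q-lump

def w_infinite(z):
    """First term of (5.6) only: charge 2, infinite energy, tail ~ a1/z."""
    return (a1*z + a0) / (z**2 + b1*z + b0)

def w_finite(z):
    """Both terms of (5.6): charge 3, finite energy, tail ~ O(1/z^2)."""
    return w_infinite(z) - a1/(z - mu)

for z in (1e2, 1e4, 1e6):
    print(z, abs(z*w_infinite(z)), abs(z*w_finite(z)))
```

As \(z\) grows, the first column of output tends to \(\alpha_{1}=3\) while the second tends to zero, confirming the improved decay rate of the finite energy configuration.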
This shows that for any infinite energy stationary Q-lump solution \(\mathcal{S}\), and any choice of a compact region \(\Omega\) of the plane, there is a finite energy stationary Q-lump solution \(\widetilde{\mathcal{S}}\) that approximates \(\mathcal{S}\) in \(\Omega\) to any desired level of accuracy. In terms of local dynamics, it therefore seems difficult to argue that certain configurations must be ignored because they have infinite energy when considered over the whole plane. A concrete example of the above general idea is provided by perturbations of the axially symmetric charge two Q-lump. Perturbed oscillating charge two Q-lumps appeared in some of the charge four scattering events presented above. These configurations cannot be studied within the moduli space approximation as isolated charge two Q-lumps, because the relevant stationary solutions have infinite energy. However, they can be studied using field theory simulations. Consider an initial condition taken from the field \[W=\frac{\lambda^{2}+z\varepsilon t}{z^{2}}e^{imt}, \tag{5.7}\] where \(\varepsilon\) is a real parameter that induces a symmetry breaking perturbation of the axially symmetric charge two Q-lump, without splitting it into a pair of charge one Q-lumps. The resulting dynamics is shown as the energy density plots in Fig.16, and as the movie m07.mp4, for the example with parameter values \(\lambda=3\) and \(\varepsilon=0.1.\) It can be seen that the distorted configuration rotates and oscillates but does not separate into individual Q-lumps. Note the similarity between the deformed Q-lump at \(t=40\) in Fig.16 and the pair of deformed Q-lumps in Fig.15 at \(t=184\). This confirms that field theory simulations are useful in studying Q-lump dynamics, even when the moduli space approximation is not applicable because the field configuration has infinite energy when extended to the full plane. 
Once the constraint of finite energy is removed, there are more possibilities for Q-lump scattering, particularly in the charge two sector. The relative phase can be unfrozen from \(\chi=\pi\), and the pair of Q-lumps can be given different initial sizes. The appropriate initial condition is taken from the field \[W=\bigg{(}\frac{\lambda_{1}}{z-a-vt-ib}+\frac{e^{i\chi}\lambda_{2}}{z+a+vt+ib} \bigg{)}e^{imt}. \tag{5.8}\] The field theory simulations presented in Fig.17 illustrate how the previous scattering with \(\chi=\pi\), for example as shown in Fig.1, is modified by changing the phase to \(\chi=0.\) The other parameters are taken to be \(\lambda_{1}=\lambda_{2}=1,\)\(a=-10,\)\(v=0.1,\) with three different values for the impact parameter \(b\). The head-on collision (\(b=0\)) in Fig.17(a) shows a scattering angle less than \(\pi/2\), in contrast to the \(\chi=\pi\) scattering, where the scattering angle always lies in the interval \((\pi/2,\pi).\) As the initial speed decreases, the scattering angle increases, and is close to \(\pi/2\) for \(v=0.05,\) for example. A positive impact parameter, Fig.17(b) with \(b=3,\) shows repulsion, and a negative impact parameter Fig.17(c) with \(b=-3,\) reveals attraction. This agrees with the previous \(\chi=\pi\) situation, although these forces now have less influence as the interaction is minimal. Otherwise, as expected from the arguments given above, there is little within these results to indicate a significant difference between finite and infinite energy dynamics at the local level. 
Figure 16: Energy density plots from field theory dynamics for a perturbed charge two Q-lump with parameters \(\lambda=3\) and \(\varepsilon=0.1\).

Figure 17: Field theory simulations of the scattering of a pair of charge one Q-lumps with \(\lambda_{1}=\lambda_{2}=1,\ a=-10,\ v=0.1,\ \chi=0\) and (a) \(b=0\); (b) \(b=3\); (c) \(b=-3\).

Finally, Fig.18 presents some examples of the head-on scattering of unit charge Q-lumps, where the pair of Q-lumps have different initial sizes. The parameter values for these simulations are \(\lambda_{1}=1,\,\lambda_{2}=3,\,a=-10,\,b=0,\,v=0.2\), with three different values of the initial relative phase \(\chi=\pi,\pi/2,0\). This provides a clear illustration of the significance of the initial phase in determining the outcome of the scattering, with dramatic changes in the scattering angles, sizes, and amplitudes of size oscillations, as \(\chi\) is varied.

## 6 Conclusion

The moduli space approximation has been used to study Q-lump scattering, extending previous studies in the charge two sector to higher charges by imposing cyclic symmetries that restrict the motion to a 4-dimensional manifold. Field theory simulations of Q-lump scattering have been performed for the first time, with results that show an excellent agreement with moduli space dynamics. A range of exotic scattering events have been presented that include Q-lump fission and double scattering phenomena. Field theory simulations have also been applied to situations where the moduli space approximation is not applicable, revealing that considerations of finite energy are not particularly relevant in the study of local Q-lump dynamics. There are several directions in which this work could be extended. For example, the spatial plane could be replaced by a compact manifold, such as a torus, as a natural way to unfreeze the moduli that are fixed in the planar case by finite energy considerations.
Q-lumps can also be generalized to systems in which the target space is a Kähler manifold with a continuous isometry that has at least one fixed point [7]. It might be interesting to investigate Q-lump dynamics in such systems, to see if any new features appear. Dyonic instantons provide gauge theory analogues of Q-lumps in (4+1)-dimensions, where moduli space dynamics has been applied to study scattering in the charge two sector [8, 9]. Higher charge investigations could be performed by imposing symmetries, as in the present paper for Q-lumps, and perhaps field theory simulations are just about in reach with current computing capabilities. Finally, the study of spinning topological solitons in modified \(\sigma\)-models is of general interest. In particular, spinning Skyrmions in (3+1)-dimensions can be regarded as approximations to nucleons with quantized spin, so the classical scattering of spinning Skyrmions is relevant to the study of nucleon-nucleon scattering [10, 11, 12, 13]. Q-lumps provide a simple lower-dimensional analogue of this situation, so a detailed understanding of their dynamics may provide some insight into the more complicated Skyrmion system.
2307.13613
Eigenvalue Bounds for Sum-Rank-Metric Codes
We consider the problem of deriving upper bounds on the parameters of sum-rank-metric codes, with focus on their dimension and block length. The sum-rank metric is a combination of the Hamming and the rank metric, and most of the available techniques to investigate it seem to be unable to fully capture its hybrid nature. In this paper, we introduce a new approach based on sum-rank-metric graphs, in which the vertices are tuples of matrices over a finite field, and where two such tuples are connected when their sum-rank distance is equal to one. We establish various structural properties of sum-rank-metric graphs and combine them with eigenvalue techniques to obtain bounds on the cardinality of sum-rank-metric codes. The bounds we derive improve on the best known bounds for several choices of the parameters. While our bounds are explicit only for small values of the minimum distance, they clearly indicate that spectral theory is able to capture the nature of the sum-rank-metric better than the currently available methods. They also allow us to establish new non-existence results for (possibly nonlinear) MSRD codes.
Aida Abiad, Antonina P. Khramova, Alberto Ravagnani
2023-07-25T16:18:41Z
http://arxiv.org/abs/2307.13613v2
# Eigenvalue Bounds for Sum-Rank-Metric Codes

###### Abstract

We consider the problem of deriving upper bounds on the parameters of sum-rank-metric codes, with focus on their dimension and block length. The sum-rank metric is a combination of the Hamming and the rank metric, and most of the available techniques to investigate it seem to be unable to fully capture its hybrid nature. In this paper, we introduce a new approach based on sum-rank-metric graphs, in which the vertices are tuples of matrices over a finite field, and where two such tuples are connected when their sum-rank distance is equal to one. We establish various structural properties of sum-rank-metric graphs and combine them with eigenvalue techniques to obtain bounds on the cardinality of sum-rank-metric codes. The bounds we derive improve on the best known bounds for several choices of the parameters. While our bounds are explicit only for small values of the minimum distance, they clearly indicate that spectral theory is able to capture the nature of the sum-rank metric better than the currently available methods. They also allow us to establish new nonexistence results for (possibly nonlinear) MSRD codes.

**Keywords:** sum-rank-metric code, bound, network coding, graph, spectral graph theory, \(k\)-independence number, eigenvalues, MSRD code

## 1 Introduction

In the past decade, _sum-rank-metric codes_ have featured prominently in the coding theory literature, mainly in connection with multi-shot network coding; see [4, 5, 7, 8, 9, 10, 24, 25, 26, 27, 28, 29, 30, 32, 33] among many others. In that context, sum-rank-metric codes can significantly reduce the alphabet size with respect to schemes based on ordinary rank-metric codes. Sum-rank-metric codes have been proposed for other applications as well, including space-time coding and distributed storage; see for instance [22, 27, 16, 36].
A sum-rank-metric code is a subspace of the Cartesian product of (possibly different) matrix spaces over a finite field \(\mathbb{F}_{q}\). The _sum-rank_ of a matrix tuple is the sum of the ranks of its components, and the _sum-rank distance_ between matrix tuples is the sum-rank of their difference. A code has good correction capability when it has large cardinality and all distinct matrix tuples are at large sum-rank distance from each other.

When investigating the sum-rank metric, it is often helpful to observe that it is a _hybrid_ between the rank and the Hamming metric; see [23] for a general reference about the latter. In fact, sum-rank-metric codes exhibit a behaviour similar to Hamming-metric codes when all the matrix spaces in the Cartesian product are small, and similar to rank-metric codes when the number of matrix spaces in the product is small. We are however not aware of any rigorous formalization of this behaviour. Furthermore, the techniques that have so far been proposed to investigate sum-rank-metric codes seem to perform well only in one of the two extreme cases (either small matrix spaces, or few blocks).

Even though several mathematical properties of sum-rank-metric codes have been discovered in the past years (see all the references above), it is still widely unknown what the largest cardinality of a sum-rank-metric code with given correction capability can be. This number has been computed in various special cases, most notably when the underlying field size is large and when the matrix spaces are all of the same type. Several bounds and constructions of sum-rank-metric codes are available in the literature, most of which are surveyed or established in [7].

In this paper, we introduce a novel eigenvalue method to establish upper bounds on the cardinality of a sum-rank-metric code with given correction capability. We do not impose any restrictions on the field size and on the ambient space.
In fact, our results even apply to nonlinear codes. Our approach uses algebraic graph theory and linear optimization: we construct a graph that naturally encodes the sum-rank metric, and we show that the size of the largest codes coincides with a generalization of the independence number of that graph (the so-called \(k\)-independence number). We then use results from spectral graph theory and linear programming [17, 1] (in particular, the so-called _Ratio-Type_ bound on the \(k\)-independence number, and a linear optimization method to calculate it) to estimate this quantity and derive the new eigenvalue bounds.

It turns out that our spectral bounds outperform the best bounds currently available for several instances. Moreover, our bounds can be made explicit for small values of the minimum distance (\(d=3\) and \(d=4\)), and the results indicate very clearly that spectral methods are able to capture the nature of the sum-rank metric better than the known methods. For general values of the minimum distance \(d\), a linear optimization method is provided in order to calculate the best value of the general eigenvalue bound. Our bounds heavily rely on spectral graph theory techniques and do not appear to have any elementary derivation.

As a by-product, this paper also initiates the study of sum-rank-metric graphs and their mathematical properties. More precisely, we prove that said graphs are vertex-transitive, \(\delta\)-regular (where we explicitly compute the value of \(\delta\)), and walk-regular. Note that some of these properties are indeed needed for our eigenvalue machinery to work. We also characterize when sum-rank-metric graphs are distance-regular. Moreover, we compute the entire spectrum of sum-rank-metric graphs, which is the key ingredient needed for using our bounds on the cardinality of sum-rank-metric codes.

Outline. This paper is organized as follows.
In Section 2 we define the sum-rank metric and sum-rank-metric codes, surveying the known bounds for their cardinality. In Section 3 we introduce and study the combinatorial properties of sum-rank-metric graphs. In Section 4 we present our eigenvalue bounds for sum-rank-metric graphs, and in Section 5 we compare them with the state of the art. Finally, in Section 6 we show the implications of our bounds for the block length of MSRD codes, obtaining various nonexistence results for (possibly nonlinear) MSRD codes.

## 2 Sum-Rank-Metric Codes

Throughout this paper, \(q\) denotes a prime power and \(\mathbb{F}_{q}\) is the finite field with \(q\) elements. We let \(t\) be a positive integer and let \(\mathbf{n}=(n_{1},\ldots,n_{t})\), \(\mathbf{m}=(m_{1},\ldots,m_{t})\in\mathbb{N}^{t}\) be ordered tuples with \(m_{1}\geq m_{2}\geq\cdots\geq m_{t}\) and \(n_{i}\geq m_{i}\) for all \(i\in\{1,\ldots,t\}\). We also put \(N\coloneqq n_{1}+\cdots+n_{t}\) for ease of notation.

**Definition 1**.: The **sum-rank-metric space** is the direct sum \[\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\coloneqq\bigoplus_{i=1}^{t}\mathbb{F}_{q}^{n_{i}\times m_{i}},\] which is an \(\mathbb{F}_{q}\)-linear space itself. The **sum-rank** of \(X=(X_{1},\ldots,X_{t})\in\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) is \(\operatorname{srk}(X)=\sum_{i=1}^{t}\operatorname{rk}(X_{i})\). The **sum-rank distance** between \(X,Y\in\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) is \(\operatorname{srk}(X-Y)\).

It is known and very easy to see that the map \((X,Y)\mapsto\operatorname{srk}(X-Y)\) is indeed a distance, i.e., it is symmetric, nondegenerate, and satisfies the triangle inequality.

**Remark 2**.: Note that the sum-rank distance is a generalization of both the rank distance, which is recovered for \(t=1\), and the Hamming distance, which is recovered for \(n_{i}=m_{i}=1\) for \(i\in\{1,\ldots,t\}\).

The key players of this paper are sum-rank-metric codes, defined as follows.
**Definition 3**.: A (**sum-rank-metric**) **code** is a non-empty subset \(\mathcal{C}\subseteq\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\). The **minimum** (**sum-rank**) **distance** of a code \(\mathcal{C}\) with \(|\mathcal{C}|\geq 2\) is defined via \[\operatorname{srk}(\mathcal{C})\coloneqq\min\{\operatorname{srk}(X-Y)\,:\,X,Y\in\mathcal{C},\,X\neq Y\}.\] In this context, the elements of \(\mathcal{C}\) are called **codewords**.

This paper focuses on the largest cardinality a sum-rank-metric code can have for a given minimum distance.

**Notation 1**.: We let \(A_{q}(\mathbf{n},\mathbf{m},d)\) denote the largest cardinality of a sum-rank-metric code in \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) with the property that any two distinct codewords of \(\mathcal{C}\) are at sum-rank distance at least \(d\) from each other.

In our approach, we will study sum-rank-metric codes using techniques from spectral graph theory. We thus introduce the following object.

**Definition 4**.: The **sum-rank-metric graph** is the graph whose vertices are the elements of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) and whose edges are the pairs \((X,Y)\in\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})^{2}\) with the property that \(\operatorname{srk}(X-Y)=1\). We denote it by \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\).

In the case where \(t=1\), \(\mathbf{n}=(n)\), and \(\mathbf{m}=(m)\), the notation \(\operatorname{Mat}(n,m,\mathbb{F}_{q})\) and \(\Gamma(\operatorname{Mat}(n,m,\mathbb{F}_{q}))\) will sometimes be used instead of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) and \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\). Such a graph is called a **rank-metric graph**, which is also known in the literature as a **bilinear forms graph**.
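To make Definitions 1-4 concrete, the following sketch (plain Python; all names are illustrative, not from the paper) computes ranks over a prime field by Gaussian elimination and builds the sum-rank-metric graph for the small parameters \(q=2\), \(\mathbf{n}=(2,1)\), \(\mathbf{m}=(2,1)\). Since the neighbours of a vertex \(X\) are exactly the tuples \(X+D\) with \(\operatorname{srk}(D)=1\), the graph should be regular of degree \(\sum_{i}(q^{n_{i}}-1)(q^{m_{i}}-1)/(q-1)\), which here is \(9+1=10\) by the standard count of rank-one matrices.

```python
from itertools import product

def rank_gfp(mat, p):
    """Rank of a matrix (sequence of rows) over the prime field GF(p)."""
    m = [list(row) for row in mat]
    rank = 0
    for c in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][c] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], -1, p)          # modular inverse (Python >= 3.8)
        m[rank] = [x*inv % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c] % p:
                f = m[r][c]
                m[r] = [(x - f*y) % p for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

def sum_rank_dist(X, Y, p):
    """Sum-rank distance between tuples of equal-shaped matrices over GF(p)."""
    return sum(rank_gfp([[(a - b) % p for a, b in zip(ra, rb)]
                         for ra, rb in zip(Xi, Yi)], p)
               for Xi, Yi in zip(X, Y))

def all_matrices(n, m, q):
    """All n x m matrices over GF(q), encoded as tuples of row-tuples."""
    return [tuple(tuple(v[i*m:(i + 1)*m]) for i in range(n))
            for v in product(range(q), repeat=n*m)]

# Gamma(Mat(n, m, F_2)) for n = (2, 1), m = (2, 1): 2^(4+1) = 32 vertices.
q = 2
vertices = list(product(all_matrices(2, 2, q), all_matrices(1, 1, q)))
degrees = [sum(1 for Y in vertices if sum_rank_dist(X, Y, q) == 1)
           for X in vertices]
print(len(vertices), set(degrees))
```

This prints `32 {10}`: the graph has 32 vertices and is 10-regular, matching the 9 rank-one \(2\times 2\) matrices plus the single rank-one \(1\times 1\) matrix over \(\mathbb{F}_{2}\).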
For the rank metric, the Singleton bound is always met [15], so there has not been a combinatorial investigation of the graphs associated with it. Even though the sum-rank-metric graph is defined by capturing only the pairs \((X,Y)\) at distance \(1\), it encodes all distances between pairs of tuples of matrices. This fact follows from the following result.

**Proposition 5** (see [8, Proposition 4.3]).: _The geodesic distance of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) coincides with the sum-rank distance._

### Overview of Known Bounds

In this subsection we concisely recall the main bounds currently known on sum-rank-metric codes. To the best of our knowledge, the overview offered in [7] is up to date and we refer to that for more details. At the end of the subsection we also present a new bound based on Ramsey theory. While the bound is not expected to be good in general, it shows an interesting connection between coding theory and Ramsey theory.

In [7], the authors observe that if \(\mathcal{C}\) is a sum-rank-metric code in \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) of size at least \(2\) with \(\operatorname{srk}(\mathcal{C})\geq d\), then the size of \(\mathcal{C}\) is upper bounded by the size of the largest Hamming-metric code over the field \(\mathbb{F}_{q^{m}}\) of length \(N\) and minimum distance at least \(d\), where \(m=\max\{m_{1},\ldots,m_{t}\}\). This observation implies the following bounds.

**Theorem 6** (see [7, Theorem III.1]).: _Let \(m=\max\{m_{1},\ldots,m_{t}\}\) and let \(\mathcal{C}\subseteq\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) be a sum-rank-metric code with \(|\mathcal{C}|\geq 2\) and \(\operatorname{srk}(\mathcal{C})\geq d\).
The following hold._ \[\textbf{Induced Singleton bound:}\quad|\mathcal{C}|\leq q^{m(N-d+1)}\] \[\textbf{Induced Hamming bound:}\quad|\mathcal{C}|\leq\left\lfloor \frac{q^{mN}}{\sum_{s=0}^{\lfloor(d-1)/2\rfloor}\binom{N}{s}(q^{m}-1)^{s}}\right\rfloor,\] \[\textbf{Induced Plotkin bound:}\quad|\mathcal{C}|\leq\left\lfloor \frac{q^{m}d}{q^{m}d-(q^{m}-1)N}\right\rfloor\text{ if }d>(q^{m}-1)N/q^{m}\text{,}\] \[\textbf{Induced Elias bound:}\quad\quad|\mathcal{C}|\leq\left\lfloor \frac{Nd(q^{m}-1)}{q^{m}w^{2}-2Nw(q^{m}-1)+(q^{m}-1)Nd}\cdot\frac{q^{mN}}{V_{w }(\mathbb{F}_{q^{m}}^{N})}\right\rfloor.\] _In the Induced Elias bound, \(w\) is any integer between \(0\) and \(N(q^{m}-1)/q^{m}\) such that the denominator is positive, and \(V_{w}(\mathbb{F}_{q^{m}}^{N})=\sum_{i=0}^{w}\binom{N}{i}(q^{m}-1)^{i}\), i.e. the cardinality of any ball of radius \(w\) in \(\mathbb{F}_{q^{m}}^{N}\) with respect to the Hamming distance._ The following results are also taken from [7], but are not induced by Hamming-metric codes. Each of the bounds has a separate proof, for which the reader is referred to [7]. **Theorem 7** (see [7, Theorems III.2, III.6-III.8]).: _Let \(\mathcal{C}\subseteq\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) be a code with \(|\mathcal{C}|\geq 2\) and \(\operatorname{srk}(\mathcal{C})\geq d\). Let \(j\) and \(\delta\) be the unique integers satisfying \(d-1=\sum_{i=1}^{j-1}n_{i}+\delta\) and \(0\leq\delta\leq n_{j}-1\). Let \(\ell\leq t-1\) and \(\delta^{\prime}\leq n_{\ell+1}-1\) be the unique positive integers such that \(d-3=\sum_{j=1}^{\ell}n_{j}+\delta^{\prime}\). Define \(\mathbf{n}^{\prime}=(n_{\ell+1}-\delta^{\prime},n_{\ell+2},\ldots,n_{t})\) and \(\mathbf{m}^{\prime}=(m_{\ell+1},m_{\ell+2},\ldots,m_{t})\). Finally, let \(Q=\sum_{i=1}^{t}q^{-m_{i}}\). 
The following hold._ \[\textbf{Singleton bound:}\quad|\mathcal{C}|\leq q^{\sum_{i=j}^{t}m_{i}n_{i}-m_{j}\delta}\text{,}\] \[\textbf{Sphere-Packing bound:}\quad|\mathcal{C}|\leq\left\lfloor\frac{|\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})|}{V_{r}(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))}\right\rfloor\text{, where }r=\lfloor(d-1)/2\rfloor\text{,}\] \[\textbf{Projective Sphere-Packing bound:}\quad|\mathcal{C}|\leq\left\lfloor\frac{|\operatorname{Mat}(\mathbf{n}^{\prime},\mathbf{m}^{\prime},\mathbb{F}_{q})|}{V_{1}(\operatorname{Mat}(\mathbf{n}^{\prime},\mathbf{m}^{\prime},\mathbb{F}_{q}))}\right\rfloor\text{ if }3\leq d\leq N\text{,}\] _together with a_ \(\textbf{Total Distance bound}\) _expressed in terms of_ \(Q\)_, for which we refer to [7]._
Using the same connection to the size of the largest Hamming-metric code, we can extend the Ramsey-type bound given in [11, Theorem 2.1] to the sum-rank-metric case. Let \(R(k;r,s)\) be a **set-coloring Ramsey number**, which is the minimum positive integer \(n\) such that in every edge-coloring \(\chi:E(K_{n})\to\binom{[r]}{s}\), where each edge is mapped onto a set of \(s\) colors chosen from a palette of \(r\) colors, there exists a monochromatic \(k\)-clique. The following theorem establishes the first connection between sum-rank-metric codes and set-coloring Ramsey numbers. **Theorem 8**.: _Let \(m=\max\{m_{1},\ldots,m_{t}\}\). Let \(\mathcal{C}\subseteq\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) be a sum-rank-metric code with \(|\mathcal{C}|\geq 2\) and \(\operatorname{srk}(\mathcal{C})\geq d\) with \(d<N\). Also, let \(a\) and \(b\) be positive integers such that \(b<a\). Finally, let \(r=aN\) and \(s=db\). If \(q^{m}=R(k;a,b)-1\), then there exists a constant \(c^{\prime}>0\) such that, for any integers \(k,r,s\) with \(k\geq 3\) and \(r>s\geq 1\), we have_ \[|\mathcal{C}|<R(k;r,s)\leq 2^{c^{\prime}k(r-s)^{2}r^{-1}\log\left(\frac{r}{\min(s,r-s)}\right)}.\] Proof.: Following the same observation as the one used in Theorem 6, the size of \(\mathcal{C}\) is bounded by the size of the largest Hamming-metric code over \(\mathbb{F}_{q^{m}}\) of length \(N\).
The connection between set-coloring Ramsey numbers and the size of the largest Hamming code used for the lower bound on \(R(k;r,s)\) is given in [11, Theorem 2.1]. The upper bound on \(R(k;r,s)\) is given by [11, Theorem 1.1].

## 3 The Sum-Rank-Metric Graph

In this section, several structural properties of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) are investigated, including regularity, distance-regularity, and walk-regularity. One of the reasons for investigating these, besides mathematical curiosity, is to identify what kind of existing eigenvalue bounds on the \(k\)-independence number can be applied to \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\). As we will see in later sections, these techniques lead to upper bounds for the size of sum-rank-metric codes that improve the best known bounds for several parameters. **Notation 2**.: Throughout the paper, unless otherwise specified, we will use \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) to denote the (not necessarily distinct) eigenvalues of the adjacency matrix of a graph with \(n\) vertices. To denote the \(r+1\) _distinct_ eigenvalues of the adjacency matrix we will use \(\theta_{0},\theta_{1},\ldots,\theta_{r}\), with the multiplicity of \(\theta_{i}\) denoted by \(m(\theta_{i})\). In general we assume \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) and \(\theta_{0}>\theta_{1}>\cdots>\theta_{r}\), unless explicitly specified otherwise. The adjacency relation on the vertices of a graph is denoted by \(\sim\) and the **degree** of a vertex is the number of edges that are incident to it. Recall that the **Cartesian product** of graphs \(G\) and \(H\) is the graph \(G\times H\) whose vertices are the elements of the Cartesian product of the vertices of \(G\) and \(H\), and where vertices \((g_{1},h_{1})\) and \((g_{2},h_{2})\) are adjacent if either \(g_{1}=g_{2}\) and \(h_{1}\sim h_{2}\), or \(g_{1}\sim g_{2}\) and \(h_{1}=h_{2}\).
This definition can be inductively extended to a Cartesian product of any finite number of graphs. The following result shows that the sum-rank-metric graph is the Cartesian product of bilinear forms graphs. **Proposition 9**.: _Let \(\Gamma_{i}=\Gamma(\operatorname{Mat}(n_{i},m_{i},\mathbb{F}_{q}))\) for \(i\in\{1,\ldots,t\}\). Then \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is the Cartesian product \(\Gamma_{1}\times\cdots\times\Gamma_{t}\)._ Proof.: We prove it by induction on \(t\). For \(t=1\), the claim is trivial. Suppose \(t\geq 2\) and let \(X_{i}\) and \(Y_{i}\) be two matrices from \(\operatorname{Mat}(n_{i},m_{i},\mathbb{F}_{q})\) which correspond to two vertices of \(\Gamma_{i}\) for \(i\in\{1,\ldots,t\}\). By the induction hypothesis, \(\Gamma^{\prime}:=\Gamma_{1}\times\cdots\times\Gamma_{t-1}\) is \(\Gamma(\operatorname{Mat}(\mathbf{n}^{\prime},\mathbf{m}^{\prime},\mathbb{F}_{q}))\), where \(\mathbf{n}^{\prime}=(n_{1},\ldots,n_{t-1})\) and \(\mathbf{m}^{\prime}=(m_{1},\ldots,m_{t-1})\). Consider the Cartesian product \(\Gamma_{1}\times\cdots\times\Gamma_{t-1}\times\Gamma_{t}\). The vertex set of this graph is clearly \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\), and two vertices \((X_{1},\ldots,X_{t})\) and \((Y_{1},\ldots,Y_{t})\) are adjacent if and only if one of the following holds: * \((X_{1},\ldots,X_{t-1})=(Y_{1},\ldots,Y_{t-1})\) and \(X_{t}\sim Y_{t}\) in \(\Gamma_{t}\), which is equivalent to \(\operatorname{srk}((X_{1},\ldots,X_{t-1})-(Y_{1},\ldots,Y_{t-1}))=0\) and \(\operatorname{rk}(X_{t}-Y_{t})=1\); * \((X_{1},\ldots,X_{t-1})\sim(Y_{1},\ldots,Y_{t-1})\) in \(\Gamma^{\prime}\) and \(X_{t}=Y_{t}\), which is equivalent to \(\operatorname{srk}((X_{1},\ldots,X_{t-1})-(Y_{1},\ldots,Y_{t-1}))=1\) and \(\operatorname{rk}(X_{t}-Y_{t})=0\). In either case it follows that \(\operatorname{srk}((X_{1},\ldots,X_{t})-(Y_{1},\ldots,Y_{t}))=1\).
Moreover, these are the only two possibilities when \((X_{1},\ldots,X_{t})\) and \((Y_{1},\ldots,Y_{t})\) are adjacent in \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\). Hence \(\Gamma_{1}\times\cdots\times\Gamma_{t}\) is exactly \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\), as claimed. Note that the graphs \(\Gamma_{i}\) defined in Proposition 9 are bilinear forms graphs. More information about them can be found, for example, in [6, Section 9.5.A]. Recall that a graph is \(a\)**-regular** if the degree of every vertex is \(a\). A graph is called **vertex-transitive** if for any two vertices \(u,v\) there exists a graph automorphism \(f\) such that \(f(u)=v\). A graph is a **Cayley graph** over a group \(G\) with a connecting set \(S\subseteq G\) if the vertices of the graph are the elements of \(G\), and two vertices \(x,y\) are adjacent if and only if there exists an element \(s\in S\) such that \(x+s=y\). We assume \(S\) does not contain the neutral element of \(G\) and is closed under taking inverses, so that the Cayley graph is always undirected and without loops. The following result shows that the sum-rank-metric graph is a Cayley graph, which implies that it is also vertex-transitive and hence regular. **Proposition 10**.: _The graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is the Cayley graph over the additive group \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) with connecting set \(S=\{X\in\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\mid \operatorname{srk}(X)=1\}\). In particular, \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is vertex-transitive and \(\delta\)-regular with \(\delta=\frac{1}{q-1}\sum_{i=1}^{t}(q^{n_{i}}-1)(q^{m_{i}}-1)\)._ Proof.: By definition of the sum-rank-metric graph, if two vertices \(A\) and \(B\) are adjacent, then \(\operatorname{srk}(A-B)=1\). Clearly \(A-B\in S\), the connecting set of the Cayley graph.
On the other hand, for an arbitrary \(X\in S\), the vertices \(A\) and \(A+X\) are adjacent since \(\operatorname{srk}(A+X-A)=\operatorname{srk}(X)=1\). Thus, the graph is indeed Cayley, and hence vertex-transitive. The regularity parameter \(\delta\), which is also the largest eigenvalue of the spectrum of the graph, can be calculated because it is the cardinality of the connecting set \(S\), i.e., the neighborhood of \(O\). Since \(\operatorname{srk}(X)=1\) implies that there exists a single \(i\in\{1,\ldots,t\}\) such that \(\operatorname{rk}(X_{i})=1\) while \(\operatorname{rk}(X_{j})=0\) for all \(j\neq i\), we have \[\delta=\frac{1}{q-1}\sum_{i=1}^{t}(q^{n_{i}}-1)(q^{m_{i}}-1)\] by [6, Theorem 9.5.2]. We now turn to the (partial) walk-regularity of sum-rank-metric graphs. Recall that a graph is \(l\)**-partially walk-regular** if for any vertex \(v\) and any positive integer \(i\leq l\) the number of closed walks of length \(i\) that start and end in \(v\) does not depend on the choice of \(v\). A graph is **walk-regular** if it is \(l\)-partially walk-regular for any positive integer \(l\). As already mentioned, the main motivation for studying the walk-regularity of the sum-rank-metric graph is to check the applicability of spectral methods to control the graph's \(k\)-independence number, which we will present later in Section 4. **Proposition 11**.: _The graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is walk-regular._ Proof.: The claim immediately follows from the observation that, for any fixed positive integer \(k\), there is a bijection between the set of closed \(k\)-walks that start at \(O\) (the vertex corresponding to the tuple of zero matrices) and the set of closed \(k\)-walks that start at any other vertex \(v=(X_{1},X_{2},\ldots,X_{t})\).
The bijection from walks starting with \(O\) to walks starting with \(v\) is induced by the bijection of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) given by \((Y_{1},Y_{2},\ldots,Y_{t})\mapsto(Y_{1}+X_{1},Y_{2}+X_{2},\ldots,Y_{t}+X_{t})\). Another important property of a graph is (partial) distance-regularity. Recall that a graph is \(l\)**-partially distance-regular** if for a pair of vertices \(x,y\) with \(d(x,y)=i\leq l\), the number of vertices at distance \(j\) from \(x\) and at distance \(h\) from \(y\) with \(i+j\leq l\) only depends on \(i,j,h\) (and not on the choice of the vertices \(x\), \(y\)). Let us use \(p^{i}_{j,h}(x,y)\) to denote this number. Then note that a graph is \(l\)-partially distance-regular if the values \(c_{i}(x,y)=p^{i}_{1,i-1}(x,y)\), \(a_{i-1}(x,y)=p^{i-1}_{1,i-1}(x,y)\), \(b_{i-2}(x,y)=p^{i-2}_{1,i-1}(x,y)\) only depend on \(i\leq l\) and not on the choice of the vertices \(x,y\). If a graph is \(l\)-partially distance-regular for any \(l\), then it is called **distance-regular**. In a distance-regular graph \(\Gamma\) with diameter \(D=\max_{x,y\in\Gamma}d(x,y)\), the values \(a_{i}=a_{i}(x,y)\), \(b_{i}=b_{i}(x,y)\), and \(c_{i}=c_{i}(x,y)\) for \(i=0,\ldots,D\) form the **intersection array** of the graph. Because distance-regular graphs are also \(\delta\)-regular for some \(\delta\), we have \(a_{i}+b_{i}+c_{i}=\delta\) for any suitable \(i\), and in particular, \(b_{0}=\delta\) and \(a_{0}=c_{0}=0\). This implies that, to define the intersection array of a distance-regular graph, it is sufficient to define the values \(b_{i}\) and \(c_{i}\). Note that any \(l\)-partially distance-regular graph is also \(l\)-partially walk-regular, but the converse is not true in general. For instance, we will see in Proposition 12 that the sum-rank-metric graphs, despite being walk-regular (Proposition 11), are not even necessarily \(2\)-partially distance-regular.
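This contrast can be observed computationally. The sketch below (our own, not code from the paper) builds \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{2}))\) for \(\mathbf{n}=(2,1)\), \(\mathbf{m}=(2,1)\), checks that the diagonals of \(A^{2},A^{3},A^{4}\) are constant (consistent with Proposition 11), and exhibits two vertices at distance \(2\) from \(O\) with different values of \(c_{2}(\cdot,O)\); matrices are encoded as bit-tuples.

```python
from itertools import product

def rank2x2(m):
    # Rank over F_2 of [[a,b],[c,d]] stored as the bit-tuple (a,b,c,d).
    if m == (0, 0, 0, 0):
        return 0
    a, b, c, d = m
    return 2 if (a * d + b * c) % 2 == 1 else 1

# Vertices of Gamma(Mat((2,1),(2,1),F_2)): a 2x2 block and a 1x1 block.
V = [(M, z) for M in product(range(2), repeat=4) for z in range(2)]
n = len(V)

def srk(u, v):
    diff = tuple((x + y) % 2 for x, y in zip(u[0], v[0]))
    return rank2x2(diff) + (1 if u[1] != v[1] else 0)

A = [[1 if srk(V[i], V[j]) == 1 else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

# Walk-regularity: diag(A^k) should be constant (checked here for k = 2, 3, 4).
P, walk_regular = A, True
for _ in range(3):
    P = matmul(P, A)
    walk_regular = walk_regular and len({P[i][i] for i in range(n)}) == 1

# c_2(v, O): neighbours of v lying at distance 1 from the all-zero vertex O.
O = ((0, 0, 0, 0), 0)
def c2(v):
    i = V.index(v)
    return sum(1 for j in range(n) if A[i][j] == 1 and srk(V[j], O) == 1)

v_rank2 = ((1, 0, 0, 1), 0)  # (identity 2x2 block, 0): srk distance 2 from O
v_mixed = ((1, 0, 0, 0), 1)  # (rank-1 block, nonzero 1x1 block): also distance 2
c2_rank2, c2_mixed = c2(v_rank2), c2(v_mixed)
```

Here \(c_{2}(\cdot,O)\) takes the values \(6\) and \(2\) on the two chosen vertices, so the intersection number \(c_{2}\) is not well defined, exactly as in the proof of Proposition 12.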
The next result characterizes for which parameters sum-rank-metric graphs are (partially) distance-regular. **Proposition 12**.: _Suppose \(t\geq 2\). Then the graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is distance-regular if and only if \(n_{i}=1\) and \(m_{i}=m\) for all \(i\in\{1,\ldots,t\}\). The intersection array in that case is given by \(b_{i}=(t-i)(q^{m}-1)\) and \(c_{i}=i\) for all \(i\in\{0,\ldots,t\}\). Moreover, if \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is not distance-regular, then it is also not \(2\)-partially distance-regular._ Proof.: (\(\Leftarrow\)) To prove that \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is distance-regular, it is enough to show that for vertices \(v\) and \(u\) with \(d(v,u)=i\) the values \(a_{i}\), \(b_{i}\), and \(c_{i}\) (which are the number of neighbors of \(v\) at distance \(i\), \(i+1\), and \(i-1\) from \(u\), respectively), do not depend on the choice of \(v\) and \(u\). Since \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is vertex-transitive by Proposition 10, we may assume without loss of generality that \(u\) corresponds to \(O\), the element of \(t\) zero matrices of size \(1\times m\). Each matrix of size \(1\times m\) has rank at most \(1\). Since \(d(u,v)=i\), the element corresponding to \(v\) is a collection of \(i\) non-zero and \(t-i\) zero matrices. To obtain the number of neighbors of \(v\) at distance \(i\) from \(O\) we choose one of the \(i\) non-zero matrices of \(v\) and substitute it for a different non-zero \(1\times m\) matrix. There are \(q^{m}-2\) possibilities for this, so \(a_{i}=i(q^{m}-2)\). For the number of neighbors of \(v\) at distance \(i+1\) from \(O\), we take one of the \(t-i\) zero matrices of \(v\) and replace it with one of \(q^{m}-1\) non-zero matrices of size \(1\times m\). Hence \(b_{i}=(t-i)(q^{m}-1)\). 
To get a neighbor of \(v\) at distance \(i-1\) from \(O\), we choose one of the \(i\) non-zero matrices of \(v\) and replace it with a zero matrix, which means \(c_{i}=i\). It is easy to verify that \(a_{i}+b_{i}+c_{i}=\delta\), the regularity of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) given by Proposition 10. Moreover, none of these values depends on the choice of \(v\), as required. (\(\Rightarrow\)) Next, we show that \(n_{i}=1\) and \(m_{i}=m\) is the only case when \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is distance-regular. For this, we consider the following two possibilities separately. First, suppose \(n_{i}=1\) for all \(i\in\{1,\ldots,t\}\) but \(m_{i}\neq m_{j}\) for some \(i,j\in\{1,\ldots,t\}\). As before, we attempt to calculate \(a_{i}(v,O)\), where \(O\) is the vertex corresponding to an element of all-zero matrices. Without loss of generality assume that \(m_{i}\in\{m,m^{\prime}\}\) for all \(i\in\{1,\ldots,t\}\) and \(m\neq m^{\prime}\). Then \(a_{i}(v,O)\) is given by \(j(q^{m}-2)+k(q^{m^{\prime}}-2)\), where \(j\) is the number of non-zero matrices of size \(1\times m\) in \(v\) and \(k\) is the number of non-zero matrices of size \(1\times m^{\prime}\) in \(v\). Clearly \(j+k=i\), but the exact values of both \(j\) and \(k\) depend on the choice of \(v\), so the value \(a_{i}\) cannot be determined from \(i\) alone. Note that, in particular, \(a_{2}(v,O)\) depends on the choice of \(v\), so in this case the graph is also not \(2\)-partially distance-regular. Next, we consider the possibility that \(n_{j}\geq 2\) for some \(j\in\{1,\ldots,t\}\), which implies that now some elements of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) include matrices of rank \(2\).
Consider two vertices \(v\) and \(v^{\prime}\) of the graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\): the vertex \(v\) is an element of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) with \(i\) matrices of rank \(1\) and \(t-i\) matrices of rank \(0\), while \(v^{\prime}\) is an element with one matrix \(X_{j}\) of rank \(2\), \(i-2\) matrices of rank \(1\), and \(t-i+2\) matrices of rank \(0\). Both these elements have sum-rank equal to \(i\), which means that both vertices \(v\) and \(v^{\prime}\) are at distance \(i\) from \(O\). We now attempt to calculate \(c_{i}(v,O)\) and \(c_{i}(v^{\prime},O)\). For the vertex \(v\), \(c_{i}(v,O)=i\) by the same argument as the one presented in the "if" part of this proof. For the vertex \(v^{\prime}\), we apply a similar argument to obtain \(c_{i}(v^{\prime},O)=i-2+k\), where \(k\) denotes the number of matrices \(M\) of size \(n_{j}\times m_{j}\) such that \(\operatorname{rk}(M)=1\) and \(\operatorname{rk}(X_{j}-M)=1\). In order for \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) to be a distance-regular graph, \(k\) must be exactly \(2\). However, it is easy to construct three different matrices that satisfy the requirement. Indeed, suppose \(\mathbf{x_{1}}\) and \(\mathbf{x_{2}}\) are two linearly independent non-zero vectors such that each row of \(X_{j}\) can be presented as \(\alpha_{i}\mathbf{x_{1}}+\beta_{i}\mathbf{x_{2}}\) for \(i\in\{1,\ldots,n_{j}\}\). Suppose that \(M_{1}\), \(M_{2}\) and \(M_{3}\) are \(n_{j}\times m_{j}\) matrices such that the rows of \(M_{1}\) are \(\alpha_{i}\mathbf{x_{1}}\), the rows of \(M_{2}\) are \(\beta_{i}\mathbf{x_{2}}\), and the rows of \(M_{3}\) are \(\alpha_{i}(\mathbf{x_{1}}-\mathbf{x_{2}})\) for \(i\in\{1,\ldots,n_{j}\}\). It is easy to see that \(\operatorname{rk}(X_{j}-M_{i})=1\) and \(\operatorname{rk}(M_{i})=1\) for \(i\in\{1,2,3\}\).
Hence, \(c_{i}(x,O)\) depends on the choice of the vertex \(x\), which contradicts the distance-regularity of the graph. Note that, in particular, \(c_{2}(x,O)\) depends on the choice of \(x\), so in the case where \(n_{j}\geq 2\) the graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is also not \(2\)-partially distance-regular. Note that in the case \(n_{i}=1\) and \(m_{i}=m\) for some \(m\) and all \(i\in\{1,\ldots,t\}\), the space \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) with the sum-rank distance can be seen as \(\mathbb{F}_{q^{m}}^{t}\) with the Hamming metric. In other words, when \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is a distance-regular graph, then it is a Hamming-metric graph. We also recall that a graph is **distance-transitive** if for any \(i\) and any two pairs of vertices \(v,w\) and \(x,y\) such that \(d(v,w)=d(x,y)=i\) there exists an automorphism of the graph that maps \(v\) to \(x\) and \(w\) to \(y\). It is well known that distance-transitive graphs are also distance-regular. In particular, sum-rank-metric graphs cannot be distance-transitive in general by Proposition 12. The next two lemmas treat the case \(t=1\), where sum-rank-metric graphs reduce to graphs of bilinear forms. **Lemma 13**.: _Suppose \(t=1\). Then the graph \(\Gamma(\operatorname{Mat}(n,m,\mathbb{F}_{q}))\) is distance-transitive with diameter \(n\) and the intersection array given by_ \[b_{i} =\frac{1}{q-1}q^{2i}(q^{m-i}-1)(q^{n-i}-1),\] \[c_{i} =\frac{1}{q-1}q^{i-1}(q^{i}-1),\] _for all \(i\in\{0,\ldots,n\}\)._ Proof.: This is an immediate consequence of [6, Theorem 9.5.2]. We say that a distance-regular graph has **classical parameters** \((D,q,\alpha,\beta)\) if it has diameter \(D\) and intersection array given by \[b_{i}=\frac{q^{D}-q^{i}}{q-1}\left(\beta-\alpha\frac{q^{i}-1}{q-1}\right),\quad c_{i}=\frac{q^{i}-1}{q-1}\left(1+\alpha\frac{q^{i-1}-1}{q-1}\right),\] for all \(i\in\{0,\ldots,D\}\).
Note that the graph \(\Gamma(\operatorname{Mat}(n,m,\mathbb{F}_{q}))\) is a distance-regular graph with classical parameters \((n,q,q-1,q^{m}-1)\), whose eigenvalues are given in the following result. **Lemma 14**.: _Suppose \(t=1\). Then the graph \(\Gamma(\operatorname{Mat}(n,m,\mathbb{F}_{q}))\) has \(n+1\) distinct eigenvalues given by_ \[\theta_{i}=\frac{(q^{n-i}-1)(q^{m}-q^{i})-q^{i}+1}{q-1},\quad i=0,\ldots,n,\] _with respective multiplicities given by_ \[m(\theta_{i})=\prod_{s=0}^{i-1}\frac{(q^{n-s}-1)(q^{m}-q^{s})}{q^{s+1}-1}.\] Proof.: This is an immediate consequence of [6, Corollary 8.4.2], which provides the eigenvalues, and of [6, Theorem 8.4.3], which gives the multiplicities. Proposition 9 combined with Lemma 14 and classical graph theory results gives us a complete description of the spectrum of the sum-rank-metric graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\). **Proposition 15**.: _The distinct eigenvalues of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) are the numbers of the form_ \[\sum_{j=1}^{t}\frac{(q^{n_{j}-i_{j}}-1)(q^{m_{j}}-q^{i_{j}})-q^{i_{j}}+1}{q-1},\] _for \(i_{j}\in\{0,\ldots,n_{j}\}\) and for \(j\in\{1,\ldots,t\}\). Their respective multiplicities are_ \[\prod_{j=1}^{t}\prod_{s=0}^{i_{j}-1}\frac{(q^{n_{j}-s}-1)(q^{m_{j}}-q^{s})}{q^{s+1}-1}.\] Proof.: It is a well-known fact that, if \(\Gamma_{1}\) and \(\Gamma_{2}\) are graphs with distinct eigenvalues \(\lambda_{i}\), \(i\in\{1,\ldots,n_{1}\}\), and \(\mu_{j}\), \(j\in\{1,\ldots,n_{2}\}\), then the eigenvalues of their Cartesian product \(\Gamma_{1}\times\Gamma_{2}\) have the form \(\lambda_{i}+\mu_{j}\) for \((i,j)\in\{1,\ldots,n_{1}\}\times\{1,\ldots,n_{2}\}\) (see for instance [13]). This can be inductively extended to a Cartesian product of \(t\geq 2\) graphs. Combining this fact with Proposition 9 and Lemma 14, we obtain the desired result.
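Proposition 15 can also be sanity-checked mechanically. The sketch below (our own toy example: \(q=2\), \(\mathbf{n}=(1,2)\), \(\mathbf{m}=(1,2)\), so \(32\) vertices) compares the power sums of the predicted spectrum with closed-walk counts of the actual graph; matching \(\operatorname{tr}(A^{k})\) for \(k=1,\ldots,n\) pins down the eigenvalue multiset exactly. Closed walks are counted from \(O\) only, which suffices by walk-regularity (Proposition 11).

```python
from itertools import product

q = 2

def rank2x2(m):
    # Rank over F_2 of a 2x2 matrix stored as the bit-tuple (a,b,c,d).
    if m == (0, 0, 0, 0):
        return 0
    a, b, c, d = m
    return 2 if (a * d + b * c) % 2 == 1 else 1

# Gamma(Mat((1,2),(1,2),F_2)): one 1x1 block and one 2x2 block.
V = [(z, M) for z in range(2) for M in product(range(2), repeat=4)]
n = len(V)

def srk(u, v):
    diff = tuple((x + y) % 2 for x, y in zip(u[1], v[1]))
    return rank2x2(diff) + (1 if u[0] != v[0] else 0)

A = [[1 if srk(V[i], V[j]) == 1 else 0 for j in range(n)] for i in range(n)]

def block_spectrum(nb, mb):
    # Eigenvalues and multiplicities of Gamma(Mat(nb,mb,F_q)), as in Lemma 14.
    spec = []
    for i in range(nb + 1):
        theta = ((q**(nb - i) - 1) * (q**mb - q**i) - q**i + 1) // (q - 1)
        mult = 1
        for s in range(i):
            mult = mult * (q**(nb - s) - 1) * (q**mb - q**s) // (q**(s + 1) - 1)
        spec.append((theta, mult))
    return spec

# Proposition 15: eigenvalues of the product are sums of block eigenvalues.
spectrum = {}
for (t1, m1), (t2, m2) in product(block_spectrum(1, 1), block_spectrum(2, 2)):
    spectrum[t1 + t2] = spectrum.get(t1 + t2, 0) + m1 * m2

# Compare power sums: tr(A^k) = n * (A^k)_{O,O} by walk-regularity.
O_idx = V.index((0, (0, 0, 0, 0)))
w = [1 if i == O_idx else 0 for i in range(n)]  # the vector e_O
spectrum_matches = sum(spectrum.values()) == n
for k in range(1, n + 1):
    w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]  # w = A^k e_O
    spectrum_matches = spectrum_matches and \
        n * w[O_idx] == sum(m * t**k for t, m in spectrum.items())
```

For these parameters the predicted spectrum is \(\{10^{1},8^{1},2^{9},0^{9},(-2)^{6},(-4)^{6}\}\), and all \(32\) power sums agree with the walk counts.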
## 4 New Eigenvalue Bounds for Sum-Rank-Metric Codes

In graph theory, the \(k\)**-independence number** of a graph \(G\), denoted by \(\alpha_{k}\), is the size of the largest set of vertices such that any two vertices in the set are at distance greater than \(k\) from each other. Alternatively, one can consider the \(k\)**-th power graph** \(G^{k}\) with the same vertex set as \(G\), such that two vertices in \(G^{k}\) are adjacent if and only if the distance between these two vertices in \(G\) is at most \(k\). Then the \(k\)-independence number of \(G\) is also the \(1\)-independence number of \(G^{k}\), or the largest size of an independent set in \(G^{k}\). Note that, to apply spectral bounds on the \(1\)-independence number of the \(k\)-th power graph, one needs to know how the spectrum of \(G^{k}\) relates to the spectrum of \(G\). In general, this relation is not known. In [1], sharp eigenvalue upper bounds on the \(k\)-independence number \(\alpha_{k}\) of a graph were derived. In particular, Ratio-Type and Inertia-Type bounds were presented, which generalize the well-known Cvetkovic's [12] and Hoffman's bounds (unpublished, see e.g. [19]) for the classic \(1\)-independence number. Both bounds depend on a choice of a real polynomial \(p\) of degree \(k\) that can be found using mixed-integer linear programming (MILP). In [2], the authors present a MILP implementation for the Inertia-Type bound for general graphs. In [17], an LP approach was proposed to find the best Ratio-Type bound for partially walk-regular graphs using minor polynomials, which has a connection to Delsarte's LP method. In particular, it was shown in the same paper [17] that the LP bound on certain families of Hamming graphs coincides with Delsarte's LP bound (more details on Delsarte's bound are given later on in this section). In [17], a Ratio-Type bound was used to show the nonexistence of \(1\)-perfect codes in odd graphs.
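The two equivalent descriptions of \(\alpha_{k}\) can be illustrated on a toy graph (the \(6\)-cycle; the example is ours, not the paper's):

```python
from itertools import combinations

n = 6  # vertices 0,...,5 of the cycle C_6

def dist(i, j):
    # Geodesic distance in C_6.
    d = abs(i - j)
    return min(d, n - d)

def alpha_k(k):
    # Largest vertex set whose members are pairwise at distance greater than k.
    best = 1
    for size in range(2, n + 1):
        for S in combinations(range(n), size):
            if all(dist(i, j) > k for i, j in combinations(S, 2)):
                best = max(best, size)
    return best

def alpha_power(k):
    # The same number, phrased as the independence number of the power graph
    # G^k, where two distinct vertices are adjacent iff their distance is <= k.
    best = 1
    for size in range(2, n + 1):
        for S in combinations(range(n), size):
            if all(not dist(i, j) <= k for i, j in combinations(S, 2)):
                best = max(best, size)
    return best
```

In \(C_{6}\) one gets \(\alpha_{1}=3\) (alternate vertices), \(\alpha_{2}=2\) (an antipodal pair), and \(\alpha_{3}=1\), with both formulations agreeing.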
The following result establishes a link between coding theory and graph theory. It will play a key role in establishing our main results in Section 5. **Corollary 16**.: \(A_{q}(\mathbf{n},\mathbf{m},d)=\alpha_{d-1}\big{(}\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\big{)}\)_._ Proof.: It follows from Proposition 5 that \(A_{q}(\mathbf{n},\mathbf{m},d)=\alpha_{d-1}\), where \(\alpha_{d-1}\) is the \((d-1)\)-independence number of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\). Indeed, let \(S\) be a \((d-1)\)-independent set in \(V(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})))\) of the largest cardinality. By definition, the geodesic distance between any two vertices in \(S\) is at least \(d\). Let \(S^{\prime}\) be the subset of elements of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) to which the vertices from \(S\) correspond. By Proposition 5, the geodesic distance between any two vertices is the same as the sum-rank distance. Therefore we have \(A_{q}(\mathbf{n},\mathbf{m},d)\geq|S^{\prime}|=|S|=\alpha_{d-1}\). On the other hand, let \(\mathcal{C}\) be a sum-rank-metric code with the largest cardinality in \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\). Let \(\mathcal{C}^{\prime}\) be the corresponding set of vertices in \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\). The geodesic distance between the vertices is equal to the sum-rank distance, so it is at least \(d\) for any pair of vertices in \(\mathcal{C}^{\prime}\). Hence \(A_{q}(\mathbf{n},\mathbf{m},d)=|\mathcal{C}|=|\mathcal{C}^{\prime}|\leq\alpha_{d-1}\). It follows from Corollary 16 that any upper bound on the \(k\)-independence number of a graph yields an upper bound for sum-rank-metric codes, and vice versa. The next result provides the main bound we will be using in this paper, the so-called Ratio-Type bound, which we will later apply to sum-rank-metric graphs.
**Theorem 17** (Ratio-Type bound; see [1]).: _Let \(G=(V,E)\) be a regular graph with \(n\) vertices and adjacency matrix \(A\) with eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{n}\). Let \(p\in\mathbb{R}_{k}[x]\). Define \(W(p)=\max_{u\in V}\{(p(A))_{uu}\}\) and \(\lambda(p)=\min_{i=2,\ldots,n}\{p(\lambda_{i})\}\). Then_ \[\alpha_{k}(G)\leq n\,\frac{W(p)-\lambda(p)}{p(\lambda_{1})-\lambda(p)}.\] For a fixed \(k\), the challenge behind applying the Ratio-Type bound is to find a polynomial of degree \(k\) which minimizes the right-hand side of the inequality above. In [17], an LP bound using minor polynomials was proposed for finding, for a given \(k\) and a \(k\)-partially walk-regular graph \(G\), the polynomial \(p\) that optimizes the above Ratio-Type bound; this is the approach we will be using. In this LP, the inputs are the distinct eigenvalues of the graph \(G\), denoted \(\theta_{0}>\cdots>\theta_{r}\), with respective multiplicities \(m(\theta_{i})\), \(i\in\{0,\ldots,r\}\). The **minor polynomial** \(f_{k}\in\mathbb{R}_{k}[x]\) is the polynomial that minimizes \(\sum_{i=0}^{r}m(\theta_{i})f_{k}(\theta_{i})\). Let \(p=f_{k}\) be defined by \(f_{k}(\theta_{0})=x_{0}=1\) and \(f_{k}(\theta_{i})=x_{i}\) for \(i\in\{1,\ldots,r\}\), where the vector \((x_{1},\ldots,x_{r})\) is a solution of the following linear program: \[\begin{array}{lll}\mbox{minimize}&\sum_{i=0}^{r}m(\theta_{i})x_{i}&\\ \mbox{subject to}&f[\theta_{0},\ldots,\theta_{s}]=0,&s=k+1,\ldots,r\\ &x_{i}\geq 0,&i=1,\ldots,r\end{array}\] Here, \(f[\theta_{i},\ldots,\theta_{j}]\) denotes the divided differences of Newton interpolation, recursively defined by \[f[\theta_{i},\ldots,\theta_{j}]=\frac{f[\theta_{i+1},\ldots,\theta_{j}]-f[\theta_{i},\ldots,\theta_{j-1}]}{\theta_{j}-\theta_{i}},\] where \(j>i\), starting with \(f[\theta_{i}]=x_{i}\) for \(i\in\{0,\ldots,r\}\).
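For a connected regular graph that is also walk-regular (as sum-rank-metric graphs are, by Proposition 11), \(W(p)\) equals \(\frac{1}{n}\operatorname{tr}(p(A))\) and \(\lambda(p)\) is the minimum of \(p\) over the non-principal distinct eigenvalues, so the bound of Theorem 17 can be evaluated from the spectrum alone. A minimal sketch under those assumptions (the helper is ours, not code from the paper):

```python
def ratio_type_bound(spectrum, p):
    # spectrum: list of (eigenvalue, multiplicity) pairs, with the principal
    # (largest, simple) eigenvalue listed first; p: the chosen polynomial.
    n = sum(m for _, m in spectrum)
    theta0 = spectrum[0][0]
    W = sum(m * p(t) for t, m in spectrum) / n   # = max of diag(p(A)) here
    lam = min(p(t) for t, _ in spectrum[1:])     # min over the other eigenvalues
    # alpha_k is an integer, so the right-hand side can be floored.
    return int(n * (W - lam) / (p(theta0) - lam))

# k = 1 with p(x) = x is Hoffman's classical bound; on K_4 it gives alpha_1 <= 1.
hoffman_k4 = ratio_type_bound([(3, 1), (-1, 3)], lambda x: x)

# On the bilinear forms graph Gamma(Mat(2,2,F_2)), whose spectrum is
# {9^1, 1^9, (-3)^6} by Lemma 14, Hoffman's bound gives alpha_1 <= 4.
hoffman_bilinear = ratio_type_bound([(9, 1), (1, 9), (-3, 6)], lambda x: x)
```

The second bound is sharp: a maximum rank-distance code in \(\operatorname{Mat}(2,2,\mathbb{F}_{2})\) with minimum distance \(2\) has \(q^{m(n-d+1)}=4\) codewords.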
In [17] it was shown that for \(k=3\) it is possible to obtain tight bounds for every Hamming graph \(H(2,r)\) using this LP approach with minor polynomials, and it is also shown that for these graphs the Ratio-Type bound coincides with Delsarte's LP bound [14]. **Remark 18**.: We should note that, in order to be applied, Delsarte's LP bound [14] requires a symmetric association scheme, which is a special partition of \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\times\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) into \(n+1\) binary relations \(R_{0},R_{1},\ldots,R_{n}\in\mathcal{R}\). As part of the definition, for any \(R_{i},R_{j},R_{k}\in\mathcal{R}\) the number of \(z\in\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) such that \((x,z)\in R_{i}\) and \((z,y)\in R_{j}\) is the same for any \(x,y\) such that \((x,y)\in R_{k}\). Assuming the association scheme is the one naturally induced by the sum-rank distance (that is, \((x,y)\in R_{i}\) if and only if \(\operatorname{srk}(x-y)=i\) for all \(i\)), this condition implies that the graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) must be distance-regular. By Proposition 12 and Remark 2, this is the case only when the sum-rank metric reduces to the rank or to the Hamming metric, which is a classical setting for the Delsarte LP bound. In the general case, when \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is not distance-regular (for a specific example, we refer to [8, Example 4.4]), one can still define an association scheme (or, more generally, a coherent configuration) by "refining" the sum-rank distance and then apply Delsarte's LP bound to this scheme. This will be studied in detail by the authors elsewhere. Next we will look at formulae for the Ratio-Type bound for small \(k\) and any regular graph \(G\).
In the case where \(k=1\), \(\alpha_{1}\) is exactly the independence number, and by choosing \(p(x)=x\) we get the classical Hoffman bound (unpublished, see e.g. [19]). In the particular cases where \(k=2\) and \(k=3\) (corresponding to \(d=3\) and \(d=4\), respectively), the best polynomials that optimize the Ratio-Type bound from Theorem 17 have been found, resulting in the following more explicit bounds. **Theorem 19** (Ratio-Type bound, \(d=3\); see [1]).: _Let \(G\) be a regular graph with \(n\) vertices and distinct eigenvalues of the adjacency matrix \(\theta_{0}>\theta_{1}>\cdots>\theta_{r}\) with \(r\geq 2\). Let \(\theta_{i}\) be the largest eigenvalue such that \(\theta_{i}\leq-1\). Then_ \[\alpha_{2}(G)\leq n\frac{\theta_{0}+\theta_{i}\theta_{i-1}}{(\theta_{0}-\theta_{i})(\theta_{0}-\theta_{i-1})}.\] _Moreover, this is the best possible bound that can be obtained by choosing a polynomial via Theorem 17._ The polynomial giving the best Ratio-Type bound has also been found in the case \(d=4\), with a whole paper devoted to the derivation of the polynomial [20]. This gives an indication of the difficulty of making the Ratio-Type bound explicit. **Theorem 20** (Ratio-Type bound, \(d=4\); see [20]).: _Let \(G\) be a regular graph with \(n\) vertices and distinct eigenvalues of the adjacency matrix \(\theta_{0}>\theta_{1}>\cdots>\theta_{r}\) with \(r\geq 3\). Let \(s\) be the largest index such that \(\theta_{s}\geq-\frac{\theta_{0}^{2}+\theta_{0}\theta_{r}-\Delta}{\theta_{0}(\theta_{r}+1)}\), where \(\Delta=\max_{u\in V}\{(A^{3})_{uu}\}\). Then_ \[\alpha_{3}(G)\leq n\frac{\Delta-\theta_{0}(\theta_{s}+\theta_{s+1}+\theta_{r})-\theta_{s}\theta_{s+1}\theta_{r}}{(\theta_{0}-\theta_{s})(\theta_{0}-\theta_{s+1})(\theta_{0}-\theta_{r})}.\] _Moreover, this is the best possible bound that can be obtained by choosing a polynomial via Theorem 17._ **Remark 21**.: In this paper we mainly focus on the applications of the previous two theorems.
These only allow us to impose that our sum-rank-metric codes have minimum distance bounded from below by \(3\) or \(4\). While these codes are not particularly interesting from an application viewpoint, the goal of our paper is to give evidence that spectral graph theory captures the sum-rank metric better than the currently available methods. Note that the analogues of Theorems 19 and 20 for larger \(d\), or even some weaker versions of such results, would immediately allow studying codes with larger minimum distance via our approach. We leave that to future (graph theory) research. The bounds of Theorems 19 and 20 have been applied to several families of Hamming graphs, which are a special case of sum-rank-metric graphs due to Remark 2; see [17, 20] for more details on the Hamming metric. Recall that in Proposition 11 we showed that the graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is walk-regular. It follows from the definition of walk-regularity that, if \(A\) is the adjacency matrix of a walk-regular graph on \(n\) vertices, and \(p\) is a polynomial with real coefficients, then the diagonal of \(p(A)\) is constant with entries \(\frac{1}{n}\operatorname{tr}(p(A))\), where \(\operatorname{tr}(p(A))\) is the trace of \(p(A)\). This means in particular that \(\Delta\) in Theorem 20 can be calculated as \(\Delta=\frac{1}{n}\sum_{i=0}^{r}\theta_{i}^{3}m(\theta_{i})\), where \(m(\theta_{i})\) denotes the multiplicity of the eigenvalue \(\theta_{i}\) of \(A\). In [3] the authors pointed out a connection between \(\alpha_{k}(G_{i})\) for some positive integer \(k\) and \(i\in\{1,\ldots,t\}\) and the \(k\)-independence number of the Cartesian product \(G_{1}\times\cdots\times G_{t}\).
Namely, when extended by induction from a Cartesian product of two graphs to a Cartesian product of \(t\) graphs, we obtain: **Lemma 22** (see [3, Theorem 7]).: _For any \(t\) graphs \(G_{1},G_{2},\ldots,G_{t}\),_ \[\alpha_{k}(G_{1}\times\cdots\times G_{t})\leq\min \left(\alpha_{k}(G_{1})\cdot|V_{2}|\cdots|V_{t}|,\alpha_{k}(G_{2}) \cdot|V_{1}|\cdot|V_{3}|\cdots|V_{t}|,\ldots,\right.\] \[\left.\alpha_{k}(G_{t})\cdot|V_{1}|\cdots|V_{t-1}|\right).\] _where \(V_{i}\) is the vertex set of \(G_{i}\) for \(i\in\{1,\ldots,t\}\)._ We know from Proposition 9 that the sum-rank-metric graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is the Cartesian product of respective bilinear forms graphs, so Lemma 22 can be applied to them. We can also use the fact that the cardinality of the vertex set \(V_{i}\) of a bilinear forms graph \(\Gamma(n_{i},m_{i},\mathbb{F}_{q})\) for some integers \(n_{i},m_{i}\) is given by \(|V_{i}|=q^{n_{i}m_{i}}\). In this case, the bound of Lemma 22 becomes \[\alpha_{k}(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))) \leq\alpha_{k}(\Gamma(n_{l},m_{l},\mathbb{F}_{q}))\cdot q^{\sum_{i=1}^{t}n_{ i}m_{i}-m_{l}n_{l}},\] where \(l\in\{1,\ldots,t\}\) is such that \(\alpha_{k}(\Gamma(n_{l},m_{l},\mathbb{F}_{q}))\cdot q^{\sum_{i=1}^{t}n_{i}m_{i }-m_{l}n_{l}}\) is minimal. By comparing this to the Singleton bound of Theorem 7 one can conclude that the latter would be hard to improve using Lemma 22. Indeed, from a computational check of sum-rank-metric graphs on up to \(1000\) vertices, we have not found any instances where the bound of Lemma 22 performs better than the bounds of Theorems 6 and 7 as well as the Ratio-Type bound of Theorem 17 when it is applied to the sum-rank-metric graph itself without utilizing its Cartesian product structure. **Remark 23**.: 1. A useful upper bound on the \(k\)-independence number of a graph \(G\) is the _Lovasz theta number_ of \(G^{k}\), denoted by \(\vartheta_{k}\). 
Recall that \(G^{k}\) denotes the \(k\)-th power of \(G\), which is the graph with the same vertex set as \(G\), and where two vertices \(u\) and \(v\) in \(G^{k}\) are adjacent if and only if the distance between \(u\) and \(v\) in \(G\) is at most \(k\). The value of \(\vartheta_{k}\) usually provides a good bound on \(\alpha_{k}\), and it can be estimated using Semidefinite Programming (SDP) as follows [21]: For a graph \(G\) on \(n\) vertices, let \(A=(a_{ij})\) range over all \(n\times n\) symmetric matrices such that \(a_{ii}=1\) for any \(i\) and \(a_{ij}=1\) for any distinct \(i,j\) such that the respective vertices of \(G\) are non-adjacent. Then \(\vartheta=\min_{A}\lambda_{\max}(A)\), where \(\lambda_{\max}(A)\) is the largest eigenvalue of \(A\). In fact, it was shown in [34] that, for graphs derived from symmetric association schemes, the Lovasz theta number coincides with the bound obtained through Delsarte's LP method [14]. Tables 1 and 2 contain examples of sum-rank-metric graphs for which the Lovasz theta number \(\vartheta_{k}\), the Ratio-Type bound, and the bounds from Theorems 6 and 7 are calculated. Note that the Lovasz theta bound is never worse than the Ratio-Type bound. However, in practice, the Ratio-Type bound can be calculated in an exact and quicker way, while the Lovasz theta number provides an approximation. Indeed, in order to use the Ratio-Type bound, we only need to know the adjacency spectrum, which we know for our graph of interest by Proposition 15. In contrast, the Lovasz theta number \(\vartheta_{k}\) requires the whole adjacency matrix of the graph in order to be calculated. 2. Other graph theoretical bounds on the \(k\)-independence number are known in the literature. For example, [18] introduces a bound on \(\alpha_{k}\) in terms of the average distance in a connected graph. 
In [31], the authors show upper bounds which use the minimum and the maximum degree of the graph, along with infinitely many regular graphs for which these bounds are attained. However, there is no example of a non-distance-regular sum-rank-metric graph up to 1000 vertices for which these graph theoretical bounds perform better than the Ratio-Type bound of Theorem 17. ## 5 Explicit Bounds and Comparisons In this section we turn to concrete applications of our eigenvalue bounds, comparing them with some of the known bounds on sum-rank-metric codes and giving evidence that they improve them. At the end of the section we provide some tables showing that our eigenvalue bounds improve on the best currently known bound. **Theorem 24**.: _Let \(G=\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) with \(\mathbf{n}=(n,1,\ldots,1)\) and \(\mathbf{m}=(m,1,\ldots,1)\), with \(m\geq n\) for some integers \(m\geq 2\) and \(n\geq 1\). Then \(\alpha_{2}(G)\) is upper bounded by_ \[\frac{q^{mn+t-1}(q-1)\left((q-1)(\varepsilon+1)(\varepsilon-q+1)+(q^{m}-1) \left(q^{n}-1\right)+(q-1)^{2}(t-1)\right)}{\left(\varepsilon(q-1)+(q^{m}-1) \left(q^{n}-1\right)+(q-1)^{2}(t-1)+1\right)\left(\varepsilon(q-1)+(q^{m}-1) \left(q^{n}-1\right)+(q-1)^{2}(t-2)\right)} \tag{1}\] _where \(\varepsilon=(t-1)\mod q\)._ Proof.: We first note that \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is the Cartesian product of \(\Gamma(\operatorname{Mat}(n,m,\mathbb{F}_{q}))\) and the Hamming graph \(H(t-1,q)\) on \(q^{t-1}\) vertices. The eigenvalues of \(\Gamma(\operatorname{Mat}(n,m,\mathbb{F}_{q}))\) are given by Lemma 14. The eigenvalues of the Hamming graph are well-known and defined by \((q-1)(t-1)-qj\) for \(j=0,\ldots,t-1\). 
Hence by Proposition 15 the eigenvalues of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) are \[\lambda_{i}+(q-1)(t-1)-qj\text{ for }i=0,\ldots,n\text{ and }j=0,\ldots,t-1,\] where \(\lambda_{i}=\frac{(q^{n-i}-1)(q^{m}-q^{i})-q^{i}+1}{q-1}\). Let \(\theta_{i}\) and \(\theta_{i-1}\) be eigenvalues as in Theorem 19, meaning \(\theta_{i}\) is the largest eigenvalue such that \(\theta_{i}\leq-1\) while \(\theta_{i-1}\) is the smallest eigenvalue such that \(\theta_{i-1}>-1\). Consider the spectrum of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) described above. When fixing \(\lambda_{i}\), we have \(t\) distinct eigenvalues, and the two eigenvalues among them that are closest to \(-1\) are \[\lambda_{i}+(q-1)(t-1)-q\left\lceil\frac{\lambda_{i}+(q-1)(t-1)+1}{q}\right\rceil\] and \[\lambda_{i}+(q-1)(t-1)+q-q\left\lceil\frac{\lambda_{i}+(q-1)(t-1)+1}{q}\right\rceil.\] Note that, for any integer \(a\) and any integer \(b>0\), the expression \(b\left\lceil\frac{a}{b}\right\rceil\) is equal to \(a\) if \(a\mod b\equiv 0\) or to \(a+b-(a\mod b)\) otherwise. Observe that \(\lambda_{i}\equiv q-1\mod q\), and hence \(\lambda_{i}+(q-1)(t-1)+1\equiv-(t-1)\mod q\). Then the two eigenvalues above can be rewritten as \[-1-(t-1)\mod q\] and \[q-1-(t-1)\mod q.\] Let \(\varepsilon=(t-1)\mod q\), and note that the number of vertices of \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is \(q^{mn+t-1}\), and it is a \(\delta\)-regular graph with \(\delta=\frac{(q^{n}-1)(q^{m}-1)}{q-1}+(t-1)(q-1)\). Then the Ratio-Type bound on \(\alpha_{2}\) from Theorem 19 gives the desired result. We now apply the bound of Theorem 24 in two special cases where we can carefully compare our results with the state of the art. 
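The arithmetic at the heart of the proof of Theorem 24, rewriting the two eigenvalues closest to \(-1\) as \(-1-\varepsilon\) and \(q-1-\varepsilon\), can be checked numerically. The following Python sketch is illustrative only; the helper name is ours.

```python
def closest_pair(lam, q, t):
    """Two eigenvalues nearest -1 in the fiber of lam, as in the proof of Theorem 24."""
    M = lam + (q - 1) * (t - 1)
    c = -(-(M + 1) // q)            # integer ceiling of (M + 1)/q
    return M - q * c, M + q - q * c

# The proof uses that every lambda_i is congruent to q - 1 modulo q,
# so it suffices to test lam = (q - 1) + q*k.
for q in (2, 3, 5):
    for t in range(2, 12):
        eps = (t - 1) % q
        for k in range(20):
            assert closest_pair((q - 1) + q * k, q, t) == (-1 - eps, q - 1 - eps)
print("reduction to -1 - eps and q - 1 - eps verified")
```

The check exercises the ceiling identity stated in the proof over several fields and block lengths.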
### The case \(\mathbf{n}=(1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(d=3\) We consider the ambient space \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) with \(\mathbf{n}=(1,\ldots,1)\) and \(\mathbf{m}=(m,1,\ldots,1)\) for an arbitrary \(t\geq 2\) and \(m\geq 2\). In this case, the graph \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is the Cartesian product of the complete graph on \(q^{m}\) vertices \(K_{q^{m}}\) and the Hamming graph \(H(t-1,q)\) on \(q^{(t-1)}\) vertices. From the Ratio-Type bound in (1) we derive the bound \[A_{q}(\mathbf{n},\mathbf{m},3)\leq q^{m+t-1}\frac{q^{m}-1+(t-1)(q-1)+(- \varepsilon-1)(q-\varepsilon-1)}{(q^{m}+(q-1)(t-1)+\varepsilon)(q^{m}+(q-1)( t-1)+\varepsilon-q)}, \tag{2}\] where \(\varepsilon=(t-1)\mod q\). In the rest of this section, we analyze the bounds of Theorems 6 and 7, comparing them with the bound in (2). To evaluate the Sphere-Packing bound of Theorem 7 for \(d=3\), we require the cardinality of a ball in \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) of radius \(1\), which is exactly \(\delta+1\), where \(\delta\) is the degree of the graph as in the proof of Theorem 24. This leads us to \[A_{q}(\mathbf{n},\mathbf{m},3)\leq\frac{q^{m+t-1}}{q^{m}+(t-1)(q-1)}.\] **Lemma 25**.: _Let \(\mathbf{n}=(1,\ldots,1)\) and \(\mathbf{m}=(m,1,\ldots,1)\). Then the approximation for \(A_{q}(\mathbf{n},\mathbf{m},3)\) given by the Ratio-Type bound (2) is not worse than the approximation using the Sphere-Packing bound of Theorem 7._ Proof.: It suffices to prove that the inequality \[q^{m+t-1}\frac{q^{m}-1+(t-1)(q-1)+(-\varepsilon-1)(q-\varepsilon-1)}{(q^{m}+( q-1)(t-1)+\varepsilon)(q^{m}+(q-1)(t-1)+\varepsilon-q)}\leq\frac{q^{m+t-1}}{q^{m}+( t-1)(q-1)} \tag{3}\] always holds. 
By multiplying both sides of the inequality by \(\frac{1}{q^{m+t-1}}\) and both denominators, all of which are positive, and subtracting \((q^{m}+(t-1)(q-1))^{2}\) from both sides, we obtain an inequality equivalent to (3), namely \[(-1+(-\varepsilon-1)(q-\varepsilon-1))(q^{m}+(t-1)(q-1))\leq\\ (2\varepsilon-q)(q^{m}+(t-1)(q-1))+\varepsilon(\varepsilon-q).\] The latter is equivalent to \(\varepsilon(\varepsilon-q)(q^{m}+(q-1)(t-1)-1)\leq 0\), which is always true since \(\varepsilon\geq 0\), \(q^{m}+(q-1)(t-1)-1>0\), and \(\varepsilon-q<0\). It is easy to see that the Singleton bound of Theorem 7, in the case we are considering in this section, reduces to \[A_{q}(\mathbf{n},\mathbf{m},3)\leq q^{t-2}.\] **Lemma 26**.: _Let \(\mathbf{n}=(1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(t\geq q^{m}+2\). Then the approximation for \(A_{q}(\mathbf{n},\mathbf{m},3)\) given by the Ratio-Type bound (2) is not worse than the approximation using the Singleton bound of Theorem 7._ Proof.: This will follow as a corollary of Theorem 32 below. Recall that the Total-Distance bound only applies when \[d>\sum_{i=1}^{t}n_{i}-\sum_{i=1}^{t}q^{-m_{i}}.\] In the case analyzed in this section, this reduces to \(3>t-q^{-m}-(t-1)/q\), which is in turn equivalent to \((t-1)q^{m-1}+1>(t-3)q^{m}\). **Lemma 27**.: _Let \(\mathbf{n}=(1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(t\neq 3\). Then the approximation for \(A_{q}(\mathbf{n},\mathbf{m},3)\) given by the Ratio-Type bound (2) is not worse than the approximation using the Total-Distance bound of Theorem 7, whenever the latter bound is applicable._ Proof.: Observe that the inequality \((t-1)q^{m-1}+1>(t-3)q^{m}\) holds if and only if either \((t-1)q^{m-1}=(t-3)q^{m}\) or \((t-1)q^{m-1}>(t-3)q^{m}\). It is easy to see that, assuming \(t\geq 2\) and \(q\geq 2\), the latter inequality only holds for \(t=3\), while the only integer solutions of the equation are \((t,q)=(4,3)\) and \((t,q)=(5,2)\). 
To prove the lemma, we need to show that the inequality \[q^{m+t-1}\frac{q^{m}-1+(t-1)(q-1)+(-\varepsilon-1)(q-\varepsilon-1)}{(q^{m}+ (q-1)(t-1)+\varepsilon)(q^{m}+(q-1)(t-1)+\varepsilon-q)}\leq\frac{3}{3-t+ \frac{1}{q^{m}}+\frac{t-1}{q}} \tag{4}\] holds for all cases when the Total Distance bound is applicable, excluding the case \(t=3\). If \(t=4\) and \(q=3\) then \(\varepsilon=(4-1)\mod 3=0\), and inequality (4) reduces to \[3^{m+3}\frac{3^{m}-1+6-2}{(3^{m}+6)(3^{m}+6-3)} \leq\frac{3\cdot 3^{m}}{3\cdot 3^{m}-4\cdot 3^{m}+1+3\cdot 3^{m-1}}\] \[3^{2}\frac{1}{3^{m}+6} \leq 1,\] which is true for all positive \(m\). If \(t=5\) and \(q=2\), then \(\varepsilon=(5-1)\mod 2=0\). The inequality (4) becomes \[2^{m+4}\frac{2^{m}-1+4-1}{(2^{m}+4)(2^{m}+4-2)} \leq\frac{3\cdot 2^{m}}{3\cdot 2^{m}-5\cdot 2^{m}+1+4\cdot 2^{m-1}}\] \[2^{4}\frac{1}{2^{m}+4} \leq 3\] \[16-12 \leq 3\cdot 2^{m},\] which is true for all \(m\geq 1\). Note that if \(t\geq q^{m}\), as in the assumptions of Lemma 26, then the condition \(t\neq 3\) of Lemma 27 follows automatically from \(m>1\). Using arguments similar to those introduced in Lemmas 25 and 27, one obtains that for any \(q\), \(m\), and \(t\) for which Lemma 26 holds, our Ratio-Type bound is not worse than Induced Singleton and Induced Hamming bounds from Theorem 6 as well. It is also not worse than Induced Plotkin bound, whenever the latter is applicable. By combining all these observations we get the following results. **Theorem 28**.: _Let \(\mathbf{n}=(1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(t\geq q^{m}+2\). 
Then the approximation for \(A_{q}(\mathbf{n},\mathbf{m},3)\) given by the Ratio-Type bound (2) is not worse than the approximation using the Sphere-Packing, Singleton, Total-Distance, Induced Singleton, Induced Hamming, and Induced Plotkin bounds for any \((t,q,m)\), whenever these bounds are applicable._ ### The case \(\mathbf{n}=(2,1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(d=3\) In this subsection we consider \(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) with \(\mathbf{n}=(2,1,\ldots,1)\) and \(\mathbf{m}=(m,1,\ldots,1)\) for some \(t\geq 2\) and \(m\geq 2\). According to Proposition 9, the graph is the Cartesian product of the bilinear forms graph \(\Gamma(\operatorname{Mat}(2,m,\mathbb{F}_{q}))\) and the Hamming-metric graph \(H(t-1,q)\) with \(q^{t-1}\) vertices. From the bound in (1) we can deduce the bound on the size of a code with minimum distance \(d=3\): \[A_{q}(\mathbf{n},\mathbf{m},3)\leq q^{2m+t-1}\frac{q^{m}+q^{m+1}-t+q(t-3-\varepsilon)+(1+\varepsilon)^{2}}{(1+q^{m}(q+1)+q(t-3)-t+\varepsilon)(1+q^{m}(q+1)+q(t-2)-t+\varepsilon)}. \tag{5}\] **Lemma 29**.: _Let \(\mathbf{n}=(2,1,\ldots,1)\) and \(\mathbf{m}=(m,1,\ldots,1)\). 
Then the approximation for \(A_{q}(\mathbf{n},\mathbf{m},3)\) given by the Ratio-Type bound (5) is not worse than the approximation using Sphere-Packing bound of Theorem 7._ Proof.: The Sphere-Packing bound reads \[A_{q}(\mathbf{n},\mathbf{m},3)\leq\frac{q^{2m+t-1}}{(q+1)(q^{m}-1)+(t-1)(q-1)}.\] Therefore the inequality we need to prove is: \[\frac{q^{2m+t-1}(q^{m}+q^{m+1}-t+q(t-3-\varepsilon)+(1+\varepsilon)^{2})}{(1+ q^{m}(q+1)+q(t-3)-t+\varepsilon)(1+q^{m}(q+1)+q(t-2)-t+\varepsilon)}\leq \frac{q^{2m+t-1}}{(q+1)(q^{m}-1)+(t-1)(q-1)}.\] This is equivalent to \[\frac{q^{m}+q^{m+1}-t+q(t-3-\varepsilon)+(1+\varepsilon)^{2}}{(1+q^{m}(q+1)+q (t-3)-t+\varepsilon)(1+q^{m}(q+1)+q(t-2)-t+\varepsilon)}\leq\frac{1}{(q+1)(q^ {m}-1)+(t-1)(q-1)},\] which is in turn equivalent to \[t\leq\varepsilon(q^{m}(q+1)+q(q-1)(t-2)-3)(q-\varepsilon)+q^{m}(q+1)+q(t-3)+2 \varepsilon+1.\] The latter inequality is clearly true for all \(t\geq 2\) and \(q\geq 2\), since \(t\leq q^{m}(q+1)+q(t-3)\) under these conditions. **Lemma 30**.: _Suppose \(\mathbf{n}=(2,1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(t>(q^{2m}-q^{m+1}-q^{m}+2q-1)/(q-1)\). Then the approximation for \(A_{q}(\mathbf{n},\mathbf{m},3)\) given by the Ratio-Type bound in (5) is not worse than the approximation given by the Singleton bound of Theorem 7._ Proof.: This will again follow from Theorem 32. Similar to Section 5.1, we can derive conditions under which the Ratio-Type bound (5) cannot perform worse than the Induced Singleton, Induced Hamming, Induced Plotkin, and Total-Distance bounds for sum-rank-metric codes described in Section 2.1. **Theorem 31**.: _Let \(d=3\), \(\mathbf{n}=(2,1,\ldots,1)\), \(\mathbf{m}=(m,1,\ldots,1)\), and \(t>\frac{q^{2m}-q^{m+1}-q^{m}+2q-1}{q-1}\). 
Then the approximation given by the Ratio-Type bound in (5) is not worse than the approximation given by the Sphere-Packing, Singleton, Total-Distance, Induced Singleton, Induced Hamming, and Induced Plotkin bounds, whenever these bounds are applicable._ ### Some Computational Results We include some tables to illustrate the performance of our eigenvalue bounds, in comparison with the best currently known bounds. As the tables show, our bounds improve on the state of the art for several parameters. Table 1 and Table 2 list the graphs \(\Gamma(\mathrm{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) on \(|V|\) vertices, \(|V|\leq 1000\), which are not distance-regular and for which the Ratio-Type bound on \(\alpha_{2}\) or \(\alpha_{3}\) does not perform worse than the bounds of Theorems 6 and 7. The columns labeled "\(\alpha_{2}\)" and "\(\alpha_{3}\)" contain the exact value of the \(k\)-independence number, for \(k=2,3\). We write "time" if the computation takes a long time on a standard laptop. Only cases for which \(\alpha_{k}\geq 2\) are included. The columns \(\vartheta_{k}\), \(k=2,3\), contain the Lovasz theta function of the graph that has the same vertex set as \(\Gamma(\mathrm{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) and where two vertices are adjacent if and only if the distance between them in \(\Gamma(\mathrm{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) is at most \(k\). This value also provides an upper bound on \(\alpha_{k}\); see Remark 23. The columns "\(\mathrm{RT}_{d-1}\)" (for \(d=3,4\)) give the values of the Ratio-Type bound, given by Theorems 19 and 20. 
The columns "\(\mathrm{iS}_{d}\)", "\(\mathrm{iH}_{d}\)", "\(\mathrm{iP}_{d}\)", "\(\mathrm{iE}_{d}\)", "\(\mathrm{S}_{d}\)", "\(\mathrm{SP}_{d}\)", "\(\mathrm{PSP}_{d}\)", "\(\mathrm{TD}_{d}\)" (for \(d=3,4\)) give the values of the Induced Singleton, Induced Hamming, Induced Plotkin, Induced Elias, Singleton, Sphere-Packing, Projective Sphere-Packing, and Total-Distance bounds, respectively. These can be found in Section 2. When a bound is not applicable, we return \(0\) as its value. By definition, the values of "\(\mathrm{SP}_{3}\)" and "\(\mathrm{PSP}_{3}\)" are the same, so only "\(\mathrm{SP}_{3}\)" is included in the table. The entries of the column "\(\mathrm{RT}_{d-1}\)" in bold correspond to the instances in which the Ratio-Type bound performs better than the applicable code-theoretical bounds. Since the code cardinality is always an integer, the columns only contain integer values, which are sometimes obtained by taking the floor of the real value given by the bound. ## 6 Applications to MSRD Codes We conclude this paper with a short section addressing the problem of the block length of a code meeting the Singleton bound in Theorem 7. Such a code is called **MSRD** (**Maximum Sum-Rank-Distance Code**). MSRD codes generalize the celebrated MDS codes in the Hamming metric, which are codes \(\mathcal{C}\subseteq\mathbb{F}_{q}^{n}\) whose dimension \(k\) and minimum distance \(d\) satisfy \(k=n-d+1\). It is a long-standing open problem to prove that the length of a linear MDS code cannot exceed \(q+1\), with very few exceptions. This is a celebrated conjecture by Segre; see [35]. The block length of linear MSRD codes was investigated in [7], with existence criteria obtained via weight distributions; see [7, Section 6]. In this section we apply eigenvalue methods and obtain results that are different from those of [7] in two ways: the parameters to which our eigenvalue bounds apply are different, and we are able to apply our results to codes that are not necessarily linear. 
In particular, linearity is not one of the assumptions of our results. The next result shows that certain MSRD codes with minimum distance \(d=3\) have block length \(t\) relatively small. The parameters investigated in [7, Theorem VI.12] are different from those studied here. **Theorem 32**.: _Let \(\mathbf{n}=(n,1,\ldots,1)\) and \(\mathbf{m}=(m,1,\ldots,1)\) for some \(t\), and suppose there exists an MSRD code \(\mathcal{C}\subseteq\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q})\) of minimum distance \(d=3\). Then_ \[t\leq\begin{cases}1+q^{m}&\text{if }n=1,\\ \frac{q^{2m}-q^{m+1}-q^{m}+2q-1}{q-1}&\text{if }n=2,\\ \frac{q^{2m+1}-q^{2m}-q^{m+n}+q^{m}+q^{n}+q^{2}-3q+1}{(q-1)^{2}}&\text{if }n>2.\end{cases}\] Proof.: We first note that, by definition and Theorem 7, we have \[|\mathcal{C}|=\begin{cases}q^{t-2}&\text{if }n=1,\\ q^{t-1}&\text{if }n=2,\\ q^{m(n-2)+t-1}&\text{if }n>2.\end{cases}\] Here we provide the proof for the case \(n=1\). The cases \(n=2\) and \(n>2\) can be shown in a similar way. Consider the inequality \[q^{m+t-1}\frac{q^{m}-1+(q-1)(t-1)+(-\varepsilon-1)(q-\varepsilon-1)}{(q^{m}+(q-1)(t-1)+\varepsilon)(q^{m}+(q-1)(t-1)+\varepsilon-q)}<q^{t-2}.\] The left-hand side is the value of the Ratio-Type bound (2) for \(n=1\), which is an upper bound on the size of \(\mathcal{C}\), assuming \(\varepsilon=(t-1)\mod q\). Since \(|\mathcal{C}|=q^{t-2}\), if for some \(t\) the inequality is true, then an MSRD code cannot exist for this \(t\). 
We divide both sides of the inequality by \(q^{t-2}\) and rewrite it as follows: \[-(q-1)^{2}t^{2}+(q-1)(q^{m+1}-2q^{m}+3q-2\varepsilon-2)t+\\ +(q-1)(q^{m}+1)(q^{m}-2q+1)+\varepsilon^{2}(q^{m+1}-1)-\varepsilon(q^{m+2}-2q^{m+1}+2q^{m}-3q+2)<0.\] We replace \(\varepsilon\) by a variable \(\varepsilon_{0}\) that takes integer values in the range from \(0\) to \(q-1\) and introduce the function \[F(t,q,m,\varepsilon_{0})=-(q-1)^{2}t^{2}+(q-1)(q^{m+1}-2q^{m}+3q-2\varepsilon_{0}-2)t+\\ +(q-1)(q^{m}+1)(q^{m}-2q+1)+\varepsilon_{0}^{2}(q^{m+1}-1)-\varepsilon_{0}(q^{m+2}-2q^{m+1}+2q^{m}-3q+2).\] Note that \(F(t,q,m,\varepsilon_{0})\) is quadratic in \(t\) with \(F(t,q,m,\varepsilon_{0})\to-\infty\) as \(t\) approaches infinity. Then \(F(t,q,m,\varepsilon_{0})<0\) is true for \[t>\frac{q\sqrt{4\varepsilon_{0}(\varepsilon_{0}+1)q^{m-1}-2(2\varepsilon_{0}+1)q^{m}+q^{2m}+1}-2\varepsilon_{0}+q^{m+1}-2q^{m}+3q-2}{2(q-1)}.\] Finally, we remove the dependency on \(\varepsilon_{0}\) by assigning it a value in the range from \(0\) to \(q-1\) that maximizes the right-hand side. For this, note that the expression under the square root can be seen as a quadratic function of \(\varepsilon_{0}\) with its minimum at \(\varepsilon_{0}=\frac{q-1}{2}\). Hence this expression is maximized by assigning either \(0\) or \(q-1\) to \(\varepsilon_{0}\). Since \(\varepsilon_{0}=0\) also maximizes the term \(-2\varepsilon_{0}\), this is the assignment we select. By substituting \(\varepsilon_{0}=0\) we get \[t>q^{m}+1.\] Hence an MSRD code can only exist if \(t\leq q^{m}+1\). Clearly, MSRD codes cannot exist when the Singleton bound of Theorem 7 is not tight. Table 3 contains a list of parameters for sum-rank-metric graphs \(\Gamma(\operatorname{Mat}(\mathbf{n},\mathbf{m},\mathbb{F}_{q}))\) on \(|V|\) vertices, \(|V|\leq 10^{5}\), which cannot contain an MSRD code because of the constraints imposed by our Ratio-Type bounds from Theorems 19 and 20. 
We exclude graphs with \(\mathbf{n}=(1,\ldots,1)\), i.e., situations that can be reduced to the Hamming metric with a mixed alphabet. Other graphs that are excluded from the table correspond to \(\mathbf{m}=(m,\ldots,m)\) for some integer \(m\), as we could not find instances where our Ratio-Type bounds give a stronger existence criterion than that presented in [7]. Table 3 includes the values of the Ratio-Type bound "\(\operatorname{RT}_{d-1}\)" as well as other bounds of Theorems 6 and 7: Induced Singleton ("\(\mathrm{iS}_{d}\)"), Induced Hamming ("\(\mathrm{iH}_{d}\)"), Induced Elias ("\(\mathrm{iE}_{d}\)"), Singleton ("\(\mathrm{S}_{d}\)", which would also be the size of an MSRD code in case it exists), Sphere-Packing ("SP\({}_{d}\)"), and Projective Sphere-Packing bounds ("PSP\({}_{d}\)"). Note that in each case presented in Table 3, neither the Induced Plotkin bound of Theorem 6 nor the Total-Distance bound of Theorem 7 can be applied, since the conditions mentioned in the respective theorems are not satisfied. Finally, Table 3 contains the parameters for which the value of the Ratio-Type bound is the _only_ bound that outperforms the Singleton bound. In other words, the table only contains parameters for which the existence of an MSRD code cannot be excluded by applying any of the previously known bounds. ### Acknowledgements We thank Alexander Gavrilyuk for his careful reading of this manuscript and suggestions. Aida Abiad is partially supported by the Dutch Research Council via grants VI.Vidi.213.085, OCENW.KLEIN.475, and by the Research Foundation - Flanders via grant 1285921N. Antonina P. Khramova is supported by the Dutch Research Council via grant OCENW.KLEIN.475. Alberto Ravagnani is supported by the Dutch Research Council via grants VI.Vidi.203.045, OCENW.KLEIN.539, and by the Royal Academy of Arts and Sciences of the Netherlands.
2303.10835
Bifurcation analysis of the Keynesian cross model
This study rigorously investigates the Keynesian cross model of a national economy with a focus on the dynamic relationship between government spending and economic equilibrium. The model consists of two ordinary differential equations regarding the rate of change of national income and the rate of consumer spending. Three dynamic relationships between national income and government spending are studied. This study aims to classify the stabilities of equilibrium states for the economy by discussing different cases of government spending. Furthermore, the implication of government spending on the national economy is investigated based on phase portraits and bifurcation analysis of the dynamical system in each scenario.
Xinyu Li
2023-03-20T02:41:16Z
http://arxiv.org/abs/2303.10835v1
Bifurcation analysis of the Keynesian cross model ###### Abstract This study rigorously investigates the Keynesian cross model of a national economy with a focus on the dynamic relationship between government spending and economic equilibrium. The model consists of two ordinary differential equations regarding the rate of change of national income and the rate of consumer spending. Three dynamic relationships between national income and government spending are studied. This study aims to classify the stabilities of equilibrium states for the economy by discussing different cases of government spending. Furthermore, the implication of government spending on the national economy is investigated based on phase portraits and bifurcation analysis of the dynamical system in each scenario. ## 1 Introduction The original Keynesian model was first proposed to explain the persistent unemployment after the Great Depression [1]. Since then, it has evolved into many modified models such as the neoclassical Keynesian model [2], the post Keynesian model [3], and the IS-LM model [4]. This paper focuses on one modification of the original Keynesian model - the Keynesian cross model [5]. In this paper, a simple model of a national economy is studied based on the Keynesian cross model. The model in this paper aims to quantify the dynamic relationship between national income and consumer spending, with special attention to the impact of government spending on economic stabilization. This paper applies bifurcation analysis, a technique commonly used in dynamical systems in mathematics, to investigate the dynamics between national income and consumer spending. The rest of the paper is organized as follows: Section 2 describes the methodology, variables, and the mathematical model used in this study; results on the economic equilibria are discussed in Section 3. Finally, Section 4 concludes the paper. 
## 2 Method The Keynesian cross model builds upon two ordinary differential equations [6]: \[\dot{I}=I-\alpha\cdot C \tag{1}\] \[\dot{C}=\beta\cdot(I-C-G) \tag{2}\] where \(C\geq 0\) is the rate of consumer spending, \(I\geq 0\) is the national income, and \(G\geq 0\) is the rate of government spending. The parameters \(\alpha\) and \(\beta\) satisfy \(1<\alpha<\infty,1\leq\beta<\infty\). Three relations between government spending and national income are discussed in the following subsections. ### Constant Government Spending G Consider a model consisting of equations (1) and (2) along with a constant government spending \(G\). To determine the equilibrium state for this model, I find the point where \(\dot{I}=\dot{C}=0\). Rearranging terms, I obtain the following equilibrium: \[(\text{I, C})=\left(\tfrac{\alpha\cdot G}{\alpha-1},\tfrac{G}{\alpha-1}\right) \tag{3}\] In order to calculate the stability of this fixed point, I compute the Jacobian matrix and eigenvalues: \[\mathbf{J}=\begin{bmatrix}\frac{\partial\dot{I}}{\partial I}&\frac{\partial\dot{I}}{\partial C}\\ \frac{\partial\dot{C}}{\partial I}&\frac{\partial\dot{C}}{\partial C}\end{bmatrix}=\begin{bmatrix}1&-\alpha\\ \beta&-\beta\end{bmatrix} \tag{4}\] \[\lambda_{1,2}=\frac{(1-\beta)\pm\sqrt{\beta^{2}+2\beta+1-4\alpha\beta}}{2} \tag{5}\] The eigenvalue expression shows that, as long as \(\beta>1\), the discriminant \(\Delta=\beta^{2}+2\beta+1-4\alpha\beta\) determines the type of the equilibrium: the eigenvalues are real and distinct if \(\Delta>0\), repeated if \(\Delta=0\), and complex if \(\Delta<0\). Since \(\alpha>1\) and \(\beta>1\), the real parts of both eigenvalues are negative in all three cases, so the equilibrium is stable. The condition \(\Delta=0\) is equivalent to \(\alpha=\frac{(\beta-1)^{2}}{4\beta}+1\). Accordingly, there exists a stable spiral when \(\alpha>\frac{(\beta-1)^{2}}{4\beta}+1\), a stable star when \(\alpha=\frac{(\beta-1)^{2}}{4\beta}+1\), and a stable node when \(\alpha<\frac{(\beta-1)^{2}}{4\beta}+1\). 
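These three regimes can be checked numerically from the Jacobian in (4). The sketch below is only illustrative: it fixes \(\beta=4\) and tests one \(\alpha\) value below, at, and above the threshold \(\frac{(\beta-1)^{2}}{4\beta}+1=1.5625\).

```python
import numpy as np

def jacobian(alpha, beta):
    # dI/dt = I - alpha*C and dC/dt = beta*(I - C - G), with G constant
    return np.array([[1.0, -alpha], [beta, -beta]])

beta = 4.0
threshold = (beta - 1) ** 2 / (4 * beta) + 1   # = 1.5625
for alpha in (1.1, threshold, 5.0):
    eig = np.linalg.eigvals(jacobian(alpha, beta))
    assert np.all(eig.real < 0)                 # stable in all three cases

assert np.all(np.isreal(np.linalg.eigvals(jacobian(1.1, beta))))    # node: real eigenvalues
assert np.any(np.iscomplex(np.linalg.eigvals(jacobian(5.0, beta)))) # spiral: complex pair
print("stable node below 1.5625, stable spiral above")
```

The same three \(\alpha\) values are used in the phase portraits below.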
When \(\beta=1\), the eigenvalues are purely imaginary and there exists a center. Figure 1 shows three phase portraits for three different \(\alpha\) values when \(\beta>1\). Figure 2 displays the phase portrait of the center when \(\beta=1\). Figure 1: Phase portraits from left to right correspond to a stable node, a stable star, and a stable spiral respectively. G = 1, \(\beta\) = 4, and \(\alpha\) from left to right is less than \(\frac{(\beta-1)^{2}}{4\beta}+1\), equal to \(\frac{(\beta-1)^{2}}{4\beta}+1\), and greater than \(\frac{(\beta-1)^{2}}{4\beta}+1\). Figure 2: Phase portrait displaying a center as the fixed point with G = 1, \(\beta\) = 1. ### Linear Relation between G and I In the second model, the government spending increases linearly with the national income: \(G=G_{0}+k\cdot I\), where \(k>0\). Now equation (2) becomes: \[\dot{C}=\beta\cdot(I-C-G_{0}-k\cdot I) \tag{6}\] Setting \(\dot{I}\) and \(\dot{C}\) to zero and rearranging terms, I obtain the following equilibrium: \[(\text{I, C})=\left(\frac{\alpha\cdot G_{0}}{\alpha(1-k)-1},\frac{G_{0}}{\alpha(1-k)-1}\right) \tag{7}\] and there exists a critical value \(k_{c}=1-\frac{1}{\alpha}\). To find an economically sensible equilibrium in the first quadrant, \(k\) must be less than \(k_{c}\). 
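The equilibrium in (7) can be verified directly by substituting it back into the vector field; the parameter values in the sketch below are chosen purely for illustration.

```python
def rates(I, C, alpha, beta, G0, k):
    """Right-hand sides dI/dt and dC/dt of the linear-spending model G = G0 + k*I."""
    return I - alpha * C, beta * (I - C - G0 - k * I)

alpha, beta, G0, k = 2.0, 4.0, 1.0, 0.25
kc = 1 - 1 / alpha                  # critical value k_c = 1 - 1/alpha = 0.5
assert k < kc                       # the economically sensible regime

D = alpha * (1 - k) - 1             # denominator in equilibrium (7)
I_eq, C_eq = alpha * G0 / D, G0 / D
dI, dC = rates(I_eq, C_eq, alpha, beta, G0, k)
assert abs(dI) < 1e-12 and abs(dC) < 1e-12   # both derivatives vanish
assert I_eq > 0 and C_eq > 0                 # equilibrium lies in the first quadrant
print(f"(I, C) = ({I_eq}, {C_eq})")
```

With these illustrative values the equilibrium is \((I,C)=(4,2)\), which indeed sits in the first quadrant.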
The Jacobian matrix and eigenvalues are then computed to determine the stability of the equilibrium: \[\mathbf{J}=\begin{bmatrix}\frac{\partial\dot{I}}{\partial I}&\frac{\partial\dot{I}}{\partial C}\\ \frac{\partial\dot{C}}{\partial I}&\frac{\partial\dot{C}}{\partial C}\end{bmatrix}=\begin{bmatrix}1&-\alpha\\ \beta(1-k)&-\beta\end{bmatrix} \tag{8}\] \[\lambda_{1,2}=\frac{(1-\beta)\pm\sqrt{(\beta-1)^{2}-4(\alpha(1-k)-1)\beta}}{2} \tag{9}\] Using the same three \(\alpha\) values as in the first model, Figure 3 displays the phase portraits for \(k<k_{c}\). Figure 3: Phase portraits from left to right correspond to a stable node, a stable star, and a stable spiral respectively. \(k<k_{c}\), G = 1, \(\beta=4\), and \(\alpha\) from left to right is \(1.1\), \(1.5625\), and \(5\). In the case when \(k>k_{c}\) and \(\beta>0\), the two eigenvalues have opposite signs. Accordingly, there exists a saddle node and its structure can be determined by analyzing the corresponding eigenvectors: \[v_{1}=\left(\begin{array}{c}\frac{\beta+\lambda_{1}}{\beta(1-k)}\\ \mathbf{1}\end{array}\right)\quad v_{2}=\left(\begin{array}{c}\frac{\beta+\lambda_{2}}{\beta(1-k)}\\ \mathbf{1}\end{array}\right) \tag{10}\] where \(v_{1}\) is the attracting eigendirection and corresponds to the stable manifold; \(v_{2}\) is the repelling eigendirection and corresponds to the unstable manifold. A visualization of this case can be found in Figure 4. The main diagonal from the top left corner to the bottom right corner is the unstable manifold; the antidiagonal from the top right corner to the bottom left corner is the stable manifold. Figure 4: Phase portraits showing unstable saddle-shaped trajectories. \(k>k_{c}\), G = 1, \(\beta=4\), and \(\alpha\) from left to right is \(1.1\), \(1.5625\), and \(5\). 
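For \(k>k_{c}\), the saddle and the eigendirections in (10) can be confirmed numerically; the parameter values below are illustrative only.

```python
import numpy as np

alpha, beta, G0, k = 5.0, 4.0, 1.0, 0.9
kc = 1 - 1 / alpha                       # k_c = 0.8, so k > k_c here
assert k > kc

# Jacobian of the linear model: dI/dt = I - alpha*C, dC/dt = beta*((1-k)*I - C - G0)
J = np.array([[1.0, -alpha], [beta * (1 - k), -beta]])
lams = np.linalg.eigvals(J)
assert lams.prod() < 0                   # det(J) < 0: eigenvalues of opposite sign, a saddle

for lam in lams:
    v = np.array([(beta + lam) / (beta * (1 - k)), 1.0])   # eigendirection from (10)
    assert np.allclose(J @ v, lam * v)
print("saddle confirmed; the vectors in (10) are eigenvectors")
```

The check multiplies each candidate direction by the Jacobian and compares against the corresponding eigenvalue scaling.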
### Nonlinear Quadratic Relation between G and I

In the third model, the government spending increases quadratically with the national income: \(G=G_{0}+k\cdot I^{2}\), where \(k>0\). Now equation (2) becomes: \[\dot{C}=\beta\cdot\left(I-C-G_{0}-k\cdot I^{2}\right) \tag{11}\] Setting \(\dot{I}\) and \(\dot{C}\) to zero and rearranging terms, I obtain the following equilibria: \[\text{(I, C)}=\left(\frac{\left(1-\alpha\right)\pm\sqrt{\left(\alpha-1\right)^{2}-4k\alpha^{2}G_{0}}}{-2\alpha k},\frac{\left(1-\alpha\right)\pm\sqrt{\left(\alpha-1\right)^{2}-4k\alpha^{2}G_{0}}}{-2\alpha^{2}k}\right) \tag{12}\] The fixed point coordinates above show that, letting \(\Delta=\left(\alpha-1\right)^{2}-4k\alpha^{2}G_{0}\), there exist two fixed points if \(\Delta>0\), one fixed point if \(\Delta=0\), and no fixed point if \(\Delta<0\). In other words, there are two fixed points if \(G_{0}<\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), one fixed point if \(G_{0}=\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), and no fixed point if \(G_{0}>\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\). Using the same procedure to compute the Jacobian matrix and eigenvalues as in the first and the second model, I computed the stabilities of the fixed points. Figure 5 visualizes fixed points and their trajectories for different \(G_{0}\) values.

## 3 Results

In the first model, where government spending is a constant, the economic equilibrium is stable as long as \(\beta>1\) because the real parts of the eigenvalues are always negative. Hence, there exists a stable equilibrium state for the economy, which can be represented by the coordinate \((\frac{\alpha\cdot G}{\alpha-1},\frac{G}{\alpha-1})\). In the limiting case where \(\beta=1\), the eigenvalues are purely imaginary and the solutions have periodic forms. Accordingly, the trajectories are closed curves and the economy is predicted to oscillate.
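The count of fixed points of the quadratic model as a function of \(G_{0}\) can be checked directly from equation (12). A small sketch (parameter values are illustrative; with \(\alpha=2\) and \(k=0.125\) the threshold is \(G_{0}=0.5\)):

```python
import math

def fixed_points(alpha, k, G0):
    """Fixed points (I, C) of the quadratic-G model from equation (12).
    Returns two, one, or zero points depending on the discriminant."""
    disc = (alpha - 1) ** 2 - 4 * k * alpha ** 2 * G0
    if disc < 0:
        return []
    roots = sorted({(1 - alpha) + math.sqrt(disc), (1 - alpha) - math.sqrt(disc)})
    return [(r / (-2 * alpha * k), r / (-2 * alpha ** 2 * k)) for r in roots]

alpha, k = 2.0, 0.125
threshold = (alpha - 1) ** 2 / (4 * alpha ** 2 * k)  # critical G0, here 0.5
print(len(fixed_points(alpha, k, 0.25)))       # G0 below threshold -> 2
print(len(fixed_points(alpha, k, threshold)))  # G0 at threshold    -> 1
print(len(fixed_points(alpha, k, 1.0)))        # G0 above threshold -> 0
```

For \(G_{0}\) below the threshold, both fixed points land in the first quadrant, consistent with the two economically sensible equilibria discussed in the results.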
In the second model, where the government spending increases linearly with the national income, the behaviour of the economic equilibrium depends on the critical value \(k_{c}=1-\frac{1}{\alpha}\). If \(k<k_{c}\), there exists an economically sensible equilibrium in the first quadrant, whereas if \(k>k_{c}\), the equilibrium becomes a saddle node with the main diagonal representing the unstable manifold and the antidiagonal representing the stable manifold. The economy will follow either branch of the unstable manifold depending on the initial condition.

In the third model, where the government spending increases quadratically with the national income, the system can have two, one, or no economic equilibrium depending on how large the initial government spending is. If \(G_{0}\) is less than \(\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), there are two equilibria in the first quadrant and one of them is a stable spiral. Hence, with the right initial conditions, the economy will approach that equilibrium. If \(G_{0}\) is equal to \(\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), there is a semi-stable equilibrium. If \(G_{0}\) is greater than \(\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), there is no equilibrium and the national income will grow continuously.

## 4 Conclusion

In this paper, I studied the Keynesian cross model of a national economy with a focus on models that describe dynamic relationships between government spending and consumer spending. Using bifurcation diagrams, I analyzed the economically sensible equilibrium states between national income and consumer spending. I discovered that in an economy with constant government spending, the equilibrium is stable if \(\beta>1\) and has a periodic form if \(\beta=1\). In an economy with linearly increasing government spending, the equilibrium is stable if \(k<k_{c}\).
Otherwise, the equilibrium becomes a saddle node and the economy will follow either branch of the unstable manifold depending on the initial condition. In an economy with quadratically increasing government spending, there are two equilibria in the first quadrant, one of which is a stable spiral, provided the initial government spending \(G_{0}\) is less than \(\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\). In the case that \(G_{0}\) is equal to \(\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), there exists a semi-stable equilibrium. Interestingly, when \(G_{0}\) is greater than \(\frac{\left(\alpha-1\right)^{2}}{4\alpha^{2}k}\), there is no equilibrium and the national income is predicted to grow continuously. Future researchers can investigate other nonlinear relationships between government spending and national income, such as the case when government spending increases cubically with the national income.
2305.05608
The Role of Relevance in Fair Ranking
Online platforms mediate access to opportunity: relevance-based rankings create and constrain options by allocating exposure to job openings and job candidates in hiring platforms, or sellers in a marketplace. In order to do so responsibly, these socially consequential systems employ various fairness measures and interventions, many of which seek to allocate exposure based on worthiness. Because these constructs are typically not directly observable, platforms must instead resort to using proxy scores such as relevance and infer them from behavioral signals such as searcher clicks. Yet, it remains an open question whether relevance fulfills its role as such a worthiness score in high-stakes fair rankings. In this paper, we combine perspectives and tools from the social sciences, information retrieval, and fairness in machine learning to derive a set of desired criteria that relevance scores should satisfy in order to meaningfully guide fairness interventions. We then empirically show that not all of these criteria are met in a case study of relevance inferred from biased user click data. We assess the impact of these violations on the estimated system fairness and analyze whether existing fairness interventions may mitigate the identified issues. Our analyses and results surface the pressing need for new approaches to relevance collection and generation that are suitable for use in fair ranking.
Aparna Balagopalan, Abigail Z. Jacobs, Asia Biega
2023-05-09T16:58:23Z
http://arxiv.org/abs/2305.05608v2
# The Role of Relevance in Fair Ranking

###### Abstract.

Online platforms mediate access to opportunity: relevance-based rankings create and constrain options by allocating exposure to job openings and job candidates in hiring platforms, or sellers in a marketplace. In order to do so responsibly, these socially consequential systems employ various fairness measures and interventions, many of which seek to allocate exposure based on _worthiness_. Because these constructs are typically not directly observable, platforms must instead resort to using proxy scores such as _relevance_ and infer them from behavioral signals such as searcher clicks. Yet, it remains an open question whether relevance fulfills its role as such a worthiness score in high-stakes fair rankings. In this paper, we combine perspectives and tools from the social sciences, information retrieval, and fairness in machine learning to derive a set of desired criteria that relevance scores should satisfy in order to meaningfully guide fairness interventions. We then empirically show that not all of these criteria are met in a case study of relevance inferred from biased user click data. We assess the impact of these violations on the estimated system fairness and analyze whether existing fairness interventions may mitigate the identified issues. Our analyses and results surface the pressing need for new approaches to relevance collection and generation that are suitable for use in fair ranking.

relevance, ranking, measurement theory, fair exposure, validity

+ Footnote †: journal: Information systems – Retrieval models and ranking; Human-centered computing \(\rightarrow\) ranking, fairness.
Relevance inferred from click data can violate some of the desiderata of relevance for fair ranking.
In particular, click-based relevance scores violate the criteria of availability and comparability - as the feedback is collected only for the top-ranked items (K formulations may not be known a priori, and in practice one would work with _estimates_ from some predictive model. In particular, in the systems we consider, these scores may be obtained from the ranking model itself.

### Probing Relevance as a Proxy for Worthiness

#### 2.3.1. Defining Worthiness

Fairness interventions seek to produce rankings that are more equitable, i.e., rankings that capture something not fully captured by inferred relevance. Several recent papers explicitly or implicitly refer to various notions of "merit" or "worthiness" as a basis for their intervention:

* "allocation of exposure based on merit (i.e., relevance)" (Friedman, 2017),
* "We define the merit of a document as a function of its relevance to the query [...] and we state that each document in the candidate set should get exposure proportional to its merit" (S judgments), of their measurement process (such as misspecifications in browsing model parameters (Steintein et al., 2017)), and the consequences thereof, have also primarily been studied in the context of ranking utility.

In the context of fairness, to address the unavailability of relevance scores, Kirnap et al. (Kirnap et al., 2019) proposed a methodology to estimate fairness metrics using incomplete relevance judgments. However, these approximations are useful for system evaluation, not for training algorithms that fairly allocate exposure. Similarly, fair ranking evaluation benchmarks rely on exhaustive annotations that are infeasible at the scale of real systems (Kirnap et al., 2019). In addition to the _incompleteness issues_, using relevance as a proxy for worthiness involves making certain implicit assumptions about their relationship: for example, that the relative ordering of items according to worthiness is consistent with the ordering by relevance.
Moreover, high variance or noisiness in relevance scores may also cause fairness measurements to be less robust or more arbitrary (Kirnap et al., 2019). It is essential to elucidate and test the assumptions of how the two concepts relate to each other if relevance is to be a _valid_ and _reliable_ candidate for approximating worthiness. Attending to the two-sided nature of search systems helps us further understand the way that values and positionality are embedded in relevance: relevance defined as utility to the searcher/consumer and relevance defined as utility to the items/providers can be in conflict in fair ranking systems (Kirnap et al., 2019; Kirnap et al., 2019). Moreover, search systems themselves may have additional goals as well, such as maximizing the time users spend on the platform. It is thus crucial to surface _whose judgment of worthiness a given relevance proxy represents_. Table 1 summarizes several common relevance measurement approaches and explores their potential issues as proxies of worthiness.

## 3. Case Study: Relevance from Clicks

In this paper, we focus on relevance inferred from clicks as a case study. An overview of the inference process is shown in Figure 1.1

Footnote 1: Code: [https://github.com/Aparna-B/FairRankingRelevance](https://github.com/Aparna-B/FairRankingRelevance)

### 3.1. Ranking with Click-Based Relevance

Unbiased learning-to-rank is a well-studied problem in information retrieval concerned with learning accurate rankings from biased training data (Kirnap et al., 2019; Kirnap et al., 2019; Kirnap et al., 2019; Kirnap et al., 2019; Kirnap et al., 2019). Some approaches are online ranking algorithms that involve interventions and collecting real-time user feedback (Kirnap et al., 2019; Kirnap et al., 2019; Kirnap et al., 2019). Many of these systems also rely on a counterfactual model of estimations to debias the ranking loss.
One common approach is to use Inverse Propensity Weighting or Score (IPS) (Steinteintein et al., 2017) to account for various user cognitive biases in standard ranking losses, such as position bias (Kirnap et al., 2019), trust bias (Kirnap et al., 2019), or selection bias (Kirnap et al., 2019). Other work proposes jointly training a ranking model and an examination propensity model (Kirnap et al., 2019; Kirnap et al., 2019). Here we introduce a prototypical learning-to-rank framework with click-based relevance, including the learning algorithms, inference methods, and evaluation metrics.

#### 3.1.1. From Clicks to Relevance via Browsing Models

Click or browsing models, derived from empirical studies of user information retrieval/search behavior, make it possible to simulate ranking systems, approximate item relevance, and evaluate search systems (Kirnap et al., 2019). The relevant-intent hypothesis states that an item is clicked if and only if it is relevant and examined, which is then used to infer the latent relevance using algorithms such as Inverse Propensity Scores. The main variables being modelled with different assumptions in click or user browsing models are the probability of examination of items in a ranked list (\(o_{r}\) is a binary variable indicating whether an item at rank \(r\) is examined or not) and the relevance of the items being ranked. Most browsing models parameterize the examination probability with known distributions identified using domain knowledge. When the browsing model changes, so do the probability of examination at a specific rank, \(p(o_{r}=1)\), and the model of user behaviour. As a high-level summary, a browsing model is a graphical model with two unseen nodes, examination and relevance, and an observable node of clicks. (For a comprehensive overview of click models, see (Kirnap et al., 2019).) In this paper, we use a position-biased browsing model of user behavior.
This model assumes that examination probability is constant given a specific rank \(r\). (We also note that browsing models are also used in other applications, such as debiasing user logs.)

#### 3.1.2. Unbiased Learning to Rank with Propensity-Weighting

The Propensity-Weighted ranking algorithm infers propensity weights to minimize a specific, pre-defined loss function \(l(f)\) and supervise a ranking model \(f\). In most ranking systems, the expected loss can be expressed as a sum of loss terms over relevant items in a list (or a pair of items). The ranking algorithm proposed by Joachims et al. (Joachims et al., 2017) uses Inverse Propensity Score (IPS) weighting to account for the biased nature of clicks (e.g., position bias as described in Section 3.1.1), and weights each item's loss correspondingly. In IPS weighting, each item's loss is inversely weighted with its examination probability if it is clicked.

Figure 1. Relevance inferred using searcher clicks. A searcher inputs a query, with a latent notion of relevance (\(R\)) of results. Then, they examine (\(E\)) and click (\(C\)) on the ranked results, thereby allocating exposure to the items. However, they may exhibit cognitive biases (such as examining and clicking on top-ranked items with higher probability). In order to infer relevance of items from these click logs, the data has to be _debiased_. Browsing models perform this operation by parameterizing the examination probability and relevance of ranked items, and then defining a model linking the click probability with these for a query and item. With such a model, the probability of an item being relevant can be estimated or relevance can be _inferred_. Then, to predict the relevance for an unseen query/item, a machine learning model is trained, with loss terms influenced by parameters from the browsing model (e.g., with propensity-weighting). Finally, the quality and fairness of exposure allocation is measured.
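To make the examination-and-click mechanism of Section 3.1.1 and the IPS correction of Section 3.1.2 concrete, the following self-contained sketch simulates position-biased clicks and then debiases the click rates by inverse propensity weighting. All parameter values and the \((1/r)^{\eta}\) propensity form are illustrative assumptions, not taken from the paper:

```python
import random

def simulate_clicks(relevances, eta=1.0, n_sessions=20000, seed=0):
    """Position-biased click simulation: the item at rank r (1-indexed) is
    clicked iff it is examined (probability (1/r)**eta) and judged relevant
    (probability equal to its true relevance). Returns click counts."""
    rng = random.Random(seed)
    clicks = [0] * len(relevances)
    for _ in range(n_sessions):
        for idx, rel in enumerate(relevances):
            rank = idx + 1
            if rng.random() < (1.0 / rank) ** eta and rng.random() < rel:
                clicks[idx] += 1
    return clicks

def ips_estimates(clicks, n_sessions, eta=1.0):
    """Debias raw click rates: each click is up-weighted by the inverse of
    the (assumed known) examination propensity at its rank."""
    return [c / n_sessions / (1.0 / (i + 1)) ** eta
            for i, c in enumerate(clicks)]

true_rel = [0.5, 0.5]                       # two equally worthy items
clicks = simulate_clicks(true_rel, n_sessions=20000)
print(clicks)                               # top item gets ~2x the clicks
print(ips_estimates(clicks, 20000))         # both estimates near 0.5
```

The raw counts reproduce the position bias (equally relevant items receive very different exposure), while the IPS-weighted rates recover the underlying relevance, which is the mechanism the propensity-weighted loss exploits.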
Note that only items that are clicked contribute to this IPS-weighted loss. In the next section, we describe how the ranking model \(f\) is trained and optimized. #### 3.1.3. Learning a Ranking Model This is the machine learning model \(f\) that predicts relevance used to produce a ranking. We consider models optimized with a propensity-weighted listwise loss using the outputs of deep neural network models, following Joachims et al. (Joachims et al., 2016). We optimize a list-wise softmax-based cross-entropy loss while training the ranking model (Beng et al., 2015; Chen et al., 2016). This loss is further weighted by propensity and optimized during model training, with best parameters chosen based on validation set performance. In the next section, we list evaluation metrics for assessing the utility of ranking systems. #### 3.1.4. Performance Evaluation Finally, the quality of the ranking algorithm is evaluated. Typically, the ranking produced by the model is compared against a ground truth ranking (assumed to be an unbiased gold standard). In this paper we focus on the Normalized Discounted Cumulative Gain (NDCG), a normalized measure of ranking quality favoring occurrence of more relevant documents in the higher ranking positions through a logarithmic discount (Zhu and Chen, 2017). ## 4. Desiderata of Relevance in Fair Ranking In this section, we describe a set of desiderata that the relevance scores ought to satisfy to be meaningful and useful for fairly allocating exposure. Drawing on domain knowledge about relevance-based ranking systems and the assumptions within, these desiderata offer a pathway towards _valid_ and _reliable_ measures of relevance, where the properties of proxy scores should match their theoretical ideal across a range of qualitative dimensions (Zhu and Chen, 2017). 
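For reference, the NDCG metric introduced in Section 3.1.4 can be sketched in a few lines. This version uses a linear gain with the standard logarithmic discount; some implementations use \(2^{rel}-1\) as the gain instead:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(rel / math.log2(r + 1)
               for r, rel in enumerate(relevances, start=1))

def ndcg(ranked_relevances):
    """NDCG: DCG of the given ranking normalized by the ideal (sorted) DCG."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 1]))        # ideal ordering -> 1.0
print(ndcg([1, 2, 3]) < 1.0)  # reversed ordering is penalized -> True
```

Because of the per-rank discount, misplacing a highly relevant document near the bottom of the list costs more NDCG than misplacing a marginal one.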
In the definitions below, let \(w\) denote true relevance (worthiness) scores, and \(r\) denote inferred relevance scores, i.e., the measurements of \(w\) that will be used to allocate exposure. \(K\) denotes the number of items to be ranked.

_Credibility_. For inferred relevance scores to be useful, they must behave as we expect. In a ranking setup, we can ask: do items with high true relevance (worthiness) have higher inferred relevance scores across runs? (And lower, lower?) Absence of credibility would imply that the inferred relevance scores are not capturing what the 'true' relevance scores would be expected to capture, and so lack face validity and content validity; they may even lack reliability across similar inputs to the model (Zhu and Chen, 2017). Thus, inferred relevance might be an unsuitable proxy under views of fairness such as "equal opportunity", where exposure should ideally be allocated in proportion to worthiness.

**Definition 4.1** (Credibility).: If \(i\) and \(j\) are two items to be ranked, credibility necessitates, in expectation: \[w_{i}\geq w_{j}\equiv r_{i}\geq r_{j} \tag{6}\]

_Consistency_. A core assumption in unbiased learning-to-rank is that of consistency: in the limit of sufficient data, the estimated expected relevance scores converge. Thus, meeting this statistical property necessitates the criterion: do the inferred relevance scores converge? If this criterion remains unmet, then fair ranking metrics relying on relevance may not converge either.

**Definition 4.2** (Consistency).: In the limit of sufficient data, the predicted scores converge within an acceptable error range. Let \(N\) denote the number of expected training steps (e.g., the size of the training set), and \(\hat{r}_{i}\) the predicted relevance score for item \(i\) at the \(N\)th training step.
If \(S_{n}=\frac{1}{K}\sum_{i=1}^{K}[(r_{i}^{n}-\hat{r}_{i})^{2}]\) measures the mean squared deviation between relevance scores obtained by training the relevance prediction model for \(n\) steps and \(\{\hat{r}_{i}\}_{i=1}^{K}\), and \(\epsilon\) is the acceptable error value (close to 0), then consistency requires: \[\exists N_{0}\text{ such that }S_{n}\leq\epsilon\ \forall n\geq N_{0} \tag{7}\] _Stability_. Relevance ought to have test-retest reliability, i.e., that measurements from the same model for the same input do not vary more than an expected limit (e.g., due to unavoidable stochasticity). In the ranking setup, the criterion to test then is: do the inferred relevance scores vary more than an established limit across runs with minor variations in stochastic parameters or initialization? This is related to robustness and how noisy estimates of fairness would be. **Definition 4.3** (Stability).: If \(n\) denotes the number of experimental runs with variations in unimportant parameters, and \(\epsilon\) is the acceptable variation across runs, then: \[\frac{1}{K}\sum_{j=1}^{K}[\frac{1}{n}\sum_{i=1}^{n}[(r_{ij}-\mu_{j})^{2}]]\leq\epsilon \tag{8}\] where \(\mu_{j}=\frac{1}{n}\sum_{i=1}^{n}r_{ij}\), and \(r_{ij}\) denotes relevance for the \(j\)th item under the \(i\)th run. _Comparability_. Several fairness interventions suggest that exposure should be allocated in proportion to inferred relevance (Kohn et al., 2016; Chen et al., 2016), with the underlying assumption that this proportionality would hold for true relevance (worthiness) of items/groups. Thus we must verify that the relative ratios of relevance scores match the corresponding ratios of true worthiness, i.e., that our model has structural validity--that these relative properties behave as expected (Zhu and Chen, 2017). For group fairness, the average relevance is taken as the aggregate across all items belonging to a group. For individual fairness, this criterion is evaluated at the per-item level. 
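To make Definitions 4.2 and 4.3 concrete, here is a minimal sketch (function names are ours, not from the paper) of the consistency statistic \(S_n\) of Eq. (7) and the run-variance statistic of Eq. (8):

```python
def consistency_stat(scores_at_n, scores_final):
    """S_n of Eq. (7): mean squared deviation between the scores after n
    training steps and the final-step scores."""
    K = len(scores_at_n)
    return sum((a - b) ** 2 for a, b in zip(scores_at_n, scores_final)) / K

def stability_stat(runs):
    """Eq. (8): per-item variance of scores across runs, averaged over K items.
    runs[i][j] is the relevance of item j under run i."""
    n, K = len(runs), len(runs[0])
    total = 0.0
    for j in range(K):
        mu_j = sum(runs[i][j] for i in range(n)) / n
        total += sum((runs[i][j] - mu_j) ** 2 for i in range(n)) / n
    return total / K
```

Consistency holds once \(S_n\) stays below \(\epsilon\) for all later training steps; stability holds when the run-averaged per-item variance stays below \(\epsilon\).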
\begin{table} \begin{tabular}{c c c} \hline \hline Worthiness Proxy & Positionality of Value & Potential Issues with Measurement \\ \hline \hline \end{tabular} \end{table} Table 1. Worthiness proxies used in ranking, the positionality of their values, and potential issues with their measurement. **Definition 4.4** (Comparability).: If \(i\) and \(j\) are two items or groups of items (e.g., defined on the basis of a sensitive attribute) to be ranked, comparability necessitates: \[\frac{w_{i}}{w_{j}}\propto t\equiv\frac{r_{i}}{r_{j}}\propto t \tag{9}\] _Availability._ Past research in Information Retrieval (IR) has established that searchers exhibit position-bias in viewing and inspecting ranked results (Krishnan et al., 2017). As a result, this variance in attention across different positions is modeled while allocating exposure for an item or group of ranked items. Then, exposure is allocated in proportion to uncorrected (i.e., without any bias-correction) relevance scores. Thus, an underlying assumption is that relevance scores are _available_ for all items, and are _unbiased_ estimates of true relevance (or worthiness). The availability criterion tests this assumption that unbiased inferred relevance scores are available for all items--without which it would be difficult to claim validity or reliability of relevance scores. 
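As an illustration of the group-level reading of the comparability criterion (Definition 4.4), the following sketch (ours; the 0.05 tolerance is an example threshold) checks whether the ratio of group-average relevance is preserved between true and inferred scores:

```python
def group_relevance_ratio(scores, groups):
    """Ratio of average relevance: group 0 over group 1."""
    totals = {0: [0.0, 0], 1: [0.0, 0]}
    for s, g in zip(scores, groups):
        totals[g][0] += s
        totals[g][1] += 1
    return (totals[0][0] / totals[0][1]) / (totals[1][0] / totals[1][1])

def comparable(true_scores, inferred_scores, groups, tol=0.05):
    """Group-level Definition 4.4: the inferred group-relevance ratio should
    track the true (worthiness) ratio within a tolerance."""
    return abs(group_relevance_ratio(true_scores, groups)
               - group_relevance_ratio(inferred_scores, groups)) <= tol
```

For individual-level comparability, the same idea is applied per item, e.g., via rank correlation between the two score lists.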
**Definition 4.5** (Availability).: If \(A\) is an indicator function that outputs \(1\) if its argument is defined and \(0\) if not, then: \[A(r_{i})=1\ \forall i\in\{1,2\ldots K\} \tag{10}\] Additionally, we define that for the availability property to be met, the distribution of \(w\) across items should be statistically indistinguishable from that of \(r\). Note that some of these properties are _inter-related_. For example, availability may be a necessary condition for comparability to be tested. Thus, it may be possible to design a fixed order in which these properties could be tested. ## 5. Experimental Setup We describe the setup for testing if the desiderata described in Section 4 are met in practice when relevance is inferred using searcher clicks. ### Data Datasets in our experiments consist of a list of items which are to be ranked in response to a query. Each item is associated with some features and a relevance label. We assume that the highest-utility ranking is one where items are arranged in decreasing order of these ground-truth relevance labels. Items also have labels denoting (protected) group membership. We split each dataset into 70-10-20 proportions as train, validation, and test splits. The details of each dataset are below and summarized in Table 2. #### 5.1.1. Synthetic Data To experiment under controlled conditions, we synthesize two datasets from a graphical model using the synthetic data generation methodology proposed by Yang et al. (Yang et al., 2017). In each case, the correlation Directed Acyclic Graph (DAG) consists of four attributes: continuous attributes for 2 features and a relevance label, and one binary group membership attribute. All three continuous attributes follow a Pareto or Normal distribution (see appendix for details). After sampling, we discretize all relevance scores to 5 grades uniformly based on the value, similar to standard learning-to-rank setups. 
We consider binary protected groups for all fairness analyses (e.g., men and women). We sample an equal number of items from both groups. We sample \(N\)=50,000 datapoints from both synthetic distributions. **Pareto distribution:** We sample relevance from a Pareto distribution with \(P(2.0,1.0)\). This dataset is referred to as synth-pareto. **Normal distribution:** Relevance is sampled from a normal distribution with mean and standard deviation set to \(\mu=2,\sigma=1\) respectively. This dataset is referred to as synth-normal. #### 5.1.2. FairTREC 2021 The FairTREC 2021 (Yao et al., 2020) dataset consists of Wikimedia articles ranked in response to text queries. In total, there are 50 train and 50 test queries. To align the setup of this dataset with our study, we choose a single query for analysis. We select a train-set query with at least 25,000 relevance annotations available (train query ID 6). Additionally, only positive relevance annotations are available in the FairTREC dataset. We make the assumption that all other Wikimedia articles in the corpus are not relevant to the query. Finally, we subsample the dataset to a similar size as the synthetic datasets to obtain a dataset with 50,000 items. Each item in this dataset thus consists of a query, article text, and a binary relevance judgment. To obtain a representation for ranking, we utilize a pre-trained checkpoint of a cross-encoder model trained on the ms-marco (Yao et al., 2020)2 task. Finally, we embed the high-dimensional representations into three dimensions using Principal Component Analysis (PCA). This is the final "item representation". Additionally, the geographical location metadata is utilized as the sensitive attribute for fair ranking. We binarize these categories into "majority" location (Europe only) and all others (including no available metadata). This dataset is severely imbalanced in terms of group membership, with a 90% majority in the group distribution. 
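A sketch of how the synthetic relevance sampling and 5-grade discretization might look (the inverse-CDF Pareto sampler and the uniform-width binning are our assumptions about details not fully specified above):

```python
import random

def sample_relevance(dist, n, seed=0):
    """Sample raw relevance values, then discretize uniformly into 5 grades (0-4)."""
    rng = random.Random(seed)
    if dist == "pareto":
        # Pareto(alpha=2.0, scale=1.0) via inverse-CDF sampling
        raw = [1.0 / (1.0 - rng.random()) ** (1.0 / 2.0) for _ in range(n)]
    else:  # "normal" with mu=2, sigma=1
        raw = [rng.gauss(2.0, 1.0) for _ in range(n)]
    lo, hi = min(raw), max(raw)
    width = (hi - lo) / 5 or 1.0  # guard against a degenerate sample
    return [min(int((x - lo) / width), 4) for x in raw]
```

Under a Pareto prior most mass falls in the lowest grade, while the Normal prior concentrates in the middle grades, matching the relevance distributions reported in Table 2.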
Footnote 2: [https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2) ### Simulation Protocol We experiment in a single-query setup without personalization. In each round, we generate synthetic clicks to simulate a set of users clicking on items based on the position-biased examination probability and relevance of each item to be ranked, using a position-biased click model (as specified in Sec. 3.1.1; with a positive click probability of 1, negative click probability of 0.1, and maximum relevance grade of 4). Then, the average click-through rate of each item is accumulated to compute its propensity and the ranking model is optimized with inverse-propensity weighted optimization (Sec. 3.1.2). In our experiments, we use pre-computed propensity weights obtained via result randomization (i.e., a pre-trained propensity model) for the position-biased click model3 while training the ranking model. The metric optimized in each step is the list-wise ranking loss with a selection-bias cutoff (set to 10). The final ranking system output is a list of items, ranked in decreasing order of expected utility to users. This is repeated for 500 iterations in total during training, and performance is evaluated, with models checkpointed every 50 iterations. \begin{table} \begin{tabular}{c c|c c c c c} \hline \hline Dataset & \(d\) & \multicolumn{5}{c}{Relevance Distribution, \% (4-highest)} \\ & & 0 & 1 & 2 & 3 & 4 \\ \hline synth-normal & 2 & 0.13 & 11.00 & 61.51 & 26.65 & 0.71 \\ synth-pareto & 2 & 86.78 & 11.39 & 1.58 & 0.21 & 0.04 \\ fairtrec & 3 & 50.00 & 50.00 & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 2. Dataset statistics. \(d\) denotes the number of features. Each column \(i\) indicates the relative frequency of relevance grade \(i\) in a given dataset (4: most relevant, 0: least relevant). Each dataset contains 50,000 samples. 
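The click simulation can be sketched with a standard position-based click model; the rank-based examination decay and the interpolation between the negative (0.1) and positive (1.0) click probabilities below are common choices, not necessarily the paper's exact formulation:

```python
import random

def simulate_clicks(relevances, pos_click=1.0, neg_click=0.1, max_grade=4,
                    eta=1.0, seed=0):
    """Position-based click model: an item is clicked only if it is examined
    (probability decaying with rank) and judged attractive given its grade."""
    rng = random.Random(seed)
    clicks = []
    for rank, rel in enumerate(relevances, start=1):
        p_examine = (1.0 / rank) ** eta
        # interpolate click probability between neg_click (grade 0)
        # and pos_click (max grade)
        p_click = neg_click + (pos_click - neg_click) * (2 ** rel - 1) / (2 ** max_grade - 1)
        clicks.append(rng.random() < p_examine * p_click)
    return clicks

def ips_weight(rank, eta=1.0):
    """Inverse propensity of examination at this rank: clicked items at deep
    ranks are up-weighted to correct for position bias."""
    return 1.0 / ((1.0 / rank) ** eta)
```

In the IPS-weighted loss of Sec. 3.1.2, only clicked items contribute, each scaled by `ips_weight` of the rank at which it was shown.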
Model loss is optimized with stochastic gradient descent with a learning rate of 0.01.4 Performance on the test set is evaluated with the best model checkpoint (based on validation performance). This process is repeated with 10 random seeds, and averaged for all metrics. Note that there are several sources of randomness in this setup, including the items that are clicked, the initialization of parameters in the ranking model, and sampling of data batches during training. Footnote 4: We validated that an Adagrad optimizer produced similar train loss. ## 6. Experiments and Results In this section, we seek to answer the question: To what extent can click-based relevance be used to guide fair exposure allocation? ### Studying Properties of Relevance We empirically test if the desired criteria for relevance are met in practice on three datasets. We assume that the true relevance labels are proportional to true worthiness, and we refer to ground truth relevance judgment labels as true relevance, and inferred relevance labels as predicted relevance. We note that both the tests for operationalizing the formal criteria (see Section 4) and the thresholds for meeting these criteria are themselves choices. For example, credibility can be measured using the mean deviation between the inferred relevance scores and true relevance scores for point-estimates, or by measuring the overall Spearman correlation between them (which may be closer to the rank ordering definition). Further, to say that a criterion is met, one may set acceptance thresholds, e.g., that a given criterion might be met only if a specific correlation is greater than 0.5. Thus we emphasize that meeting these criteria may be a continuum, not a binary outcome. Summary of findings is in Table 3. Figure 4. Distributions of \([0,1]\)-normalized inferred and true relevance scores: the inference process does not preserve distributions. Figure 3. Consistency of predictions. 
Relevance consistently converges, as measured using mean squared error (MSE). Figure 2. Distributions of inferred vs true graded relevance scores. Items with high true relevance judgments must have high inferred relevance for the _credibility_ criterion to be met. We observe this to be true on average (i.e., ordering of medians is monotonic, though there is considerable overlap between inferred scores for different relevance grades). #### 6.1.1. Credibility We study if relevance inferred from clicks under this prototypical setup is credible. Specifically, we test if the top-items across multiple runs of the learning-to-rank system (i.e., different seeds) correspond to top items (i.e., the ordering is retained). We construct boxplots showing the distribution of relevance scores (with the full list size being the size of the test set; all softmax scores within a given ranking for ease of interpretation) for each relevance judgment label grade. Ideally, we expect that items with higher relevance grades would occupy lower ranks by having high inferred relevance scores. We assert that the relevance scores are not credible if the population medians corresponding to different relevance grades are not significantly different at a level of 0.05. **Results.** From Figure 2, we observe that for increasing values of true relevance judgments, the medians of the inferred relevance scores for items are monotonically increasing for all the datasets. With a Kruskal-Wallis H test, we observe that these medians are significantly different (\(p<0.001\)). This lets us conclude that the relevance scores are indeed credible across runs for all three datasets. However, post-hoc tests may be required to test if medians of specific groups (e.g. those with relevance grades of 2 vs 3) are significantly different. #### 6.1.2. Consistency We measure the mean squared error between the predicted relevance scores at iteration \(i\) and the final iteration (i.e., \(i=500\)) for \(i=10,20,30\ldots 490\). 
Note that we use the logit-score predictions, before the _softmax_ operation. These scores are computed on the validation rather than test set (since the test set is unseen throughout training). Then, we assess if this mean squared error converges. Based on the definition for consistency (see Section 4), we need to choose an \(\epsilon\) value to assess consistency. Here, we set \(\epsilon=0.1\). **Results.** We observe that the mean squared error converges to the final value for all datasets, though with some noise in fairtrec. As a result, the consistency criterion is satisfied for all datasets on the validation set. (Consistency is also met on the train set.) #### 6.1.3. Stability We measure the deviation in inferred relevance values across multiple runs. We compute the average standard deviation in relevance measurement of a specific item across the ten random seeds, and average this value across all items. To ensure that all values can be compared across datasets, we scale predicted scores for each item in a given list by subtracting the mean and dividing by the standard deviation, both derived from the predictions for all items in that list. **Results.** Among the synthetic datasets, we find that the mean variation is between 0.17 and 0.18. Since all relevance scores have been normalized to unit variance, this corresponds to less than one standard deviation of the full relevance distribution. The results for the fairtrec dataset show similar trends (variation scores \(\sim\)0.41). As a result, we conclude that the relevance scores are reasonably stable for all datasets. #### 6.1.4. Comparability We compute two metrics to assess the comparability criterion: (1) the Spearman correlation between predicted and true relevance scores for all items, (2) the ratio between average relevance scores for groups under true and predicted relevance scores. 
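The first comparability metric, the Spearman correlation between predicted and true relevance, can be computed from average ranks as below (a dependency-free sketch; `scipy.stats.spearmanr` is the usual library route):

```python
def _ranks(values):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

The second (group-ratio) metric is the ratio of group-average relevance under each scoring, compared between true and predicted scores.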
We assess that the criterion is met if the correlation is \(\geq 0.3\) from (1) and the difference in ratios from (2) is not more than 0.05. Note that (1) is closely tied to individual fairness since it considers predictions for all items. On the other hand, (2) is directly consequential to group-level exposure fairness: specifically, exposure is allocated in proportion to expected group relevance (either predicted or true) as per several fair ranking formulations (Friedman, 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). **Results.** The Spearman correlation between true and predicted relevance scores is 0.09 (0.09 for subgroups) for synth-normal. For synth-pareto, the Spearman correlation between true and predicted relevance scores is 0.23 (0.20 and 0.26 for the female and male subgroups respectively). This indicates that the comparability of relevance scores is better for one subgroup here. In contrast, the trends observed on the fairtrec dataset are different: the correlation is 0.27 for all items (subgroup correlations of 0.27 and 0.29). The ratios between normalized average relevance for subgroups using true relevance scores are close to 1 for both synthetic datasets (by construction). Ratios between average inferred group-relevance scores are also close to 1.00 for synth-normal and synth-pareto respectively. The difference for fairtrec is similar, with the two ratios being 1.04 and 1.02 with true and inferred relevance respectively. Thus, the target for exposure ratios may change slightly on using true vs predicted relevance scores. In this case, as per the criteria set, the individual-level comparability criterion is not satisfied for all datasets. The comparability criteria related to group fairness are met on all datasets. #### 6.1.5. Availability We analyze the distribution of inferred relevance scores and compare them to the distribution of true relevance labels. 
**Results.** By design, since a machine learning model can make predictions for all items, the continuous relevance score predictions are available for all items. However, when we visualize the histograms of predicted and true relevance (plots in Figure 4), we observe that the nature of the distributions is not retained. We computed the two-sample Kolmogorov-Smirnov test (Zalaloza-Friedman, 2017) for goodness of fit, with the null hypothesis being that the two samples - true and inferred relevance scores - arise from the same distribution. We find that the null hypothesis can be rejected in all cases with \(p\ll 0.001\). As a result, we conclude that the availability criterion is not satisfied for at least a subset of ranked items. This makes sense, as the available "worthiness" judgements here are graded, while the predictions are continuous. ### Impact of Violating Criteria in Practice While relevance is the construct measured in ranking systems, the user-facing output is a ranked list of items (which is a function of the inferred relevance). Hence, system evaluations form an important part of optimizing ranking systems. We evaluate ranking systems along two axes: (1) quality or utility of ranking, (2) fairness of ranking. We measure the quality of rankings using the NDCG@10 score for all datasets, and obtain scores of 0.90 (a near-perfect score)5, 0.29, and 0.26 for fairtrec, synth-pareto, and synth-normal respectively. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & Credibility & Consistency & Stability & Comparability & Availability \\ \hline fairtrec & ✓ & ✓ & ✓ & ✗ & ✗ \\ synth-normal & ✓ & ✓ & ✓ & ✗ & ✗ \\ synth-pareto & ✓ & ✓ & ✓ & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 3. Empirical tests of relevance score desiderata on three datasets. In this table, we summarize whether the five desired criteria are empirically met in the datasets. 
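The two-sample Kolmogorov-Smirnov statistic used in the availability test is the maximum gap between the two empirical CDFs; a minimal sketch (in practice `scipy.stats.ks_2samp` also supplies the p-value):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

Applied to the true (graded) and inferred (continuous) relevance scores, a large statistic with a small p-value rejects the hypothesis that the two come from the same distribution, as observed above.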
To test if violating the criteria matters in practice, we measure exposure fairness using the metrics defined in Sec. 2, all computed either using true, _graded_ or inferred _continuous_ relevance scores on the test set. From Table 4b and Table 4a, we observe that fairness assessments may vary depending on whether true or inferred relevance scores are used, and the size of the difference is often dataset-dependent. In particular, the difference does not seem particularly stark for the synthetic datasets. However, on computing Wilcoxon signed-rank tests paired by the random seed (which influences the stochastic initialization of the prediction model), the exposure fairness metrics are significantly different for all datasets. Similar trends are observed for individual fairness: the differences are statistically significant, with pairing performed at the item level (i.e., 10,000 items). We observe that these results are sensitive to the normalization of relevance scores. Thus, fairness metrics vary depending on whether true or predicted relevance scores are used. Notably, the comparability criterion is also violated (especially for individual fairness) on all three datasets (Section 6.1). Analyzing the degree to which a criterion is violated may yield interesting insights. For example, the difference in ratios of group-level relevance assessed during the comparability evaluation is generally small for the synthetic datasets (Sec. 6.1.4), as is the difference in group-level exposure fairness (Table 4b). Thus, the difference between fairness metrics using true and predicted relevance may be closely tied to the comparability criterion of predicted scores. ### Mitigating Measurement Issues In this section, we highlight examples of conditions and interventions that might mitigate relevance measurement issues. #### 6.3.1. Mechanisms: Fairness Interventions We study a post-hoc fairness processing setup with two algorithms proposed by Geyik et al. (2017). 
Both fair re-ranking algorithms (DetCons, DetConstSort) re-arrange the top-k items in a ranked list such that the distribution of groups matches a user-specified distribution in a greedy manner (see Geyik et al. (2017) for details). With a group-fairness lens, we set the desired proportion of a group among the top-k items to be the average relevance of items in that group divided by the sum of average relevance of items for all groups. We study the impact of these interventions on estimated group-level exposure fairness. Do the interventions _mitigate_ the measurement issues in fairness? From Table 4c, the fairness interventions are successful in reducing system unfairness (ratios are closer to 1 than the pre-intervention scores in Table 4b). In some of the six cases (two algorithms, three datasets), the re-ranking algorithm modifies the system fairness to be indistinguishable from the pre-intervention value with true relevance scores (e.g., the fairtrec+DetCons experiment, and the synth-pareto+DetConstSort experiment). We observe that results are sensitive to the top "k" chosen for re-ranking. Thus, fairness interventions may be a mechanism to reduce the error in fairness assessment, but the impact may vary based on the algorithm. #### 6.3.2. Data: Imbalanced Groups During synthetic data generation (Sec. 5.1), we ensured equal group sample sizes. While class imbalance is not explicitly a component of the measurement model of relevance, it is often an artefact of ranking system design, and therefore could affect system fairness evaluation. Here, we consider various levels of data imbalance in the datasets, and study the resultant effects on exposure fairness: does imbalanced training data mitigate or hide the effects of relevance desiderata violations? We simulate imbalance ranging from 50-50% (i.e., no imbalance) to 90-10% (i.e., high imbalance) by subsampling the synthetic datasets to 25,000 points. 
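The imbalance simulation can be sketched as a group-stratified subsample (function and parameter names are ours):

```python
import random

def subsample_imbalanced(items, groups, majority_frac, total, seed=0):
    """Subsample to `total` items, with `majority_frac` drawn from group 0
    and the remainder from group 1."""
    rng = random.Random(seed)
    g0 = [x for x, g in zip(items, groups) if g == 0]
    g1 = [x for x, g in zip(items, groups) if g == 1]
    n0 = int(round(total * majority_frac))
    return rng.sample(g0, n0) + rng.sample(g1, total - n0)
```

Sweeping `majority_frac` from 0.5 to 0.9 reproduces the 50-50% through 90-10% imbalance conditions described above.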
For both datasets, we observe that the degree of imbalance impacts the difference between exposure fairness metrics as computed with true and predicted relevance: the difference ranges from about \(-0.10\) to \(0.12\), and about \(-0.03\) to \(0.00\) for synth-pareto and synth-normal respectively. A negative value here implies that the exposure fairness assessed using the predicted relevance score has a lower value than that using true relevance. When fairness measurements here are greater than 1 (true for most of the imbalance ratios here), this implies that the system appears more fair when predicted relevance is used. Thus, at specific imbalance ratios, the system may appear _more fair_ than it is. In summary, group sample size imbalance may impact the inferred relevance scores and thus system fairness. ## 7. Future Perspectives for IR A turn towards examining fairness (Krizhevsky et al., 2017) and equity (Krizhevsky et al., 2017) in ranking has inspired new research agendas in IR. Meanwhile, attending to the two-sided nature of ranking platforms has broadened the desiderata of fair ranking systems (Krizhevsky et al., 2017; Geyik et al., 2017; Geyik et al., 2018). Past works propose interventions to fairly allocate opportunities and avoid exacerbating social biases based on some latent, complex notion of worthiness (Han et al., 2017; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2020; Goyal et al., 2021; Goyal et al., 2021). Ultimately these interventions rely on _relevance_ as a convenient proxy to allocate exposure. Our work points to a pressing need for information retrieval to engage with research on relevance in the context of fair ranking. **Critically assessing limitations of relevance as a proxy for worthiness.** Relevance can be justified as a proxy for worthiness; however, this justification must be done by meaningfully establishing the validity and reliability of relevance in a given setting. 
For the goal of fair ranking, this requires showing that the properties of the relevance scores and resulting rankings are aligned with the intended fairness goals. In our paper, we demonstrate an example of validating several such properties. Concrete use cases (e.g., hiring, marketplaces) and intended normative goals should guide the acceptable limitations of relevance. The properties proposed in this paper were conceptualized based on extensive discussions between the authors of this paper using interdisciplinary lenses of machine learning, sociotechnical systems, measurement theory, and information retrieval. Based on the application at hand, some of these properties may be more important than others. We also emphasize that additional properties may still need to be identified and tested for in various use-cases. Still, we believe that the proposed framework of elucidation followed by examination of properties offers a step towards assessing if a specific proxy (such as relevance) can be reliably used as a target for fair ranking. In concurrent work with an alternate view, Schumacher et al. (Schumacher et al., 2019) derive desired properties for group fairness metrics and highlight similar nuances of metrics that rely on relevance, further validating our findings. In contrast, we derive desiderata for _relevance_ to guide fair exposure allocation. **Defining 'worthiness' for different ranking application domains.** Different theoretical conceptualizations of worthiness will correspond to different normative goals and theories of justice. For instance, one could distinguish between the fairness of process (where the worthiness score might encode the merit of the ranked subjects) and the fairness of consequence (where the worthiness score might correspond to the utility to a selected stakeholder of a subject being top-ranked). 
The design goals of a system will also reflect their application domains: a marketplace of online sellers might think differently about worthiness than a hiring platform. Precisely connecting different conceptualizations of worthiness to how they are operationalized in fair ranking systems will aid researchers, developers, and auditors to enhance system equity. **New methods for obtaining worthiness scores.** Beyond the need for new definitions, this paper points to the importance of considering how worthiness is operationalized in the context of fair ranking as well as revealing the need for new operationalizations. Potential approaches might vary from finding better proxies, developing new methods for direct worthiness crowdsourcing, finding new ways of accounting for annotator biases, calibrating predictions of relevance from browsing models, or proposing new approaches that go beyond existing IR relevance methodologies. ## 8. Conclusions Ranking systems mediate opportunity in a variety of high impact settings (Goyal et al., 2020). In this paper, we leveraged an interdisciplinary perspective to show how fair ranking systems may fail to engage with the intended goals of fair ranking, relying instead on a tenuous assumption of relevance as worthiness. Drawing on domain knowledge about relevance-based fair ranking, we derived a set of criteria that relevance must satisfy in order to be a _valid_ and _reliable_ target to guide fair exposure allocation: credibility, consistency, stability, comparability, and availability. Using click-based relevance as a case study, we tested if these criteria can be empirically met on three datasets. We observe that a subset of these criteria may be unfulfilled in practice. Beyond click-based relevance, similar criteria will have to be tested for other forms of relevance estimation (e.g., crowdsourced relevance judgments). 
The contributions of our work are in establishing a measurement theory-based framework for thinking about the role of relevance in fair exposure allocation settings. More broadly, our work highlights the need for novel approaches to generate and collect relevance scores in a valid, reliable manner in fair ranking settings. Together, this reveals several open problems in relevance measurement at scale for fair exposure allocation. ###### Acknowledgements. This paper is part of the FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) project that received funding from the European Union's Horizon Europe research and innovation program under grant agreement No 101070212.
2310.19097
Direct stellarator coil design using global optimization: application to a comprehensive exploration of quasi-axisymmetric devices
Many stellarator coil design problems are plagued by multiple minima, where the locally optimal coil sets can sometimes vary substantially in performance. As a result, solving a coil design problem a single time with a local optimization algorithm is usually insufficient and better optima likely do exist. To address this problem, we propose a global optimization algorithm for the design of stellarator coils and outline how to apply box constraints to the physical positions of the coils. The algorithm has a global exploration phase that searches for interesting regions of design space and is followed by three local optimization algorithms that search in these interesting regions (a ``global-to-local" approach). The first local algorithm (phase I), following the globalization phase, is based on near-axis expansions and finds stellarator coils that optimize for quasisymmetry in the neighborhood of a magnetic axis. The second local algorithm (phase II) takes these coil sets and optimizes them for nested flux surfaces and quasisymmetry on a toroidal volume. The final local algorithm (phase III) polishes these configurations for an accurate approximation of quasisymmetry. Using our global algorithm, we study the trade-off between coil length, aspect ratio, rotational transform, and quality of quasi-axisymmetry. The database of stellarators, which comprises approximately 200,000 coil sets, is available online and is called QUASR, for `QUAsi-symmetric Stellarator Repository'.
Andrew Giuliani
2023-10-29T18:18:56Z
http://arxiv.org/abs/2310.19097v2
Direct stellarator coil design using global optimization: application to a comprehensive exploration of quasi-axisymmetric devices ###### Abstract Many stellarator coil design problems are plagued by multiple minima, where the locally optimal coil sets can sometimes vary substantially in performance. As a result, solving a coil design problem a single time with a local optimization algorithm is usually insufficient and better optima likely do exist. To address this problem, we propose a global optimization algorithm for the design of stellarator coils and outline how to apply box constraints to the physical positions of the coils. The algorithm has a global exploration phase that searches for interesting regions of design space and is followed by three local optimization algorithms that search in these interesting regions (a "global-to-local" approach). The first local algorithm (phase I), following the globalization phase, is based on near-axis expansions and finds stellarator coils that optimize for quasisymmetry in the neighborhood of a magnetic axis. The second local algorithm (phase II) takes these coil sets and optimizes them for nested flux surfaces and quasisymmetry on a toroidal volume. The final local algorithm (phase III) polishes these configurations for an accurate approximation of quasisymmetry. Using our global algorithm, we study the trade-off between coil length, aspect ratio, rotational transform, and quality of quasi-axisymmetry. The database of stellarators, which comprises almost 140,000 coil sets, is available online and is called QUASR, for 'QUAsi-symmetric Stellarator Repository'. ## 1 Introduction Stellarator coil design is typically framed as an optimization problem, where the objective function targets charged particle confinement, other physics properties, and engineering requirements on the electromagnetic coils. 
Many of these problems are non-convex and have multiple local minima with a large variability in performance of the discovered designs. One remedy of this problem is to use stochastic optimization (Glas _et al._, 2022; Wechsung _et al._, 2022), which appears to reduce the variability of the discovered minima. Another approach is to use global optimization algorithms that attempt to fully explore the range of allowable stellarator coil designs. (Glas _et al._, 2022) combines both remedies by searching for global optima of a stochastic objective function. In this work, we propose a novel technique for applying global optimization algorithms to deterministic coil design problems plagued by many local minima. Our coil design workflow is decomposed into three phases, wrapped in a globalization procedure (Figure 1) to _automatically_ design a large database of vacuum field stellarators for various design targets. During phase I, the first algorithm finds initial coil sets with nested magnetic surfaces in the neighborhood of a magnetic axis by using the near-axis expansion formalism (Giuliani _et al._, 2022). During phase II, the second algorithm takes these coil sets and expands the region of nested flux surfaces with a good approximation of quasi-axisymmetry (QA) (Giuliani _et al._, 2023). Finally, during phase III, the third algorithm polishes these coil sets for precise QA (Giuliani _et al._, 2022). We compare two approaches to globalization. The first, somewhat naive approach attempts to find a global minimum of the objective by perturbing initial guesses provided to a local optimization algorithm. The disadvantage is that if the perturbation is too large, then the optimizer might be sent to an uninteresting region of parameter space. Conversely, if the perturbation is too small, the optimizer might not sufficiently explore the design space. Despite this downside, it can yield good results with some tuning (Wechsung _et al._, 2022). 
A second, much less ad hoc approach relies on the global optimization algorithm called TuRBO (Eriksson _et al._, 2019). For this algorithm, the user must provide lower and upper bounds on the design variables, known as box constraints. This algorithm has been used before in coil design problems (Glas _et al._, 2022), but our approach is notably different as we reformulate our optimization problem to accept constraints on the geometry of the coils in physical, rather than Fourier, space. Since we are generating a large data set of stellarators, we also study the trade-offs between multiple competing design targets, which is the aim of multi-objective optimization. Recently, multi-objective optimization was applied to stellarator coil design (Bindel _et al._, 2023), where a first order continuation algorithm to construct a local Pareto front was developed. The algorithm was based on a Taylor expansion of the optimality condition that points on the Pareto front satisfy. In this work, we do not use this continuation approach but nevertheless study the trade-offs in stellarator coil design using our database. To summarize our contributions, we propose a simple way to incorporate box constraints on the physical positions of stellarator coils so that global optimization algorithms can be applied to the first phase of the coil design procedure. Following this first phase are two volume QA algorithms which target nested flux surfaces and precise QA. Scanning over various physics design targets, we have compiled a comprehensive database of almost 140,000 stellarator devices, and examined the trade-off between competing design objectives (quality of quasisymmetry, total coil length, rotational transform, and device aspect ratio). A subset of the devices have comparable quasisymmetry to highly optimized configurations in (Giuliani _et al._, 2023). We would also like to point out some limitations of this work. 
First, the globalization is _only_ applied to the near-axis coil design algorithm, and not to the volume QA phases of coil design. Because of this, we may miss out on performant stellarators should precise first-order near-axis QA in phase I not correlate with precise volume QA in phases II and III. Second, this database only contains curl-free stellarators with optimized quasi-axisymmetry, though our algorithms are generic and apply to other flavors of quasisymmetry. These vacuum-field devices can also be useful, as they might be used as initial guesses during an optimization as plasma pressure is progressively increased (Boozer, 2019). Third, there may be duplicate devices in the database and possible mechanisms for this are outlined in Section 5. Finally, there are specialized algorithms for visualizing the Pareto front, which we do not use here. ## 2 Phase I: stellarators optimized for quasisymmetry on the magnetic axis The goal of the first optimization problem in phase I is to find an initial set of coils that has nested flux surfaces and produces a good approximation of quasisymmetry, at least locally to the magnetic axis. 
To this end, we recall here the optimization problem in (Giuliani _et al._, 2022_a_) that computes coils based on near-axis expansions: \[\min_{\mathbf{c},\mathbf{I},\mathbf{a},\boldsymbol{\sigma},\iota_{a}, \overline{\eta}} \hat{f}_{\mathrm{axis}}(\mathbf{c},\mathbf{I},\mathbf{a},\boldsymbol{ \sigma},\overline{\eta},\iota_{a})\] (1a) subject to \[\mathbf{g}(\mathbf{a},\boldsymbol{\sigma},\overline{\eta}, \iota_{a}) =0\text{ on axis}, \tag{1b}\] \[\iota_{a} =\iota_{\mathrm{target}},\] (1c) \[L_{i} =L_{\mathrm{target}},\] (1d) \[\kappa_{i} \leqslant\kappa_{\mathrm{max}},\ i=1,\ldots,N_{c},\] (1e) \[\frac{1}{L_{c}^{(i)}}\int_{\boldsymbol{\Gamma}^{(i)}}\kappa_{i}^ {2}\ dl \leqslant\kappa_{\mathrm{msc}},\ i=1,\ldots,N_{c},\] (1f) \[\|\boldsymbol{\Gamma}^{(i)}-\boldsymbol{\Gamma}^{(\mathrm{axis})}\| \geqslant d_{\mathrm{min}},\] (1g) \[\|\boldsymbol{\Gamma}^{(i)}-\boldsymbol{\Gamma}^{(j)}\| \geqslant d_{\mathrm{min}}\ \ \text{for}\ i\neq j,\] (1h) \[\|\boldsymbol{\Gamma}^{\prime(i)}\|-L^{(i)} =0\ \ \text{for}\ i=1,\ldots,N_{c}, \tag{1i}\] where \[\begin{split}\hat{f}_{\mathrm{axis}}(\mathbf{c},\mathbf{I}, \mathbf{a},\overline{\eta},\boldsymbol{\sigma},\iota_{a})&=\int _{\mathrm{axis}}(\|\mathbf{B}(\mathbf{c},\mathbf{I},\mathbf{a})\|-B_{ \mathrm{QS}}(\mathbf{a}))^{2}dl\\ &+\int_{\mathrm{axis}}\|\nabla\mathbf{B}(\mathbf{c},\mathbf{I}, \mathbf{a})-\nabla\mathbf{B}_{\mathrm{QS}}(\mathbf{a},\overline{\eta}, \boldsymbol{\sigma},\iota_{a})\|^{2}dl,\end{split} \tag{2}\] and \(\mathbf{B},\nabla\mathbf{B}\) are the magnetic field and its gradient generated by the coils, \(\mathbf{B}_{\mathrm{QS}},\nabla\mathbf{B}_{\mathrm{QS}}\) are the magnetic field and its gradient, which is quasisymmetric to first order for the given magnetic axis shape. The vector of coil degrees of freedom are stored in \(\mathbf{c}\in\mathbb{R}^{3N_{c}(2N_{f,c}+1)}\). The \(x\), \(y\), \(z\) positions of the \(N_{c}\) base modular coils are each represented using a Fourier series with \(N_{f,c}\) modes. 
The electromagnetic coils also have an associated current stored in the vector \(\mathbf{I}\in\mathbb{R}^{N_{c}-1}\), where the dimensionality is one less than the number of coils because the first coil's current is fixed to prevent the currents from approaching zero. The stellarator-symmetric magnetic axis is represented using a Fourier series of the \(R\) and \(Z\) positions of the magnetic axis, represented in cylindrical coordinates. In total, this results in \(2N_{f,a}+1\) degrees of freedom associated with the magnetic axis, stored in \(\mathbf{a}\in\mathbb{R}^{2N_{f,a}}\), where the dimensionality is one less than the number of Fourier coefficients of the axis since we require that the mean radius of the magnetic axis be \(1\,\mathrm{m}\). Finally, \(\iota_{a}\in\mathbb{R}\) is the on-axis rotational transform, and \(\overline{\eta}\in\mathbb{R}\backslash\{0\}\) is a scalar that describes how much the magnetic field strength varies on surfaces in the neighborhood of the axis. Given \(\mathbf{a}\) and \(\overline{\eta}\), the vector \(\boldsymbol{\sigma}\in\mathbb{R}^{2N_{f,a}}\) and on-axis rotational transform \(\iota_{a}\) are fully determined via the constraint \(\mathbf{g}\) in (1b), which is a periodic Riccati equation. The data \(\iota_{a}\), \(\overline{\eta}\), \(\boldsymbol{\sigma}\) fully define the gradient of the target quasi-symmetric magnetic field on axis. The full device is obtained by applying discrete rotational symmetry with \(n_{\mathrm{fp}}\) field periods and stellarator symmetry to \(N_{c}\) base modular coils. Figure 1: Current workflow wraps phase I (the near-axis expansion algorithm) in globalization subroutines to generate interesting initial guesses for the subsequent volume QA optimizations in phases II and III. 
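To make the parameter count above concrete, here is a minimal sketch (the mode numbers are illustrative; \(\boldsymbol{\sigma}\) and \(\iota_{a}\) are not counted because they are eliminated through the constraint (1b)):

```python
def near_axis_dof_count(n_coils, n_f_coil, n_f_axis):
    """Free parameters of problem (1), following the text."""
    coil_dofs = 3 * n_coils * (2 * n_f_coil + 1)  # x, y, z Fourier series per base coil
    current_dofs = n_coils - 1                    # first coil current is fixed
    axis_dofs = 2 * n_f_axis                      # one fewer than 2*N_fa + 1: mean axis radius fixed at 1 m
    eta_bar_dofs = 1                              # the scalar eta-bar
    return coil_dofs + current_dofs + axis_dofs + eta_bar_dofs

# e.g. 4 base coils with N_fc = 6 modes each and an axis with N_fa = 6 modes:
total_dofs = near_axis_dof_count(4, 6, 6)  # 156 + 3 + 12 + 1 = 172
```

This is only bookkeeping, but it makes clear why the exploration phase of a global optimizer must search a space with hundreds of dimensions even for modest mode numbers.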
Using the implicit function theorem, we minimize the reduced objective \[f_{\rm axis}({\bf c},{\bf I},{\bf a},\overline{\eta})=\hat{f}_{\rm axis}({\bf c},{\bf I},{\bf a},\overline{\eta},\mathbf{\sigma}({\bf a},\overline{\eta}),\iota_{a}({\bf a},\overline{\eta})) \tag{3}\] by eliminating \(\mathbf{\sigma},\iota_{a}\) via the constraint (1b). Thus, this constraint is satisfied exactly. The constraint in (1c) ensures that a stellarator is found with the target on-axis rotational transform \(\iota_{\rm target}\). The constraint in (1d) ensures that each electromagnetic coil has the same length \(L_{\rm target}\). The constraints in (1e) and (1f) ensure that the maximum curvature and mean squared curvature do not exceed \(\kappa_{\rm max}\), \(\kappa_{\rm msc}\), respectively. The constraints in (1g), (1h) prevent the inter-coil and coil-to-axis distances from decreasing below \(d_{\rm min}\). Finally, the constraint in (1i) ensures that the coil incremental arclength stays uniform. Constraints (1c)-(1i) are enforced using a penalty method, and are satisfied to 0.1% precision. A local minimum of (1) is a set of electromagnetic coils that produce a magnetic field that is close to the quasisymmetric magnetic field on a magnetic axis. This optimization problem was solved using local quasi-Newton optimization methods in (Giuliani _et al._, 2022; Wechsung _et al._, 2022), where the analytical gradient of the reduced objective \(f_{\rm axis}\) was obtained using both forward sensitivities and adjoint approaches. We always use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method in phase I of the workflow. In the next two sections, we both illustrate the need for globalization when solving (1) and outline two different techniques to accomplish this. 
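The penalty treatment of constraints (1c)-(1i) can be sketched as follows. This is a toy quadratic-penalty example with a stand-in objective and constraints, not the paper's implementation, and the penalty weight is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def penalized(x, f, eq_cons, ineq_cons, w=1e3):
    """Quadratic penalty: add w*g(x)^2 for each equality g(x) = 0 and
    w*max(h(x), 0)^2 for each inequality h(x) <= 0."""
    val = f(x)
    val += w * sum(g(x) ** 2 for g in eq_cons)
    val += w * sum(max(h(x), 0.0) ** 2 for h in ineq_cons)
    return val

# toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1 and x0 <= 0.8;
# the exact constrained minimizer is x = (0.5, 0.5)
res = minimize(
    penalized,
    x0=np.zeros(2),
    args=(lambda x: x @ x, [lambda x: x[0] + x[1] - 1.0], [lambda x: x[0] - 0.8]),
    method="BFGS",
)
```

As in the paper's workflow, larger weights enforce the constraints more tightly at the cost of a stiffer unconstrained problem for the quasi-Newton method.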
### Naive globalization of phase I A first naive approach to globalization is to perturb the geometry and currents of flat electromagnetic coils, as well as the parameters of the magnetic axis, to generate a set of initial starting points. These starting points are used to initialize phase I, where the local gradient-based optimization algorithm BFGS is used to find a local minimum of the objective in the neighborhood of the perturbed initial guess. The physics and engineering quantities that we target are \(\iota_{\rm target}=0.9\) using \(n_{\rm fp},n_{\rm coils\;per\;hp}=2\), \(d_{\rm min}=0.1\,\)m, \(\kappa_{\rm max}=5\,\)m\({}^{-1}\), \(\kappa_{\rm msc}=5\,\)m\({}^{-2}\), and the target coil length \(L_{\rm target}\) is varied between 40 and 80 meters. For each value of coil length, we solve (1) with distinct initial guesses 16 times. The initial guesses are obtained by perturbing the Fourier coefficients of initially flat coils with normally distributed noise with zero mean and standard deviation \(\epsilon\). One has substantial freedom to decide how many Fourier coefficients to perturb, as well as the standard deviation of the perturbation and how it decays. If \(\epsilon\) is too large then the local optimizer might be sent into an uninteresting region of coil parameter space. If \(\epsilon\) is too small, it might not fully explore the set of feasible coil designs. For the experiment here, we perturb the currents, \(\overline{\eta}\), and only the first two Fourier harmonics of the coils and magnetic axis. Extensive (but tedious and computationally intensive) tuning revealed that the standard deviation of the noise that produced the best results was \(\epsilon=0.01\,\)m. We find that the quality of the minima found depends strongly on the choice of \(\epsilon\), the number of Fourier modes that are perturbed, and that there is no way to know a priori that this value works well for other stellarator designs. 
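A minimal sketch of this restart strategy (array shapes are illustrative; in the text only the currents, \(\overline{\eta}\), and the first two harmonics of the coils and axis are perturbed):

```python
import numpy as np

def perturbed_initial_guesses(flat_coils, n_restarts, eps=0.01, n_modes_perturbed=2, seed=0):
    """Generate restart points by adding N(0, eps^2) noise to the first
    few Fourier harmonics of each coil.
    flat_coils: (n_coils, 3, n_modes) Fourier coefficients of flat coils."""
    rng = np.random.default_rng(seed)
    guesses = []
    for _ in range(n_restarts):
        g = flat_coils.copy()
        g[:, :, :n_modes_perturbed] += rng.normal(0.0, eps, size=g[:, :, :n_modes_perturbed].shape)
        guesses.append(g)
    return guesses

flat = np.zeros((4, 3, 7))  # 4 base coils, x/y/z, 7 harmonics each (illustrative)
starts = perturbed_initial_guesses(flat, n_restarts=16)
```

Each element of `starts` would then seed one BFGS run of phase I; the higher harmonics are left untouched, mirroring the tuning freedom discussed above.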
In Figure 2, we plot the trade-off between the on-axis quasisymmetry error and total coil length. We find, as expected, that lower quasisymmetry errors are attainable with longer coils. It is clear that making no attempt at globalization to solve this optimization problem might result in a sub-optimal coil set. The problem of multiple minima becomes more pronounced for devices with longer coil lengths because there are more intricate ways for the coils to arrange themselves. Finally, at around 50 m, there is a device that appears as an outlier with remarkably low near-axis quasisymmetry error. This hints that the naive approach is missing interesting solution branches. ### TuRBO globalization of phase I An alternative to the naive approach is to use a global optimization algorithm. The coil set it finds is then given as the starting point for the local gradient-based optimization algorithm BFGS in phase I. This global-to-local approach was explored in (Glas et al., 2022), where stochastic global optimization of coil geometries was done using the DTuRBO algorithm (Padidar et al., 2021), an extension of the gradient-free global optimization algorithm TuRBO (Eriksson et al., 2019) that takes into account gradients. The TuRBO algorithm solves the minimization problem \[\begin{split}\min_{\mathbf{x}\in\mathbb{R}^{N}}& f( \mathbf{x})\\ \ell_{i}&\leqslant x_{i}\leqslant u_{i}\ \ \text{for}\ i=1 \dots N,\end{split} \tag{4}\] where \(N\) is the dimension of the optimization problem, and \(\ell_{i},u_{i}\) are respectively lower and upper bounds on the components of the control variables. That is, the algorithm attempts to find the global minimum of \(f(\mathbf{x})\) on an \(N\)-dimensional hyper-rectangle. The TuRBO algorithm completes a preliminary exploration phase by quasi-uniformly sampling the hyper-rectangle to find interesting areas of parameter space. Then, it uses trust-region based Bayesian optimization algorithms to further refine the solution. 
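The global-to-local structure of (4) can be sketched with plain random sampling standing in for TuRBO's trust-region Bayesian search (the objective below is a toy multimodal function, not a coil objective):

```python
import numpy as np
from scipy.optimize import minimize

def global_to_local(f, lower, upper, n_samples=500, seed=0):
    """Sample the box [lower, upper] uniformly, then polish the best sample
    with a bound-constrained local optimizer (a crude stand-in for TuRBO's
    exploration phase followed by trust-region refinement)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    samples = rng.uniform(lower, upper, size=(n_samples, lower.size))
    best = min(samples, key=f)                       # exploration: best sampled point
    res = minimize(f, best, method="L-BFGS-B",       # exploitation: local polish
                   bounds=list(zip(lower, upper)))
    return res.x, res.fun

# toy objective with several local minima on a 2-D box
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + 0.3 * np.sin(8.0 * x[0])
x_star, f_star = global_to_local(f, [-2.0, -2.0], [2.0, 2.0])
```

A purely local method started from an arbitrary point can get stuck in one of the shallow sine-induced basins; the exploration step makes landing in the best basin far more likely.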
It is difficult to define sensible lower and upper bounds in (4) on the unknowns when they are the Fourier coefficients of the coils. There should be some decay in the size of the boxes with increasing mode number, but it is unclear how this decay should be chosen. In (Glas et al., 2022), manufacturing errors were projected onto a Fourier basis, from which reasonable bounds on the Fourier coefficients were inferred. In this work, we propose a different way of determining the box constraints for deterministic coil design problems, though our technique is generic enough to be used in the stochastic context as well. Figure 2: On the left, we plot the on-axis quasisymmetry error with respect to the total coil length used, where we observe that there are multiple minima. On the right, we plot four distinct local minima when the total coil length is \(49.81\,\mathrm{m}\), where the colors of the coils correspond to those of the squares on the left. The coil sets’ associated magnetic axes are not plotted to avoid cluttering in the image. ### Bounding boxes It is straightforward to constrain the coil currents to \(-1\leqslant\mu_{0}I_{i}\leqslant 1\), where \(\mu_{0}=4\pi\times 10^{-7}\) is the magnetic constant. The parameter \(\overline{\eta}\) can also be constrained by \(0\leqslant\overline{\eta}\leqslant 2\). We still use a Fourier series to represent the coils and axis, but change the degrees of freedom from Fourier coefficients to spatial coordinates, linked to each other by the discrete Fourier transform. In this way, physically meaningful box constraints can be provided to TuRBO. 
These positions, or anchor points, in cylindrical coordinates, \((r_{i},\theta_{i},z_{i})\), are constrained to the box \([0,1+R_{\rm minor}]\times[\theta_{i}-\Delta\theta/2,\theta_{i}+\Delta\theta/2 ]\times[-R_{\rm minor},R_{\rm minor}]\), where \(R_{\rm minor}=L_{\rm target}/2\pi\), and \(\Delta\theta=\rho(2\pi/2n_{\rm fp}n_{\rm coils\ per\ hp})\), \(\theta_{i}=(2\pi/2n_{\rm fp}n_{\rm coils\ per\ hp})(i+1/2)\), \(R_{\rm minor}\) is the radius of the perfectly circular coil with length \(L_{\rm target}\), \(\Delta\theta\) defines a cylindrical sector that the coil can occupy, and the \(\rho\) multiplier ensures that the coil bounding boxes overlap somewhat. Randomly sampling \(2N_{f,c}+1\) points in the box will frequently generate coils with complex geometries (Figure 3, left). These geometries are not particularly useful and we would like TuRBO to avoid wasting time in these uninteresting areas of parameter space. These configurations can be avoided, while still using the same anchor points, by re-indexing them so that the points are ordered counterclockwise about their barycenter in the \(RZ\) plane (Figure 3, center). This unravels the coils. Finally, these coil coordinates are converted to Cartesian coordinates, projected onto a Fourier basis, then used to evaluate the near-axis objective (3). We also generate \((R_{i},\phi_{i},Z_{i})\) positions, or anchor points, for the magnetic axis, where \((R_{i},Z_{i})\) are constrained to the box \([1-1/(1+n_{\rm fp}^{2}),1+1/(1+n_{\rm fp}^{2})]\times[-0.2,0.2]\), and \(\phi_{i}\) is a uniformly spaced cylindrical angle spanning one field period \(2\pi/n_{\rm fp}\). The bounding boxes for \(R_{i}\) and \(Z_{i}\) are informed by the fact that quasi-axisymmetric magnetic axes have zero helicity. Helicity is an integer associated with the axis geometry that measures how many poloidal transits the axis normal vector makes as the axis traces one toroidal revolution. 
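The re-indexing step described above can be sketched as follows (toy anchor points; the paper's pipeline then Fourier-transforms the unraveled points before evaluating (3)):

```python
import numpy as np

def unravel(anchors_rz):
    """Order anchor points counterclockwise about their barycenter in the
    RZ plane, removing the self-crossings of randomly sampled coil nodes.
    anchors_rz: (n_points, 2) array of (R, Z) coordinates."""
    bary = anchors_rz.mean(axis=0)
    angles = np.arctan2(anchors_rz[:, 1] - bary[1], anchors_rz[:, 0] - bary[0])
    return anchors_rz[np.argsort(angles)]

# a deliberately tangled traversal of four points on a diamond
tangled = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
untangled = unravel(tangled)  # now visits the points in angular order
```

The axis anchor points are treated separately, since the axis must additionally have zero helicity.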
It was shown in (Rodriguez _et al._, 2022) that magnetic axes of the form \[R(\phi) =1+a\cos(n_{\rm fp}\phi),\] \[Z(\phi) =b\sin(n_{\rm fp}\phi),\] have zero helicity when \(a<1/(1+n_{\rm fp}^{2})\). The situation is more delicate if additional Fourier harmonics are used to represent the axis (as we do here), but in the case of a single dominant harmonic, this bound on \(a\) is a well-informed estimate. The box associated with \(Z_{i}\) was chosen to prevent large excursions from the \(Z=0\) plane, though the size of this box was somewhat arbitrary and other values might have been used. There is one final detail that ensures that the resulting axis curve is stellarator symmetric. After sampling \(N_{f,a}+1\) points from the bounding box associated with the radial coordinates \(R_{i}\), they are arranged in an array as follows: \[[R_{0}-s,R_{1}-s,\ldots,R_{N_{f,a}}-s,R_{N_{f,a}}-s,\ldots,R_{1}-s],\] where \(s\) is a shift to ensure the array has mean 1. After the shift, the radial positions of the magnetic axis may lie slightly outside the original bounding box. Similarly, we sample \(N_{f,a}\) points from the bounding box associated with the vertical coordinates \(Z_{i}\), and arrange them in an array as follows: \[[0,Z_{1},\ldots,Z_{N_{f,a}},-Z_{N_{f,a}},\ldots,-Z_{1}].\] The stellarator-symmetric \(R\) and \(Z\) harmonics of the magnetic axis are obtained by discrete Fourier transform of these arrays and provided to the near-axis objective (3). The bounding boxes of different coils can overlap each other and all the bounding boxes of the coils necessarily overlap that of the magnetic axis, since we do not search for windowpane coils. Thus, it is possible for the generated coils to be linked, or for the coils to not be linked with the magnetic axis. In addition, it is still possible for the magnetic axis sampled from the bounding boxes to have nonzero helicity. 
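The mirrored arrays above can be sketched as follows (the sample values are illustrative):

```python
import numpy as np

def symmetric_axis_arrays(R, Z):
    """Build the stellarator-symmetric sample arrays described in the text.
    R: N_fa + 1 sampled radial values; Z: N_fa sampled vertical values
    (a leading zero is prepended). The shift s makes the R array have mean 1."""
    R, Z = np.asarray(R, float), np.asarray(Z, float)
    r_arr = np.concatenate([R, R[:0:-1]])         # [R0, ..., RN, RN, ..., R1]
    r_arr -= r_arr.mean() - 1.0                   # subtract the shift s
    z_arr = np.concatenate([[0.0], Z, -Z[::-1]])  # [0, Z1, ..., ZN, -ZN, ..., -Z1]
    return r_arr, z_arr

r_arr, z_arr = symmetric_axis_arrays([1.1, 0.9, 1.05], [0.1, -0.05])
```

A discrete Fourier transform of these arrays then yields the stellarator-symmetric \(R\) and \(Z\) harmonics. Even after this symmetrization, sampled coils may still be linked with each other or miss the axis, and the axis may still have nonzero helicity.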
It is for this reason that we add penalty terms to the objective \(f_{\mathrm{axis}}\) that avoid these undesirable configurations: \[f(\mathbf{c},\mathbf{I},\mathbf{a},\overline{\eta})=f_{\mathrm{axis}}( \mathbf{c},\mathbf{I},\mathbf{a},\overline{\eta})+w\biggl{[}\sum_{i}\sum_{j>i }l_{i,j}(\mathbf{c})+\sum_{i}|l_{i,\mathrm{axis}}(\mathbf{c},\mathbf{a})-1|+h (\mathbf{a})\biggr{]}, \tag{5}\] where we abuse notation here and use \(\mathbf{c},\mathbf{a}\) to represent the vector of anchor points on the coils and axis, respectively. These positions can be converted to Fourier coefficients using a discrete Fourier transform. The value \(l_{i,j}\) is the absolute value of the linking number between coils \(i\) and \(j\), and \(l_{i,\mathrm{axis}}\) is the absolute value of the linking number between coil \(i\) and the magnetic axis. For the linking number calculation, we use the implementation in (Bertolazzi _et al._, 2019) since it returns a linking number with certified accuracy by tracking rounding errors in the calculation. Finally, \(h(\mathbf{a})\) is the absolute value of the axis helicity. The weight \(w=10^{7}\) is chosen to make the terms that it multiplies greater than \(f_{\mathrm{axis}}\). The penalty terms multiplied by \(w\) are zero when the design variables describe a stellarator with unlinked coils that each wrap once poloidally around a magnetic axis with zero helicity. Since the objective is no longer differentiable due to the linking number and helicity calculation, we use TuRBO rather than DTuRBO. We also designed stellarators using an axis bounding box independent of \(n_{\mathrm{fp}}\), \((R_{i},Z_{i})\in[0.85,1.15]\times[-0.15,0.15]\), and neglecting to include the helicity penalty. In a post-processing step, devices that had an axis with nonzero helicity were discarded. This approach risks discarding devices and wasting computational resources but it is capable of producing comparable results. 
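The structure of (5) can be sketched with the linking-number and helicity computations supplied as plain inputs (the paper uses the certified linking-number implementation of Bertolazzi et al.; here they are just numbers):

```python
def penalized_near_axis_objective(f_axis_val, coil_links, axis_links, axis_helicity, w=1e7):
    """Assemble (5): f_axis plus w times the linking and helicity penalties.
    coil_links[i][j]: |linking number| of coils i and j (only j > i is read);
    axis_links[i]:    |linking number| of coil i with the axis (should be 1);
    axis_helicity:    |helicity| of the magnetic axis (should be 0)."""
    n = len(coil_links)
    penalty = sum(coil_links[i][j] for i in range(n) for j in range(i + 1, n))
    penalty += sum(abs(l - 1) for l in axis_links)
    penalty += axis_helicity
    return f_axis_val + w * penalty

# feasible device: unlinked coils, each wrapping the axis once, zero axis helicity
ok = penalized_near_axis_objective(0.02, [[0, 0], [0, 0]], [1, 1], 0)
# infeasible device: two linked coils, one of which misses the axis
bad = penalized_near_axis_objective(0.02, [[0, 1], [0, 0]], [1, 0], 0)
```

With \(w=10^{7}\), any violation dominates the near-axis term, so the gradient-free search is pushed back toward feasible configurations.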
### Illustration of TuRBO globalization followed by phase I To illustrate the global-to-local procedure, we examine the TuRBO globalization procedure followed by the phase I optimization for a single stellarator. We use a batch size of 100 and restrict TuRBO to a budget of 15,000 function evaluations, of which 1,000 are used in the initial quasi-random exploration phase. TuRBO's internal Gaussian process model is only trained using the most recent 1,000 function evaluations for speed. Finally, the underlying Fourier discretization uses \(N_{f,c}=N_{f,a}=2\) for the coils and axis during this global exploration phase. The coil set and magnetic axis from the global search are used as the initial guess to the BFGS algorithm to polish the solution. During this phase, the number of Fourier modes used is increased to \(N_{f,c}=N_{f,a}=6\). Before BFGS obtains a good enough approximation to the inverse Hessian, especially during the first few iterations, the line search might accept a large enough step such that the coils become interlocked or unlinked with the magnetic axis. To avoid this, we use a linking number calculation to trigger the line search. Figure 3: The randomly generated coil in cylindrical coordinates on the left is too complex. Re-indexing the nodes by ordering them about their barycenter in the \(RZ\) plane, the coil untangles in the center figure. On the right, a set of coils, the magnetic axis, and their associated bounding boxes in physical space are illustrated. The full stellarator is not shown, but can be obtained by applying symmetries. In Figure 4 we provide the objective function evaluations over the course of the TuRBO globalization and phase I of Figure 1. The TuRBO phase of the run has large variations in the objective function, varying over several orders of magnitude. When the objective is larger than \(10^{7}\), this likely corresponds to a device that has an axis with nonzero helicity, linked coils, or coils unlinked with the axis. 
TuRBO quickly finds an interesting region of parameter space and it could have used a smaller budget for the initial function evaluations. The notable jump in the objective function value around 11,000 function evaluations occurs when TuRBO can no longer make progress in the trust region, so it is discarded and another one is initialized. The best configuration found by TuRBO is provided in Figure 4 and labeled Design 4.A. The coil set at the end of phase I (Design 4.B) has changed substantially from the one obtained after the TuRBO phase. Figure 4: The value of the objective function when applying TuRBO then BFGS. For the TuRBO phase, the objective values are evaluations of (5), which includes the non-differentiable penalties, and ‘iteration #’ is actually the number of function evaluations. For the BFGS phase, the objective values are evaluations of (2), and ‘iteration #’ refers to the objective function values after an accepted line search. The red line is the best function evaluation so far. On the right, we plot the best stellarator found by TuRBO, labeled \(\blacksquare\) Design 4.A, provided as the initial guess to BFGS in phase I. Then, the stellarator found by BFGS after increasing the number of Fourier modes is given by \(\blacksquare\) Design 4.B. ### Comparison of devices from TuRBO and naive globalization after phase I In this section, we compare the performance of TuRBO with the more naive approach by solving (1) for many target coil length values when \(\iota=0.9\), \(n_{\mathrm{fp}},n_{\mathrm{coils\ per\ hp}}=2\). To this end, we execute the TuRBO globalization followed by phase I, as illustrated in Section 2.3 and Figure 4. In both the naive and TuRBO approaches, the workflow is executed 16 times per target coil length. The quasisymmetry errors of the resulting configurations are provided in Figure 5. TuRBO finds devices that outperform those found by the naive approach, and we plot two devices from each algorithm, called Designs 5.A and 5.B. We find that Design 5.B is from a genuinely different solution branch. To drive this home, using it as an initial guess we solve (1) for multiple different target coil lengths. This solution branch persists on both sides, and was evidently even missed by TuRBO. This highlights the difficulty of finding global minima. With very modest input from the practitioner, TuRBO does a good job of localizing performant minima. We find that the choice of the overlap factor \(\rho\) only modestly affected the quality of the minimizers found. In contrast, one has to decide on how many Fourier modes to perturb and the size of each perturbation to use with the naive approach. This can have a large effect on the quality of the minimizers found. ## 3 The near-axis coil sets obtained after phase I We solve (1) multiple times with the combinations of the following design targets \[\begin{split}\iota_{\mathrm{target}}&=0.1,0.2,0.3, \ldots,0.9,\\ L_{\mathrm{target}}&=4.5,4.75,5,\ldots,8.5,8.75,9,\\ n_{\mathrm{coils\ per\ hp}}&=1,2,3,4,\ldots,13,\\ n_{\mathrm{fp}}&=1,2,3,4,5,\end{split} \tag{1}\]
If one fixes the total coil length used but increases the number of coils per half-period, this can drastically improve the on-axis quasisymmetry error. This is illustrated in Designs 6.A and 6.B plotted on the bottom row of Figure 6. Both stellarators use the same length of coil (54 m), but have 1 and 2 coils per half-period respectively, so shorter, more numerous coils are preferred. The coils in Design 6.B also interleave with one another, in a similar fashion to the coils in Figure 5. Figure 5: On the left, we provide the trade-off between axis quasisymmetry error and total coil length used for local minima found using the naive approach and TuRBO with \(\iota=0.9\), and \(n_{\mathrm{fp}},n_{\mathrm{coils\ per\ hp}}=2\). We have plotted two coil sets, called Designs 5.A and 5.B, corresponding respectively to the red and green square markers. Using Design 5.B as an initial guess, we solve (1) for different target coil lengths, resulting in the devices corresponding to the blue crosses. The axis QA error refers to the terms in (2). It appears that we can find configurations with a good approximation of on-axis quasisymmetry for all values of \(n_{\mathrm{fp}}\). Moreover, for a fixed coil length, increasing the number of field periods seems to improve the on-axis quasisymmetry error, though the coil-to-axis distance decreases. As we will observe in the next section, this observation does not bear out when optimizing for quasisymmetry on a volume. The generated coil sets can produce nested flux surfaces, but sometimes only over a small volume in the neighborhood of the magnetic axis, making them a good set of initial conditions for the next phase of coil optimization. ## 4 Phase II and III: stellarators optimized for quasisymmetry on a volume Taking coil sets optimized for near-axis quasisymmetry obtained during phase I, we now optimize for quasisymmetry on a volume with varying aspect ratios during phases II and III.
The objective that we minimize is the sum of the average (normalized) quasisymmetry error and the Boozer residuals on the surfaces \(\boldsymbol{\mathit{\Sigma}}_{s,k}\): \[\begin{split}\hat{f}_{\mathrm{surface}}(\mathbf{c},\mathbf{I},\mathbf{s}_{1},\ldots,\mathbf{s}_{N_{s}},\,\boldsymbol{\iota},\,\mathbf{G})&:=\frac{1}{N_{s}}\sum_{k=1}^{N_{s}}\biggl\{\frac{\int_{\boldsymbol{\mathit{\Sigma}}_{s,k}}B_{\mathrm{non-QA}}(\mathbf{c},\mathbf{s}_{k})^{2}\ dS}{\int_{\boldsymbol{\mathit{\Sigma}}_{s,k}}B_{\mathrm{QA}}(\mathbf{c},\mathbf{s}_{k})^{2}\ dS}\\ &+\frac{1}{2}w\int_{0}^{1}\int_{0}^{1/n_{\mathrm{fp}}}\|\mathbf{r}_{k}(\mathbf{s}_{k},\iota_{k},G_{k},\mathbf{c},\mathbf{I})\|^{2}\ d\varphi\ d\theta\biggr\},\end{split} \tag{1}\] where \(w>0\) is a weighting parameter and the ratio \(B_{\mathrm{non-QA}}/B_{\mathrm{QA}}\) is defined in (Giuliani _et al._, 2022_b_). Figure 6: The near-axis QA error with respect to total coil length used when \(\iota=0.5\). Each point on the scatter plot corresponds to a different device, each panel corresponds to a different number of field periods, and the color refers to the number of coils per half period used. Below, we plot two configurations (Designs 6.A and 6.B) that use the same total coil length but a different number of coils, 1 and 2 coils per half period, respectively. The axis QA error refers to the terms in (2). From the optimized magnetic axis, we use a first-order near-axis expansion
to generate an initial surface geometry in the neighborhood of the axis with minor radius \(r=0.05\,\mathrm{m}\): \[\boldsymbol{\Sigma}_{s,1}(\varphi,\theta)=\boldsymbol{\Gamma}^{\mathrm{axis}}(\varphi)+rX_{1}(\varphi,\theta)\mathbf{n}(\varphi)+rY_{1}(\varphi,\theta)\mathbf{b}(\varphi), \tag{10}\] where \(\mathbf{n},\mathbf{b}\) are the normal and binormal vectors associated to the Frenet frame of the magnetic axis, \(X_{1},Y_{1}\) are defined in (Landreman _et al._, 2019; Landreman, 2023), and \(r\) is a radial distance from the axis. Using this initial guess of a surface, our goal is to find a set of coils that solves the following optimization problem: \[\min_{\mathbf{c},\mathbf{I},\mathbf{s}_{1},\ldots,\mathbf{s}_{N_{s}},\,\boldsymbol{\iota},\,\mathbf{G}}\,\hat{f}_{\mathrm{surface}}(\mathbf{c},\mathbf{I},\,\mathbf{s}_{1},\ldots,\mathbf{s}_{N_{s}},\boldsymbol{\iota},\mathbf{G})\] (11a) subject to \[\mathbf{g}_{k}(\mathbf{c},\mathbf{I},\mathbf{s}_{k},\iota_{k},G_{k})=0\] for \[k=1,\ldots,N_{s}\], (11b) \[\iota_{\mathrm{target}}-\frac{1}{N_{s}}\sum_{k=1}^{N_{s}}\iota_{k} =0, \tag{11c}\] \[R_{\mathrm{major}}(\mathbf{s}_{N_{s}}) =1,\] (11d) \[\sum_{i=1}^{N_{c}}L_{i} \leqslant L_{\mathrm{max}},\] (11e) \[\kappa_{i} \leqslant\kappa_{\mathrm{max}},\,\,i=1,\ldots,N_{c},\] (11f) \[\frac{1}{L_{c}^{(i)}}\int_{\boldsymbol{\Gamma}^{(i)}}\kappa_{i}^{2}\,\,dl \leqslant\kappa_{\mathrm{msc}},\,\,i=1,\ldots,N_{c},\] (11g) \[\|\boldsymbol{\Gamma}^{(i)}-\boldsymbol{\Gamma}^{(j)}\| \geqslant d_{\mathrm{min}}\,\,\,\,\mathrm{for}\,\,i\neq j,\] (11h) \[\|\boldsymbol{\Gamma}^{\prime(i)}\|-L^{(i)} =0\,\,\,\,\mathrm{for}\,\,i=1,\ldots,N_{c}. \tag{11i}\] Analogously to the first optimization problem solved, the vector of coil degrees of freedom is stored in \(\mathbf{c}\in\mathbb{R}^{3N_{c}(2N_{f,c}+1)}\). The \(x\), \(y\), \(z\) positions of the \(N_{c}\) base modular coils are each represented using a Fourier series with \(N_{f,c}\) modes.
The electromagnetic coils also have an associated current stored in the vector \(\mathbf{I}\in\mathbb{R}^{N_{c}-1}\), where the dimensionality is one less than the number of coils because the first coil's current is fixed, preventing the currents from approaching zero. Each surface has \(n_{s}\) degrees of freedom associated to it, stored in \(\mathbf{s}_{k}\in\mathbb{R}^{n_{s}}\). The surfaces' rotational transforms \(\iota_{k}\) and scalars \(G_{k}\) are stored in \(\boldsymbol{\iota},\mathbf{G}\in\mathbb{R}^{N_{s}}\). The constraint \(\mathbf{g}_{k}\) is a system of equations that relates the coil and surface degrees of freedom. The constraint is a discretization of the partial differential equation \(\mathbf{r}_{k}(\mathbf{s}_{k},G_{k},\iota_{k})=0\), where the residual is \[\mathbf{r}_{k}(\mathbf{s}_{k},G_{k},\iota_{k}):=G_{k}\frac{\mathbf{B}}{\|\mathbf{B}\|}-\|\mathbf{B}\|\left(\frac{\partial\boldsymbol{\Sigma}_{s,k}(\mathbf{s}_{k})}{\partial\varphi}+\iota_{k}\frac{\partial\boldsymbol{\Sigma}_{s,k}(\mathbf{s}_{k})}{\partial\theta}\right). \tag{12}\] The inputs to the PDE are the electromagnetic coils, and the outputs are the surface degrees of freedom \(\mathbf{s}_{k}\), \(G_{k}\), and the rotational transform \(\iota_{k}\). The dependence of the residual on the electromagnetic coils is hidden here because, for a given surface solve, they are fixed. We have devised two approaches to solve \(\mathbf{r}_{k}(\mathbf{s}_{k},G_{k},\iota_{k})=0\) based on a spectral collocation method. The approach used during phase II, called BoozerLS surfaces, is useful for regimes where nested flux surfaces do not exist, so the PDE is solved in a least squares sense. In this case, (11b) is the optimality condition of the inner least squares optimization problem solved by BFGS. The approach used during phase III, called BoozerExact surfaces, is useful for regimes where nested flux surfaces do exist and we would like to polish the quality of QA.
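In miniature, driving a residual to zero for fixed coils is a root-finding problem. The sketch below uses Newton's method on a scalar stand-in residual; the actual solve is for the vector Boozer residual above on a spectral grid, so this is only a caricature of the inner iteration.

```python
import math

# Toy version of the inner solve: drive a residual r(s) to zero with
# Newton's method. r(s) = s - cos(s) is a scalar stand-in for the
# Boozer residual; the real problem is vector-valued.
def newton_solve(residual, d_residual, s0, tol=1e-12, max_iter=50):
    s = s0
    for _ in range(max_iter):
        r = residual(s)
        if abs(r) < tol:
            break
        s -= r / d_residual(s)      # Newton update: s <- s - r / r'
    return s

s_star = newton_solve(lambda s: s - math.cos(s),
                      lambda s: 1.0 + math.sin(s),
                      0.5)
```

In the workflow above, every evaluation of the outer coil objective triggers such an inner solve for each surface, with the coils held fixed.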
The residual (12) is evaluated on a grid of collocation points in \((\varphi,\theta)\), where the number of collocation points is the same as the number of degrees of freedom that specify the surface. In this case, (11b) is the vector of residual evaluations at those collocation points, which we require to be zero. This approach works well in the regime of nested flux surfaces, which is not always guaranteed. In both approaches, constraint (11b) allows us to compute the surface degrees of freedom \(\mathbf{s}_{k},\iota_{k},G_{k}\) in terms of the electromagnetic coils \(\mathbf{c},\mathbf{I}\). Using the implicit function theorem, we minimize the reduced objective \[f_{\text{surface}}(\mathbf{c},\mathbf{I})=\hat{f}_{\text{surface}}(\mathbf{c},\mathbf{I},\mathbf{s}_{1}(\mathbf{c},\mathbf{I}),\ldots,\mathbf{s}_{N_{s}}(\mathbf{c},\mathbf{I}),\boldsymbol{\iota}(\mathbf{c},\mathbf{I}),\mathbf{G}(\mathbf{c},\mathbf{I}))\] by eliminating \(\mathbf{s}_{k}\), \(\boldsymbol{\iota}\), \(\mathbf{G}\) via the constraint (11b). Thus, this constraint is satisfied exactly, and is either a discretized PDE solved in least squares form (BoozerLS) for robustness and nested flux surfaces (Giuliani _et al._, 2023), or using a collocation method (BoozerExact) for accurate quasisymmetry (Giuliani _et al._, 2022_b_). Constraint (11c) ensures that the mean rotational transform on the volume is the target value \(\iota_{\text{target}}\). Constraint (11e) ensures that the sum of coil lengths per half-period is less than \(L_{\text{max}}=n_{\text{coils per hp}}L_{\text{target}}\), where \(L_{\text{target}}\) is the coil length used in the near-axis optimization problem (1). This is notably different from before, where each coil was constrained to have the same length \(L_{\text{target}}\). Constraint (11d) ensures that the major radius of the outermost surface is 1.
Constraints (11f) and (11g) ensure that the maximum coil curvature and mean squared curvature do not exceed \(\kappa_{\text{max}}\) and \(\kappa_{\text{msc}}\), respectively. Constraint (11h) ensures that the minimum intercoil distance does not decrease below \(d_{\text{min}}\). Finally, constraint (11i) ensures that the coils have uniform incremental arclength. Constraints (11c)-(11i) are enforced using a penalty method, and are satisfied to 0.1% precision. We always use the BFGS quasi-Newton method in phases II and III of the workflow. ### Illustration of phase II and III In this section, we give more details on the final two phases of the coil design algorithm to illustrate how it works. The configurations obtained from the near-axis formulation (phase I) only attempt to find coils that target quasisymmetry on the magnetic axis. The formulation does not control anything about the magnetic field away from the axis, and as a result magnetic surfaces may not exist on a large volume (Lee _et al._, 2022). BoozerLS surfaces (phase II) are robust and capable of healing generalized chaos (Giuliani _et al._, 2023), though they can be computationally expensive. Finally, the BoozerExact surfaces (phase III) polish the coil set for precise quasi-axisymmetry and are computationally much cheaper. The algorithm for phase II is given in Algorithm 1. The first step of the algorithm is to use the magnetic axis obtained from phase I to compute an approximate magnetic surface from (10). Then, the optimization problem (11) is solved, but we do not attempt to fully converge the coil sets and only complete a total of 300 iterations of BFGS. Each time BFGS is restarted, an attempt is made to increase the number of Fourier modes used to represent the surface. Progressively increasing the number of Fourier modes makes the procedure more robust, as we find self-intersecting surfaces are less likely to occur.
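The penalty enforcement mentioned above can be sketched on a toy problem: minimize \(x^2\) subject to \(x \geq 1\), with a quadratic penalty whose weight grows until the constraint violation is small. The paper runs BFGS on the full coil problem; plain gradient descent stands in here, and all numbers are illustrative.

```python
# Quadratic-penalty sketch: minimize x^2 subject to x >= 1 by minimizing
# x^2 + mu * max(0, 1 - x)^2 for an increasing sequence of weights mu.
def penalized_descent(mu, x, steps=2000):
    lr = 0.4 / (1.0 + mu)               # step size safe for curvature 2(1 + mu)
    for _ in range(steps):
        viol = max(0.0, 1.0 - x)        # amount by which x >= 1 is violated
        grad = 2.0 * x - 2.0 * mu * viol  # gradient of the penalized objective
        x -= lr * grad
    return x

x = 0.0
for mu in [1.0, 10.0, 100.0, 1000.0]:   # tighten the penalty progressively
    x = penalized_descent(mu, x)        # warm-start from the previous solution
```

Each outer step leaves a residual violation of roughly \(1/\mu\) times the constraint scale, which is how a finite penalty weight yields the kind of "satisfied to 0.1% precision" behavior described above.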
The weight \(w\) is chosen such that the Boozer residual term is an order of magnitude larger than the non-quasisymmetry penalty. This term favors nested flux surfaces and improves the robustness of the optimization algorithm. However, it also competes with the quasi-axisymmetry error, and so the quality of quasisymmetry obtained at this stage can be limited. Since the stellarators obtained here are not fully converged and strongly depend on the value of \(w\), we do not analyze their physics properties yet. In Figure 7, we provide Poincaré plots of a device just after the near-axis optimization, and then as BoozerLS surfaces are added to optimize for nested flux surfaces. The first panel shows that the near-axis optimization algorithm does not guarantee nested surfaces far away from the magnetic axis. The final two panels illustrate that as the surfaces are added, the volume with nested flux surfaces increases.
```
compute an initial BoozerLS surface with aspect ratio 20 from near-axis formulas (10).
for aspect ratio (AR) in {20, 10, 6.66, 5, 4, 3.33, 2.85} do
    attempt to increase the degree of the BoozerLS surfaces up to m_pol, n_tor <= 4
    for i in 1, 2, 3 do
        solve (11) on surfaces with a budget of 100 iterations.   ▷ Island and chaos healing
    end for
    add another BoozerLS surface with the next smallest AR.
end for
```
**Algorithm 1** Phase II: BoozerLS After phase II terminates, we proceed to phase III, where we optimize for precise quasisymmetry. It can be shown that island width scales like \(\sqrt{|B_{n,m}|/m\iota^{\prime}}\) (Hudson _et al._, 2001), where \(\iota^{\prime}\) is the magnetic shear. Thus, for large enough \(m\), the island width should be small. Informed by this estimate, we retain the BoozerLS formulation when the rotational transform on the surface is within \(1\%\) of a rotational transform of the form \(\iota=n_{\mathrm{fp}}/m,2n_{\mathrm{fp}}/m\) for \(m=1,\ldots,15\).
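The retention rule quoted above is straightforward to make concrete. The text only says "within 1%", so the tolerance convention (relative to the rational itself) in the sketch below is our assumption, not necessarily what the authors' code does.

```python
# Decide whether a surface's rotational transform sits near a low-order
# rational of the form n_fp/m or 2*n_fp/m, for m = 1, ..., m_max.
def near_low_order_rational(iota, n_fp, rel_tol=0.01, m_max=15):
    for m in range(1, m_max + 1):
        for num in (n_fp, 2 * n_fp):
            r = num / m
            if abs(iota - r) <= rel_tol * r:   # "within 1%" of the rational
                return True
    return False
```

Surfaces flagged by such a check keep the (least-squares) BoozerLS formulation, while all other surfaces can be switched to BoozerExact.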
During this final phase, the weight \(w\) is set to zero on BoozerExact surfaces, but it is maintained for BoozerLS surfaces in the neighborhood of low-order rationals. When all surfaces are of type BoozerExact, we cap the total number of BFGS iterations at 20,000. When there is a surface of type BoozerLS, we cap the total number of BFGS iterations at 4,000. The algorithm for phase III is given in Algorithm 2. Figure 7: Poincaré plots at \(\phi=0\) after phases I and II for \(AR=20,10\). The point where the magnetic axis intersects the \(\phi=0\) plane corresponds to the red dot in the first panel. Cross sections of the surfaces on which QA is optimized are red in the second and third panels. During these last two phases of coil optimization, we also increase the number of Fourier harmonics in the coils to \(N_{f,c}=16\), because using too few Fourier modes in the coils can limit the attainable quality of quasisymmetry. ### Comparison of devices from TuRBO and naive globalization after phase III After the full workflow is completed (globalization, then phases I, II, and III), we compare the devices found depending on whether the TuRBO or naive globalization approach was applied just before phase I. In Figure 8, we plot the volume quasisymmetry error for configurations found with both globalization techniques. It appears that the highly favorable devices discovered by TuRBO do not necessarily translate to devices that have comparatively favorable volume quasisymmetry. We also find that the TuRBO runs resulted in many more configurations with shorter coils than the naive approach, though there is a larger spread of device performance. Design 5.B does not persist upon optimizing for volume quasisymmetry, which we suspect is because the coils are too close to the magnetic axis (\(16.5\,\mathrm{cm}\)).
As a result, when the surface generated by the near-axis formulas (10) is used at the start of phase II, the BoozerLS surface solve of (12) does not converge and the optimization cannot proceed. The devices discovered by TuRBO also do not appear to outperform those found by the naive approach. Nevertheless, we emphasize that this may only be because the globalization is performed on the near-axis problem just before phase I and _not_ directly on the volume QA optimization, just before phase II. If globalization is applied directly to the BoozerLS phase of the workflow, it is possible that new and interesting devices could still be discovered. Finally, the magnitude and form of the perturbation \(\epsilon\) used in the naive approach were determined after somewhat lengthy tuning (Section 2.2). There are far fewer ad hoc choices in the bounding box approach used by TuRBO. ### Computational details The run time of each phase can vary depending on the design targets, as the difficulty of the optimization problems strongly depends on the total coil length used by the device. In previous work, we used various levels of parallelism simultaneously, such as MPI, OpenMP, and SIMD. In this work, we take the approach of allocating only one core per stellarator and only use SIMD parallelism local to a core when possible. The full workflow for a single device takes on the order of a day or two on a single core. However, this duration can vary substantially depending on how accurately each individual optimization problem is solved or if more cores are used per problem. Figure 8: Taking the stellarators from Figure 5 and using them as initial guesses for the volume QA optimization phase (Section 4), we obtain the stellarators plotted above. So that the data sets can be more easily distinguished, we use smaller marker sizes for devices corresponding to the naive algorithm.
## 5 QUASR: the Quasisymmetric Stellarator Repository The algorithms presented in the previous sections have culminated in the comprehensive database that we detail in this section. Before discussing the stellarators in the database, we would like to reiterate some of the database's limitations: * The database only contains curl-free, stellarator-symmetric magnetic fields with optimized quasi-axisymmetry. Note, however, that our algorithms are generic and may also be applied to other flavors of quasisymmetry. * Our goal was both to produce a large data set of stellarators and to visualize the trade-off of target physics characteristics. Given that this approach might find stellarators that do not lie on the Pareto front, it may not be the most computationally efficient. Approaches based on continuation might reduce this computational cost (Bindel _et al._, 2023), at the expense of some parallelism. * The algorithm might have discovered the same device multiple times. Three possible ways that this can occur are as follows. Given a local minimum of (1) or (11), a visually distinct local minimum can be found by reflecting the device about the \(XY\) plane, i.e., applying the transformation \(Z\to-Z\). Another visually distinct local minimum can also be found by rotating the device by a half-period. We have also noticed that local minima can lie in tricky valleys of the objective. In particular, initially the gradient-based optimizer makes a lot of progress, and does so quickly. But after a few thousand iterations, progress slows, especially for devices that use longer coil lengths. Therefore, it might also be that visually distinct local minima will merge with one another after more iterations, but with only marginal reduction of the objective. This is the price paid for an extensive scan.
* Due to the computational expense of generating these devices, we also include in the database those discovered after executing previous versions of the workflow described above. In the following sections, we compare devices in the database to ones discussed previously in the literature, highlight a device with a rotational transform profile that passes through low-order rationals, and perform a couple of trade-off analyses of the devices. We look at how accurately quasisymmetry can be attained when the total coil length, number of coils per half period, number of field periods, device aspect ratio, and target mean rotational transform are varied. In addition, we examine the relationship between elongation and quality of quasisymmetry. Given the multiple possible analyses that might be performed, we only scratch the surface here. ### Comparison to previous devices As a first examination, we select the devices in our database that have the closest design targets to previous configurations computed in (Giuliani _et al._, 2022_b_), which have aspect ratio 6, \(\iota=0.42\), \(n_{\mathrm{coils\ per\ hp}}=4\), total coil length \(72\,\mathrm{m},80\,\mathrm{m},88\,\mathrm{m},96\,\mathrm{m}\), and \(n_{\mathrm{fp}}=2\). The devices in QUASR that are closest to these have the same number of coils, aspect ratio 6.66 or 5, and target mean rotational transform 0.4. In Figure 9, these devices are compared, where we observe that the QUASR devices perform comparably, though notably the precise QA device with length \(96\,\mathrm{m}\) is better. This might be because we only allow a total of 20,000 iterations of BFGS during phase III, while in generating the precise QA coil sets, more than double that number of iterations was allowed. ### A stellarator with a rotational transform profile passing through a low-order rational Consider a device with two field periods, \(n_{\rm coils\ per\ hp}=3\), \(\iota_{\rm target}=0.5\), and aspect ratio \(4\).
Since the target mean rotational transform is a low-order rational, we would expect a strong likelihood of an island chain somewhere in the toroidal volume. However, the island chains are small, as shown in Figure 10. ### Impact of design targets on attainable quasisymmetry In Figure 11, we plot the volume QA error with respect to coil length for various values of \(n_{\rm fp}=2,3,4\), when optimizing for QA on a single surface of aspect ratio \(20\). Since this is such a high aspect ratio surface, this setup is a close approximation to the near-axis problem in Section 2, though the objective is asking for more out of the magnetic field. A notable difference with the near-axis results is that increasing the number of field periods does not appear to improve the attainable on-axis quasisymmetry, as is observed in Figure 6. Figure 10: Poincaré plots and rotational transform profiles of a device with aspect ratio \(4\), mean rotational transform \(0.5\), and \(n_{\rm fp}=2\). We could not visually identify trajectories associated with islands in the first configuration. The horizontal blue lines on the rotational transform profiles indicate the low-order rational \(\iota=1/2\). The vertical red lines on the rotational transform profiles indicate the normalized toroidal flux label on which quasisymmetry was optimized. Figure 9: Comparison of devices from QUASR with aspect ratio \(6.66\) and \(5\), target mean rotational transform \(\iota=0.4\), and the precise QA coil sets of (Giuliani _et al._, 2022_b_) with aspect ratio \(6\) and mean rotational transform \(0.42\). All devices have \(n_{\rm fp}=2\) and \(n_{\rm coils\ per\ hp}=4\) here. The grey region corresponds to the Earth's background magnetic field of \(50\,\mathrm{\SIUnitSymbolMicro T}\), and volume QA error refers to the mean non-QA ratio in (1).
One possible reason for this is that the near-axis design problem (1) only optimizes for quasisymmetry to first order; introducing second-order near-axis penalties might resolve this discrepancy. There do appear to be some highly performant devices that are outliers, so it might also be that our algorithm is simply not discovering those solution branches. There is also a clear preference for \(n_{\mathrm{fp}}=2\) stellarators, as the algorithm has difficulty finding devices with precise quasisymmetry for \(n_{\mathrm{fp}}=3,4\). Devices with \(n_{\mathrm{fp}}=1,5\) are not shown in the figure, but in both cases the algorithm had varying difficulty finding devices with precise quasi-axisymmetry. We also examine the trade-off between the quality of quasisymmetry, total coil length, and device aspect ratio in Figure 12. Increasing the target mean rotational transform does not appear to greatly affect the quality of attainable quasisymmetry, while, as expected, increasing the device aspect ratio appears to have a much larger negative impact. ### Maximum elongation Finally, we study the values of elongation appearing in the database when \(\iota_{\mathrm{target}}=0.6\) and the aspect ratio is \(10\). High elongation increases the ratio of surface area to volume of the magnetic surfaces; moreover, it can increase bunching of the flux surfaces, which can have negative implications for stability (Goodman _et al._, 2022). For each stellarator in our database, we compute the maximum elongation in the device by computing a surface with aspect ratio \(80\) in the neighborhood of the magnetic axis. Then, we compute \(N=10\) cross sections of that surface at cylindrical angles \(\phi=\pi(i+1/2)/n_{\mathrm{fp}}/N\) for \(i=0,\ldots,N-1\). For each cross section, we fit an ellipse in a least squares sense (Halir & Flusser, 1998) and compute the ratio of the ellipse's major to minor axes. The value reported here is the maximum ratio observed at these cross sections.
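Per cross section, the elongation computation above reduces to an algebraic least-squares conic fit followed by an axis-ratio extraction. The sketch below is a generic fit (smallest singular vector of the design matrix), not the numerically stabilized Halir--Flusser variant the text cites.

```python
import numpy as np

def elongation_from_cross_section(xy):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D points in a
    least-squares sense and return the ratio of the ellipse's major to minor
    axes. Generic algebraic fit, not the stabilized Halir--Flusser method."""
    x, y = xy[:, 0], xy[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The smallest right singular vector is the least-squares conic.
    _, _, vt = np.linalg.svd(D)
    a, b, c, d, e, f = vt[-1]
    # Axis lengths scale like 1/sqrt(eigenvalue) of the quadratic-form matrix,
    # so major/minor = sqrt(lambda_max / lambda_min).
    lam = np.abs(np.linalg.eigvalsh(np.array([[a, b / 2], [b / 2, c]])))
    return float(np.sqrt(lam.max() / lam.min()))

# Sanity check on a rotated 3:1 ellipse.
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([3.0 * np.cos(t), np.sin(t)])
rot = np.array([[np.cos(0.4), -np.sin(0.4)], [np.sin(0.4), np.cos(0.4)]])
pts = pts @ rot.T
```

Taking the maximum of this ratio over the \(N\) cross sections gives the per-device elongation value reported above.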
We did not target any particular value of elongation, but nevertheless there do appear to be favored values (Figure 13). For \(n_{\mathrm{fp}}=1\) and \(2\), the lowest values of quasisymmetry error occur for an elongation around \(7\), while for \(n_{\mathrm{fp}}=3\) an elongation of \(4\) is favored. It is unclear whether this picture would change if a stage one optimization for QA were done instead, without coils. In (Goodman _et al._, 2022), various QI stellarator designs were proposed, where a maximum elongation below approximately \(6\) was targeted. The stellarators in QUASR illustrate that a good approximation of QA can also be found when requiring a maximum elongation below \(6\). We also observe that, as expected, the lower quasisymmetry errors occur for longer coil lengths. ## 6 Conclusions We have proposed a direct stellarator coil design algorithm that is globalized using the TuRBO optimization algorithm, where box constraints on anchor points of the coils are applied. Figure 11: Devices on which quasisymmetry is optimized on a surface of aspect ratio \(20\), with a target rotational transform \(\iota=0.5\). The grey region corresponds to the Earth's background magnetic field of \(50\,\mathrm{\SIUnitSymbolMicro T}\), and volume QA error refers to the mean non-QA ratio in (1). The algorithm combines three direct coil optimization algorithms, and has allowed the construction of a large database of almost 140,000 vacuum field stellarators for various design targets, e.g., aspect ratio, \(n_{\mathrm{fp}}\), \(n_{\mathrm{coils\ per\ hp}}\), rotational transform, and total coil length. Using the database, we have examined the trade-off between accuracy of quasi-axisymmetry, total coil length, rotational transform, and aspect ratio of the device. Since the techniques in this work are quite general, there are many other directions that we would like to explore.
Applying all of these approaches to other flavors of quasisymmetry, such as quasi-helical symmetry, is the next logical step. Adding windowpane coils and other coil geometries, e.g., helical coils, would further enrich the database. There is still room to improve the globalization algorithms, and one possibility is to directly globalize the BoozerLS phase of the optimization. Since a magnetic surface could be viewed as a sequence of coils, we would like to globalize a stage one and stage two optimization problem using the approach presented in this work. Figure 12: Trade-off between quality of quasisymmetry and total coil length for various device aspect ratios and target mean rotational transforms when \(n_{\mathrm{fp}}=2\). The grey region corresponds to the Earth's background magnetic field of \(50\,\mathrm{\SIUnitSymbolMicro T}\), and volume QA error refers to the mean non-QA ratio in (1). Finally, we have only scratched the surface of possible physics analyses of the data set. Since it is publicly available, we hope that the stellarator community might explore it further. ## 7 Data availability 1. Scripts that wrap Phase I with the TuRBO globalization are available at [https://github.com/andrewgiuliani/Global-Direct-Coil-Optimization-I](https://github.com/andrewgiuliani/Global-Direct-Coil-Optimization-I). 2. Scripts for Phase II and III are available in SIMSOPT (Landreman _et al._, 2021) and [https://github.com/andrewgiuliani/Global-Direct-Coil-Optimization-II](https://github.com/andrewgiuliani/Global-Direct-Coil-Optimization-II). 3. The entire set of stellarators in VMEC and SIMSOPT formats is archived on Zenodo at [https://doi.org/10.5281/zenodo.10050656](https://doi.org/10.5281/zenodo.10050656). 4. More data such as Poincaré plots are hosted at [https://users.flatironinstitute.org/~agiuliani/QUASR/](https://users.flatironinstitute.org/~agiuliani/QUASR/). 5.
Jeff Soules has written a web application for navigating the data set, temporarily hosted at [https://users.flatironinstitute.org/~jsoules/QUASR/](https://users.flatironinstitute.org/~jsoules/QUASR/). ## 8 Acknowledgements The author would like to thank the Flatiron Institute's Scientific Computing Core for their support, Georg Stadler, Gabriel Provencher Langlois, and Rogerio Jorge for the helpful discussions, and Misha Paddar for the tips to get set up with TuRBO. The author would also like to thank Rogerio Jorge for linking PyQSC to SIMSOPT for obtaining initial surface guesses at the start of Phase II in the workflow. Finally, the author would like to thank Jeff Soules for writing the web application to navigate the database.
2302.11802
Patch Network for medical image Segmentation
Accurate and fast segmentation of medical images is clinically essential, yet current research methods include convolutional neural networks with fast inference speed but difficulty in learning image contextual features, and transformer with good performance but high hardware requirements. In this paper, we present a Patch Network (PNet) that incorporates the Swin Transformer notion into a convolutional neural network, allowing it to gather richer contextual information while achieving the balance of speed and accuracy. We test our PNet on Polyp(CVC-ClinicDB and ETIS- LaribPolypDB), Skin(ISIC-2018 Skin lesion segmentation challenge dataset) segmentation datasets. Our PNet achieves SOTA performance in both speed and accuracy.
Weihu Song, Heng Yu, Jianhua Wu
2023-02-23T06:29:31Z
http://arxiv.org/abs/2302.11802v1
# Patch Network for medical image Segmentation ###### Abstract Accurate and fast segmentation of medical images is clinically essential, yet current research methods include convolutional neural networks with fast inference speed but difficulty in learning image contextual features, and transformers with good performance but high hardware requirements. In this paper, we present a Patch Network (PNet) that incorporates the Swin Transformer notion into a convolutional neural network, allowing it to gather richer contextual information while achieving a balance of speed and accuracy. We test our PNet on Polyp (CVC-ClinicDB and ETIS-LaribPolypDB) and Skin (ISIC-2018 Skin lesion segmentation challenge dataset) segmentation datasets. Our PNet achieves SOTA performance in both speed and accuracy. Keywords: Semantic segmentation, lightweight deep learning, medical image ## 1 Introduction Colorectal cancer (CRC) is the world's third most frequent cancer after lung cancer, and the majority of CRC cases are caused by polyp transformation. A polyp can be prevented from developing into CRC if it is detected and removed at an early stage; therefore, precise prediction of the lesion area becomes a central task. The most prevalent treatment technique is colonoscopy, which is often performed manually by trained clinicians. Because finding polyps is difficult and time-consuming, automated precision segmentation approaches are critical. Skin lesions are another common and under-appreciated ailment. While most Skin lesions are only mildly detrimental to the body, owing to their diverse kinds, some can cause permanent damage if not treated promptly, lead to other diseases, and may even affect progeny. The exact segmentation of a Skin lesion's site, which offers clinicians location information about diseased areas for their follow-up work, is the focus of Skin lesion treatment.
In summary, CRC and Skin lesions both require accurate segmentation of the lesion location. Most current research methods effectively demonstrate the feasibility of deep learning for medical image segmentation. In medicine, the accuracy and speed requirements for segmentation are rigorous, and current methods struggle to meet both. Methods with good performance tend to have relatively high hardware requirements and consume a lot of time, while methods with fast detection speed tend to ignore some detailed features of the target objects. These problems in the medical field need to be effectively addressed. To address this, we propose a Patch block that incorporates the idea of the Swin Transformer, which has good performance, into a fast convolutional neural network (CNN) for segmentation, obtaining richer contextual information in the form of patches and effectively merging the advantages of the two. At the same time, we propose a novel lightweight network, Patch Network (PNet), for medical image segmentation. We test our model on three datasets: intersection over union (IOU) and Dice similarity coefficient (Dice) reach 0.9332 and 0.9599 on CVC, 0.9405 and 0.9646 on ETIS, and 0.8946 and 0.9340 on Skin. The number of model parameters and floating point operations (FLOPs) is only 1/10 of that of UNet++, while the fps is more than three times that of UNet++. ## 2 Related work The current mainstream semantic segmentation models fall into two main families: CNN-based and Transformer-based methods. **CNN**: A CNN mainly uses convolutional layers that slide over the image to extract feature information covering the whole image, which is then used to classify the image. Semantic segmentation is a particular form of image classification that classifies the image at the pixel level, so it is not difficult to understand why many segmentation models use an image classification network as the backbone.
The use of established image classification algorithms to further manipulate the features extracted from images, achieving pixel-level classification, can deliver good performance and simplify network design. To meet the need for pixel-level features, PSPNet[15] places a Spatial Pyramid Pooling (SPP) module behind the backbone to adapt to objects of different sizes while accessing multi-scale feature information. UNet[12] achieves good segmentation by fusing semantic feature information from higher and lower layers through skip connections, but its ability to extract feature information at each layer limits its performance. To extract more feature information per layer, ENet[11] fuses convolution and pooling operations to obtain feature information during downsampling, and CGNet[13] uses a Context Guided (CG) block that combines normal convolution with atrous convolution to obtain contextual features, enhanced by an attention structure. BiSeNet[14] uses a two-branch structure to fuse spatial and contextual feature information. In short, backbone-free CNN segmentation structures focus on better feature extraction at each layer, whereas backbone-based structures focus on obtaining deeper feature information after the backbone, using a multi-branch structure for better segmentation. **Transformer**: The Transformer originated in natural language processing (NLP). Its outstanding performance there drew the attention of computer vision researchers, and a series of related works [7][4][16] soon applied the Transformer to semantic segmentation with good results. By its structure, a transformer can attend to every pixel in the image and obtain global feature information, but this brings colossal computation. 
Swin Transformer[9] proposed a solution to this problem: cutting the image into windows and extracting features first locally and then globally. It makes good use of the CNN idea of locality, significantly reducing computation, but compared with a CNN it still has a sizeable gap: its hardware requirements are high and its inference speed is slow. To summarize, we propose a Patch block that incorporates the Swin Transformer's slicing idea into a CNN structure in a different way, better addressing the shortcomings of both Transformer and CNN. We use CNN modules to extract context information so that each layer of the network can extract more extensive feature information. ## 3 Methodology In this work, we propose Patch Network (PNet), which adopts the classical encoder-decoder structure. In this section, we describe the encoder and decoder, the components of PNet, and how the Patch block compares with similar units. Figure 1: Proposed PNet architecture. ### Encoder The encoder is composed of a downsampling module and Patch blocks. Compared with max pooling or average pooling, which retain features through simple, fixed extraction, we prefer to implement downsampling with convolution, so that it is learned; many experiments have shown that convolutional downsampling outperforms pooling. Inspired by [6], we did not use the conventional 3x3 convolution kernel: a bold attempt with a 5x5 kernel gave better performance, so the downsampling module is a 5x5 convolution with stride 2 and padding 2. For a CNN, the ability of each layer to extract image features strongly affects the final performance of the model. 
For example, the classic network DeepLab [5] uses an image classification network as a backbone, which is assumed to have already learned good features; the ASPP module in its decoder stage (as shown in Fig 2) then supplements contextual features through a larger receptive field, achieving good results. A model like CGNet, which has no backbone, must instead design a module to extract feature information: it uses the dual-branch CG block (as shown in Fig 2) to enhance the feature extraction capability of each layer and improve performance. Our original intention was to replace the CG block with a module that has ASPP-like feature extraction capability, and the emergence of the Swin Transformer, with its idea of learning from image blocks, became the spark of our design. Since CNNs and Transformers differ in implementation and characteristics, directly copying the Swin structure would work poorly. We therefore designed a module that first learns features with a small atrous rate and then learns further with a large atrous rate, which resembles a form of "patching," so we call it a Patch block (as shown in Fig 2); the two atrous rates are set to 2 and 6, respectively. For intuition, Fig 3 shows a standard 3x3 convolution, a 3x3 convolution with atrous rate and padding of 2, and a 3x3 convolution with atrous rate and padding of 6. We temporarily call the standard 3x3 convolution area the standard area. The first atrous convolution uses part of the standard area and jointly learns the surrounding areas, and the sliding-window process combines the upper three areas and the lower three areas. 
The second atrous convolution, with a larger atrous rate, then learns the middle area while capturing a broader range of context information; finally, the original input and this result are combined through an additive residual connection, further improving feature extraction and compensating for the feature information of any missing area. ### Decoder Given the lightweight design goal and the strong feature extraction of the encoder stage, we simplified the decoder. First, an 8-fold upsampling is applied to the features learned in the previous step, which are concatenated with the first downsampled features to fuse deep semantic information with shallow spatial information. Since the Patch block also introduces redundant information, we first apply a 3x3 convolution to learn from the fused deep and shallow features, then a dropout of 0.3 to suppress redundancy, which also helps prevent overfitting. Two 1x1 convolutions follow: the first optimizes across channels and the second outputs the number of classification categories. A final upsampling returns to the original input size. ### Patch Network Four downsampling operations are performed in the encoder stage, each followed by a Patch block to obtain rich contextual information. The decoder stage fuses deep semantic and shallow spatial information and uses dropout to prevent overfitting. The result is a well-designed lightweight CNN for medical image segmentation (as shown in Fig 1). Figure 2: Comparison of different feature extractors. Figure 3: Atrous convolution. ## 4 Experiments and Results ### Datasets #### 4.1.1 Polyp Segmentation Accurate detection of colon polyps is of great significance for colon cancer prevention. 
CVC-ClinicDB[2] (CVC for short) includes 612 colon polyp images. We use the original 384\(\times\)288 images and split them into a train set (80%) and a test set (20%). #### 4.1.2 ETIS Like CVC, ETIS-LaribPolypDB (ETIS for short) is a polyp dataset, containing 196 images from 29 sequences acquired on different devices. We resize all images to 512\(\times\)384 and split them into a train set (80%) and a test set (20%). #### 4.1.3 Skin Lesion Segmentation Computer-aided automatic diagnosis of skin cancer is an inevitable trend, and skin lesion segmentation is the urgent first step. The dataset comes from the MICCAI 2018 Workshop - ISIC2018: Skin Lesion Analysis Towards Melanoma Detection[8][14] (Skin for short). It contains 2594 images, randomly split into a train set (80%) and a test set (20%). For better training and result display, we resize all images to 224\(\times\)224. ### Implementation details We set consistent training parameters across the three benchmarks and all segmentation models: 200 epochs and a learning rate (LR) of 1e-4 for all tasks, with a batch size of 2 for the ETIS and CVC tasks and 4 for the Skin task. Cross-entropy loss and Adam are used as the loss function and optimizer, respectively. All experiments run on an NVIDIA TITAN V GPU with 12GB of memory. Intersection over Union (IOU), Dice coefficient, FPS, and computational complexity (FLOPs) are the evaluation metrics used for all datasets. We also augment the training data with random 90-degree rotation, mirroring, random brightness, random contrast, and other methods, enlarging the dataset to prevent overfitting and improve the robustness of the model. 
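For concreteness, the two overlap metrics can be computed as follows. This is a minimal sketch of ours on flat binary masks, not the evaluation code used in the paper; `iou_dice` is an illustrative name.

```python
def iou_dice(pred, target):
    """IOU and Dice for two flat binary masks (lists of 0/1 of equal length)."""
    inter = sum(p and t for p, t in zip(pred, target))   # |P ∩ T|
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter                        # |P ∪ T|
    iou = inter / union if union else 1.0                # empty masks count as perfect
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(iou_dice(pred, target))   # (0.5, 0.6666666666666666)
```

Dice always weights the overlap more generously than IOU (Dice = 2·IOU/(1+IOU)), which is why the Dice columns in Table 1 sit above the IOU columns.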
### Experimental Results To demonstrate the superiority of our model, we evaluate it on the three datasets and compare multiple models using the four metrics IOU, Dice, FLOPs, and FPS. The quantitative results are shown in Table 1, and visualizations in Fig 4. **Results on the polyp datasets.** As shown in Table 1, our model significantly outperforms the other models on both polyp datasets, CVC and ETIS, especially on ETIS, a small dataset of only 196 images, where its IOU and Dice reach 0.9405 and 0.9646, respectively, greatly surpassing the other models and indicating that our model retains good performance on small datasets. In model parameters and FLOPs our model is also lighter, at only 1/10 of UNet++, with an FPS three times that of UNet++. The visualizations reflect the same comparison: from Fig 4, our model segments better than the others both overall and at the edges, most evidently on the ETIS dataset, where the other models' segmentations are rough while ours is closest to the real mask with clean edge handling. **Results on skin lesion segmentation.** On the Skin dataset in Table 1, our model outperforms all other models on the IOU and Dice metrics, and its parameters and FLOPs are much smaller than those of all other models, giving it a faster FPS. Some models come close on the evaluation metrics, the best-performing being PSPNet; still, the visualizations in Fig 4 show that our model keeps a clear advantage. 
The other models, including PSPNet, smooth the edges too much, leaving a sizeable gap from the real mask; complex edges like these are a stern test of a model's segmentation ability. Because the Patch block captures more prominent context information, our model's segmentation performance improves greatly and remains close to the real mask even at the edges. \begin{table} \begin{tabular}{c c c c c c c} \hline Dataset & Methods & IOU & Dice & Params(M) & FLOPs(G) & FPS \\ \hline CVC & UNet & 0.8713 & 0.9174 & 34.53 & 110.5 & 21.83 \\ & PSPNet & 0.9152 & 0.9491 & 60.09 & 100.84 & 13.63 \\ & SegNet [11] & 0.8790 & 0.9252 & 29.44 & 67.67 & 21.28 \\ & AttU\_Net [10] & 0.8458 & 0.9021 & 34.87 & 112.34 & 18.26 \\ & DenseUnet [32] & 0.9209 & 0.9541 & 19.33 & 26.01 & 14.47 \\ & DoubleUNet [8] & 0.9219 & 0.9533 & 18.84 & 74.52 & 16.02 \\ & UNet++ [17] & 0.9046 & 0.9424 & 36.63 & 232.98 & 13.53 \\ & PNet & **0.9332** & **0.9599** & **3.38** & **23.4** & **44.34** \\ \hline ETIS & UNet & 0.8708 & 0.9226 & 34.53 & 196.45 & 7.68 \\ & PSPNet & 0.9135 & 0.9508 & 60.09 & 179.23 & 7.52 \\ & SegNet & 0.6477 & 0.7361 & 29.44 & 120.31 & 9.09 \\ & AttU\_Net & 0.8335 & 0.8973 & 34.87 & 199.72 & 7.13 \\ & DenseUnet & 0.9274 & 0.9602 & 19.33 & 46.24 & 7.40 \\ & DoubleUNet & 0.9040 & 0.9442 & 18.84 & 132.47 & 7.45 \\ & UNet++ & 0.8809 & 0.9254 & 36.63 & 414.19 & 5.37 \\ & PNet & **0.9405** & **0.9646** & **3.38** & **41.61** & **16.39** \\ \hline Skin & UNet & 0.8681 & 0.9159 & 34.53 & 100.2 & 59.19 \\ & PSPNet & 0.8887 & 0.9300 & 60.09 & 91.5 & 43.55 \\ & SegNet & 0.8586 & 0.9101 & 29.44 & 30.7 & 85.32 \\ & AttU\_Net & 0.8458 & 0.9021 & 34.87 & 50.97 & 60.36 \\ & DenseUnet & 0.8844 & 0.9289 & 19.33 & 11.8 & 66.57 \\ & DoubleUNet & 0.8832 & 0.9256 & 18.84 & 33.81 & 56.85 \\ & UNet++ & 0.8837 & 0.9276 & 36.63 & 105.7 & 41.04 \\ & PNet & **0.8946** & **0.9340** & **3.38** & **10.62** & 
**126.23** \\ \hline \end{tabular} \end{table} Table 1: Evaluation of proposed PNet ### Ablation Study To further confirm the effectiveness of our proposed module, we conduct ablation experiments on it. First, we verify the atrous rates of the two atrous convolutions in the Patch block, our model's core module. The experimental results in Table 3 show that atrous rates of 2 and 6 work best, consistent with the interpretation of Fig 3. With the first convolution's atrous rate fixed at 2, we compare second-convolution rates of 5, 6, and 7. By the design concept, the second dilation rate needs to cover the first convolution's range: when the first dilation rate is 2, the convolution range is 5x5, and a second rate of 6 leaves a gap of 5 between adjacent kernel taps, exactly covering that range. Similarly, when the first rate is 3, the second rate should be 8. We ran this comparison as well, and its segmentation performance was poor. We conjecture that the convolution range becomes too large: for small-resolution images, the feature map after four downsamplings is tiny, and a 3x3 kernel with dilation rate 8 spans a 17x17 range, so the kernel may exceed the image itself, requiring excessive boundary padding and yielding poor results. 
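The ranges quoted above follow from the standard effective-kernel formula k + (k-1)(d-1) for a k x k convolution with dilation d. The sketch below is our own illustration of that arithmetic, not the authors' code (all function names are ours); the tap sets are computed in 1-D for clarity.

```python
def effective_kernel(k, d):
    """Span of a k x k convolution with dilation (atrous) rate d."""
    return k + (k - 1) * (d - 1)

def taps(d, k=3):
    """1-D input offsets touched by one k-tap convolution with dilation d."""
    return {i * d for i in range(-(k // 2), k // 2 + 1)}

def stacked_taps(d1, d2, k=3):
    """Offsets reachable after chaining two dilated convolutions."""
    return {a + b for a in taps(d1, k) for b in taps(d2, k)}

print(effective_kernel(3, 2))      # 5  -> the 5x5 range of the first convolution
print(effective_kernel(3, 2) + 1)  # 6  -> the covering second rate
print(effective_kernel(3, 3) + 1)  # 8  -> likewise for a first rate of 3
print(effective_kernel(3, 8))      # 17 -> the 17x17 range noted above
print(sorted(stacked_taps(2, 6)))  # [-8, -6, -4, -2, 0, 2, 4, 6, 8]
print(sorted(stacked_taps(2, 5)))  # [-7, -5, -3, -2, 0, 2, 3, 5, 7]
```

Rates (2, 6) tile a uniform stride-2 lattice over the combined span, while (2, 5) leaves irregular gaps, which is consistent with the requirement that the second convolution cover the first one's range.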
In addition, we conducted a comparative experiment on the downsampling module in the encoder stage, comparing our 5x5 convolution against a 3x3 convolution kernel with stride 2 and against a 3x3 convolution combined with max pooling. As the experimental results in Table 2 show, our downsampling module performs best, and together these ablation experiments demonstrate the performance of our modules. \begin{table} \begin{tabular}{c c c c c c c} \hline Dataset & \multicolumn{2}{c}{CVC} & \multicolumn{2}{c}{ETIS} & \multicolumn{2}{c}{Skin} \\ \hline Methods & IOU & Dice & IOU & Dice & IOU & Dice \\ PNet(3X3) & 0.9176 & 0.9501 & 0.8938 & 0.9364 & 0.8820 & 0.9240 \\ PNet(pool) & 0.9283 & 0.9565 & 0.9258 & 0.9567 & 0.8809 & 0.9242 \\ PNet(5X5) & **0.9332** & **0.9599** & **0.9405** & **0.9646** & **0.8946** & **0.9340** \\ \hline \end{tabular} \end{table} Table 2: Ablation study of the downsampling module on three benchmarks, using PNet with: 3x3 convolution, 3x3 convolution with max pooling, 5x5 convolution \begin{table} \begin{tabular}{c c c c c c c} \hline Dataset & \multicolumn{2}{c}{CVC} & \multicolumn{2}{c}{ETIS} & \multicolumn{2}{c}{Skin} \\ \hline Methods & IOU & Dice & IOU & Dice & IOU & Dice \\ PNet(25) & 0.9187 & 0.9506 & 0.9308 & 0.9594 & 0.8653 & 0.9134 \\ PNet(26) & **0.9332** & **0.9599** & **0.9405** & **0.9646** & **0.8946** & **0.9340** \\ PNet(27) & 0.9289 & 0.9573 & 0.9287 & 0.9579 & 0.8885 & 0.9300 \\ PNet(38) & 0.9166 & 0.9501 & 0.9048 & 0.9443 & 0.8898 & 0.9292 \\ \hline \end{tabular} \end{table} Table 3: Ablation study of the Patch block on three benchmarks, setting different atrous rates for the first and second convolutions: 2 and 5, 2 and 6, 2 and 7, 3 and 8 Figure 4: Qualitative comparison of segmentation results for CVC, ETIS, and Skin datasets; from top to bottom are Image, Ground Truth, U-Net, PSPNet, SegNet, AttU-Net, DenseUnet, DoubleUNet, 
U-Net++, PNet ## 5 Conclusion In this paper, we propose an efficient feature extraction module, the Patch block, and based on it, Patch Network (PNet) for medical image segmentation. The IOU and Dice obtained in our experiments on three benchmarks are significantly better than those of other models, and the segmentation quality at image edges shows the superior feature extraction ability of the Patch block. The ETIS results further show that our model performs outstandingly on a small dataset. The model size and FLOPs are only 1/10 of UNet++, while the inference speed is more than three times faster.
2307.05071
Mining for Unknown Unknowns
Unknown unknowns are future relevant contingencies that lack an ex ante description. While there are numerous retrospective accounts showing that significant gains or losses might have been achieved or avoided had such contingencies been previously uncovered, getting hold of unknown unknowns still remains elusive, both in practice and conceptually. Using Formal Concept Analysis (FCA) - a subfield of lattice theory which is increasingly applied for mining and organizing data - this paper introduces a simple framework to systematically think out of the box and direct the search for unknown unknowns.
Bernard Sinclair-Desgagné
2023-07-11T07:14:53Z
http://arxiv.org/abs/2307.05071v1
# Mining for Unknown Unknowns ###### Abstract Unknown unknowns are future relevant contingencies that lack an ex ante description. While there are numerous retrospective accounts showing that significant gains or losses might have been achieved or avoided had such contingencies been previously uncovered, getting hold of unknown unknowns still remains elusive, both in practice and conceptually. Using Formal Concept Analysis (FCA) - a subfield of lattice theory which is increasingly applied for mining and organizing data - this paper introduces a simple framework to systematically think out of the box and direct the search for unknown unknowns. There are only two kinds of campaign plans, good ones and bad ones. The good ones almost always fail through unforeseen circumstances that often make the bad ones succeed. - Napoleon Bonaparte - ## 1 Introduction As the recent Covid-19 pandemic reminded us, life is filled with unknown unknowns - i.e. contingencies one cannot be aware of ex ante, much less fit into standard risk analysis. In addition to a wealth of examples coming from history and politics, unknown unknowns are now well-documented, and their importance is acknowledged, in many areas of economics and management such as public policy [24], business strategy [6, 12], entrepreneurship [13], contracts and the theory of the firm [41], and security [33]. To be sure, getting hold of such contingencies might allow one to achieve significant payoffs or avoid major losses. Substantial research efforts have thus been expended, and notable advances been made, in this direction. To get a rigorous conceptual grasp of the notion of unknown unknowns, one may now draw, notably, from the literatures on Knightian uncertainty (e.g., [5]), undescribable events (e.g., [25]), unforeseen contingencies (e.g., [8, 21]), unawareness (e.g., [36, 35]), and surprises [40, 27]. 
Yet, for someone who would primarily want to uncover ahead of time the concrete unknown unknowns she might be facing, the task would remain elusive. This paper will now seek to meet this demand. Similar endeavors have already been tried, and results obtained, in areas where unknown unknowns occur frequently, such as C-K Theory [18], TRIZ [20], and creativity support systems [14, 45] in innovation management; knowledge spaces [11] in learning; and elicitation methods [22, 32, 39] in engineering. As it turns out, Formal Concept Analysis (FCA), which I will be using here, provides an appropriate language, and especially structure, to build a framework which is both rigorous (it is grounded in lattice theory) and operational (its implementation requires only spreadsheets). The suggested scheme is sketched in the following Section 2. It is next developed rigorously and with more generality in Section 3. A fourth section contains concluding remarks. ## 2 An informal account Consider a 3x3 matrix with horizontal coordinates A, B, C, and vertical coordinates \(\alpha\), \(\beta\), \(\gamma\). For concreteness, the former might refer to different objects, items or events and the latter to various attributes, characteristics, properties or features. Table 1 shows the features respectively held by each specific item: object A, for instance, possesses attributes \(\alpha\), \(\beta\). In Formal Concept Analysis (FCA), such a matrix showing relationships between 'objects' (items, events, etc.) and 'attributes' (properties, features, etc.) is called a context. In practice, FCA users would of course face much more complicated types of contexts, with tables comprising dozens of rows and columns, mitigated relationships between objects and attributes, and (what is crucial for decision-making) value-weighted properties. But this simple example will suffice to convey our main points. The upshot of a discovery, experiment or invention would be the expanded matrix displayed in Table 2. 
Two new items - D and E - and an extra characteristic \(\delta\) were found. The initial objects A and B now bear attributes \(\delta\) and \(\alpha\) respectively, while D possesses properties \(\alpha\) and \(\delta\), and E exhibits features \(\beta\) and \(\gamma\). This matrix forms a context as well. \begin{table} \begin{tabular}{|c|c|c|c|} \hline ObjectsAttributes & \(\alpha\) & \(\beta\) & \(\gamma\) \\ \hline A & \(\times\) & \(\times\) & \\ \hline B & & & \(\times\) \\ \hline C & & \(\times\) & \(\times\) \\ \hline \end{tabular} \end{table} Table 1: The existing context \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline ObjectsAttributes & \(\alpha\) & \(\beta\) & \(\gamma\) & \(\delta\) \\ \hline A & \(\times\) & \(\times\) & & \(\times\) \\ \hline B & \(\times\) & & \(\times\) & \\ \hline C & & \(\times\) & \(\times\) & \\ \hline D & \(\times\) & & & \(\times\) \\ \hline E & & \(\times\) & \(\times\) & \\ \hline \end{tabular} \end{table} Table 2: The new context Back in time, the incremental component of Table 2 - i.e. the rows D, E, column \(\delta\) and the small x's - might have been impossible to describe, much less to anticipate even as random outcomes. They were, so to speak, _unknown unknowns_. Altogether, they make the context displayed in Table 3. Let us now see how someone could get a grasp of Table 3 using known knowns **only**, these known knowns being the data available from Table 1. In FCA, the primary mode of organizing the data of a context is through the use of 'concepts'. A concept is defined as a list of objects and attributes such that the mentioned objects are precisely the ones that share the listed attributes, and the mentioned attributes are precisely the ones shared by all the listed objects. 
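Stated executably, the definition can be checked mechanically. Below is a small sketch of ours in plain Python (`intent`, `extent` and `is_concept` are our illustrative names, and 'alpha', 'beta', 'gamma' spell out the Greek letters), applied to the existing context of Table 1:

```python
# Existing context of Table 1: each object mapped to its set of attributes.
K = {'A': {'alpha', 'beta'}, 'B': {'gamma'}, 'C': {'beta', 'gamma'}}
M = {'alpha', 'beta', 'gamma'}

def intent(objs, ctx, attrs=frozenset(M)):
    """Attributes shared by every object in objs."""
    out = set(attrs)
    for g in objs:
        out &= ctx[g]
    return out

def extent(attrs, ctx):
    """Objects possessing every attribute in attrs."""
    return {g for g, a in ctx.items() if set(attrs) <= a}

def is_concept(objs, attrs, ctx):
    """A concept lists exactly the objects sharing exactly these attributes."""
    return intent(objs, ctx) == set(attrs) and extent(attrs, ctx) == set(objs)

print(is_concept({'A', 'C'}, {'beta'}, K))      # True: A and C share exactly beta
print(is_concept({'A'}, {'alpha', 'beta'}, K))  # True
print(is_concept({'B'}, {'gamma'}, K))          # False: C also possesses gamma
```

Here (A,C; β) and (A; α,β) pass the test, while (B; γ) fails because object C also possesses γ.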
Examples of concepts in Table 3 are the objects B, D with their common attribute \(\alpha\), event E with features \(\beta\), \(\gamma\), items A, D with the shared property \(\delta\), and object D with attributes \(\alpha\), \(\delta\). FCA calls an incompletely specified concept, i.e. a list that misses some objects or attributes, a preconcept. The list (B;\(\alpha\)) is a preconcept of the concept (B,D;\(\alpha\)), for instance. Since its object B and attribute \(\alpha\) could already be seen in the existing context of Table 1, I shall refer to such a specific preconcept as a _seed_. Now, the relationship between B and \(\alpha\) is captured in Table 4, which is actually the negative picture of Table 1. This table constitutes a context as well, and _it is made only of data from Table 1_. Its concepts - which might be called 'anti-concepts', since they are the counterpart of the existing initial concepts - include (B,C;\(\alpha\)), (A;\(\gamma\)) and (B;\(\alpha\),\(\beta\)). This paper's main result is that _a seed_ - like (B;\(\alpha\)) - _will always be the preconcept of some anti-concept_ - namely, here, (B,C;\(\alpha\)) or (B;\(\alpha\),\(\beta\)). This fact has at least three ramifications. 
First, although one cannot say anything about which objects or which attributes will be discovered, the structure of the existing context bears some implications for the structure among discovered objects and attributes.1 This opens the door for establishing a systematic procedure to get some grasp at, and eventually uncover, unknown unknowns: \begin{table} \begin{tabular}{|c|c|c|c|} \hline ObjectsAttributes & \(\alpha\) & \(\beta\) & \(\gamma\) \\ \hline A & & & Z \\ \hline B & Z & Z & \\ \hline C & Z & & \\ \hline \end{tabular} \end{table} Table 4: The negative existing context \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline ObjectsAttributes & \(\alpha\) & \(\beta\) & \(\gamma\) & \(\delta\) \\ \hline A & & & & \(\times\) \\ \hline B & \(\times\) & & & \\ \hline C & & & & \\ \hline D & \(\times\) & & & \(\times\) \\ \hline E & & \(\times\) & \(\times\) & \\ \hline \end{tabular} \end{table} Table 3: The discovery context (i) Build the negative picture of the existing context; (ii) Examine the preconcepts of each anticoncept; (iii) If a seed is found, dig into it to uncover some concepts in the unknown unknown. Second, the latter procedure might be seen as an instance of abduction, a mode of reasoning associated with creativity and the generation of ideas [23, 42]. Unlike deduction, which draws the logical ramifications of previously given assertions, or induction, which infers general laws from the observation of recurrent facts, abduction looks for the best justification (which is here a concept of the discovery context) after hitting a singular event (a seed). Third, as the upcoming section will show, it allows for the deployment of a potentially powerful tool for data exploration and exploitation - namely, Galois connections. ## 3 Formal Developments Let's now present the mathematics which underlie the scheme outlined above. Subsection 3.1 revisits the basics of Formal Concept Analysis (FCA). Subsection 3.2 next introduces the notion of'revelation mappings'. 
Subsection 3.3, finally, develops the systematic procedure for thinking out of the box. The treatment is meant to be self-contained. Only set-theoretic arguments are used throughout. ### Basic FCA notions A _formal context_ is referred to as a triplet \(K=(G,M;R)\), where \(G\) is a set of _objects_, \(M\) a set of _attributes_ these objects may have, and \(R\) is a _relation_ between \(G\) and \(M\), i.e. a subset of the Cartesian product \(G\times M\) with the interpretation that \((g,m)\in R\), or \(gRn\), if object \(g\) has attribute \(m\). Denote \(\wp(G)\) and \(\wp(M)\) the respective power sets (or sets of all subsets) of \(G\) and \(M\). Set inclusion \(\subseteq\) provides a partial order on the elements of these sets.2 The following set-to-set functions \(I_{R}\) and \(E_{R}\) defined as Footnote 2: A set \(Q\) is a _partially ordered set_ (or _poset_) if there is a relation \(\leq\) on \(Q\) (called a _partial order_) such that: (i) for \(q\in Q\), \(q\leq q\) (reflexivity property); (ii) for \(q_{1}\), \(q_{2}\in Q\), \(q_{1}\leq q_{2}\) and \(q_{2}\leq q_{1}\) implies \(q_{1}=q_{2}\) (antisymmetry); for \(q_{1}\), \(q_{2}\), \(q_{3}\in Q\), \(q_{1}\leq q_{2}\) and \(q_{2}\leq q_{3}\) implies \(q_{1}\leq q_{3}\) (transitivity). \[\mbox{for }\,\,\mbox{S} \subseteq G,\,\,I_{R}(S)=\{m\in M:gRm\,\,\mbox{ for all }\,\,\mbox{g}\in S\}\] \[\mbox{for }\,\,\mbox{T} \subseteq M,\,\,\,E_{R}(T)=\{g\in G:gRm\,\,\mbox{ for all }\,\,\mbox{m}\in T\}\] are called the _Birkhoff Operators_ for \(G\) and \(M\) respectively. For a set of objects \(S\), \(I_{R}(S)\) - the _intent_ of \(S\) - gives all the attributes in \(T\) which these objects have in common. For a given set of attributes \(T\), \(E_{R}(T)\) - the _extent_ of \(T\) - gives all the objects in \(S\) that share these attributes. In the context displayed in Table 1, \(I_{R}(\mbox{A,C})=\{\beta\}\) and \(E_{R}(\alpha,\beta)=\{\mbox{A}\}\). 
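A minimal sketch of the two operators (our own code, with `I_R` and `E_R` as illustrative names and Greek letters spelled out) reproduces these examples, and also verifies the Section 2 claim that the seed (B; α) is a preconcept both of the anti-concept (B,C; α) in the negative context of Table 4 and of the concept (B,D; α) in the discovery context of Table 3:

```python
K      = {'A': {'alpha', 'beta'}, 'B': {'gamma'}, 'C': {'beta', 'gamma'}}  # Table 1
M      = {'alpha', 'beta', 'gamma'}
K_neg  = {g: M - attrs for g, attrs in K.items()}                          # Table 4
K_star = {'A': {'delta'}, 'B': {'alpha'}, 'C': set(),                      # Table 3
          'D': {'alpha', 'delta'}, 'E': {'beta', 'gamma'}}

def I_R(S, ctx=K, attrs=frozenset(M)):
    """Intent: attributes common to all objects in S."""
    out = set(attrs)
    for g in S:
        out &= ctx[g]
    return out

def E_R(T, ctx=K):
    """Extent: objects that share all attributes in T."""
    return {g for g, a in ctx.items() if set(T) <= a}

print(I_R({'A', 'C'}))          # {'beta'}
print(E_R({'alpha', 'beta'}))   # {'A'}

# A pair (P; U) is a preconcept of a context whenever P is a subset of E_R(U):
print({'B'} <= E_R({'alpha'}, K_neg))   # True: seed of the anti-concept (B,C; alpha)
print({'B'} <= E_R({'alpha'}, K_star))  # True: seed of the concept (B,D; alpha)
```

Building `K_neg` by complementation is step (i) of the Section 2 procedure; the subset tests are steps (ii) and (iii).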
A well-known property of the Birkhoff Operators is that of _duality_: knowing \(I_{R}(\cdot)\) completely determines \(E_{R}(\cdot)\), and vice-versa, specifying \(E_{R}(\cdot)\) also defines \(I_{R}(\cdot)\). A _formal concept_ in the context \(K=(G,M;R)\) is now a pair of sets \((Q;V)\), with \(Q\subseteq G\) and \(V\subseteq M\), such that \(I_{R}(Q)=V\) and \(E_{R}(V)=Q\). The extent of a concept \((Q;V)\) is thus \(Q\), while its intent is \(V\).3 A _preconcept_ in \(K\), finally, is a pair \((P;U)\), with \(P\subseteq G\) and \(U\subseteq M\), such that \(P\subseteq E_{R}(U)\) or, equivalently, \(U\subseteq I_{R}(P)\). Preconcepts can be ordered as follows [9]: \((P;U)\sqsubseteq(P^{\prime};U^{\prime})\), meaning that \((P;U)\) is _less extensive than_\((P^{\prime};U^{\prime})\), if \(P\subseteq P^{\prime}\) and \(U\subseteq U^{\prime}\). ### Revelation mappings From now on, \(K=(G,M;R)\) will denote the existing context and \(K^{+}=(G^{+},M^{+};R^{+})\) the new context after the previous unknown unknowns have been revealed.4 Footnote 4: Power sets, Birkhoff Operators, formal concepts, and preconcepts are similarly defined on their context of reference, be it \(K\), \(K^{+}\), or any other context. Let's then call _revelation mappings_ the functions \(\Phi:\wp(G^{+})\to\wp(M^{+})\), \(\Psi:\wp(M^{+})\to\wp(G^{+})\) such that5 Footnote 5: Let’s agree that \(I_{R}(g)=\varnothing\) when \(g\notin G\), and \(E_{R}(m)=\varnothing\) when \(m\notin M\). \[\mbox{for S}^{+} \subseteq G^{+},\ \Phi(S^{+})=I_{R^{+}}(S^{+})\ \setminus\ \underset{g\in S^{+}}{\cup}I_{R}(g)\] \[\mbox{for T}^{+} \subseteq M^{+},\ \Psi(T^{+})=E_{R^{+}}(T^{+})\ \setminus\ \underset{m\in T^{+}}{\cup}E_{R}(m)\] If one takes a set \(S\subseteq G\) of objects from the existing context \(K\), \(\Phi(S)\) delivers the set of attributes (old or new) in \(M^{+}\) which are newly associated with these objects. In Table 2, for instance, \(\Phi(\mbox{B,E})=\varnothing\) and \(\Phi(\mbox{A})=\{\delta\}\). 
Similarly, for a subset of initial attributes \(T\subseteq M\), \(\Psi(T)\) gives all (and only) the initial or new objects that now possess these attributes. For example, \(\Psi(\gamma,\delta)=\varnothing\) but \(\Psi(\beta,\gamma)=\{\mbox{E}\}\). As for the Birkhoff operators, there is a _duality property_ between \(\Phi(\cdot)\) and \(\Psi(\cdot)\): each one uniquely characterizes the other. These mappings also hold additional features which are spelled out in the upcoming propositions. First, say that a function \(\pi:X\to Y\) between two sets \(X\) and \(Y\), partially ordered by \(\leq\) and \(\preceq\) respectively, is _antitone_ (or order-reversing) if, for \(p_{1},p_{2}\in X\), \(p_{1}\leq p_{2}\) implies \(\pi(p_{2})\preceq\pi(p_{1})\). A first statement is now at hand. Proposition 1: The revelation mappings \(\Phi\) and \(\Psi\) are antitone. Proof: First, consider \(\Phi\). Take two sets \(S^{+}_{1}\), \(S^{+}_{2}\in\wp(G^{+})\) such that \(S^{+}_{1}\subseteq S^{+}_{2}\); we must show that \(\Phi(S^{+}_{2})\subseteq\Phi(S^{+}_{1})\). If \(m\in\Phi(S^{+}_{2})\), then \(m\in I_{R^{+}}(S^{+}_{2})\) so \(gR^{+}m\) for all \(g\in S^{+}_{2}\). Since \(S^{+}_{1}\subseteq S^{+}_{2}\), we have that \(gR^{+}m\) for all \(g\in S^{+}_{1}\), hence \(m\in I_{R^{+}}(S^{+}_{1})\). Now, if \(m\notin M\), \(m\notin I_{R}(g)\) for any \(g\in S^{+}_{1}\); it follows that \(m\in I_{R^{+}}(S^{+}_{1})\)\(\setminus\ \underset{g\in S^{+}_{1}}{\cup}I_{R}(g)=\Phi(S^{+}_{1})\). Suppose, alternatively, that \(m\in M\). Since \(m\in\Phi(S^{+}_{2})\), it must be the case that \(not(gRm)\) for all \(g\in S^{+}_{2}\), hence \(not(gRm)\) as well for all \(g\in S^{+}_{1}\) since \(S^{+}_{1}\subseteq S^{+}_{2}\); it follows again that \(m\in\Phi(S^{+}_{1})\). This shows that \(\Phi(S^{+}_{2})\subseteq\Phi(S^{+}_{1})\). The same line of reasoning works for \(\Psi\) (as can be expected from duality). 
\(\blacksquare\) This property of revelation mappings means that, the more objects or attributes one starts with, the more demanding it is to find new relationships that fit them all. This intuitive result is also instrumental in deriving other important characteristics of revelation mappings. A key notion to introduce at this point is that of a _Galois connection_.6 Let \(X\) and \(Y\) be two sets partially ordered by \(\leq\) and \(\preceq\) respectively. A (antitone) _Galois connection_\((\pi,\theta)\) on \(X\) and \(Y\) is a pair of functions \(\pi:X\to Y\) and \(\theta:Y\to X\) such that the following equivalent properties are satisfied. (i) For each \(p\in X\), \(p\leq\theta\pi(p)\) and for each \(q\in Y\), \(q\preceq\pi\theta(q)\). (ii) For \(p\in X\) and \(q\in Y\), \(p\leq\theta(q)\) if and only if \(q\preceq\pi(p)\). It is well-known that the Birkhoff operators \((I_{R},E_{R})\), \((I_{R^{+}},E_{R^{+}})\) are antitone Galois connections on, respectively, the power sets \(\wp(G)\), \(\wp(M)\) and \(\wp(G^{+})\), \(\wp(M^{+})\) ordered by set inclusion (see, e.g., [15], p. 13-14). In this case, property (i) means that the attributes common to a given set of objects might be shared by more objects, while the objects that share a given set of attributes might have more attributes in common. Property (ii), on the other hand, says that some objects are among those sharing a given set of attributes if and only if these attributes are among those common to these objects. As it turns out, the pair of revelation mappings \((\Phi,\Psi)\) forms a Galois connection. Proposition 2: The pair of revelation mappings \((\Phi,\Psi)\) is a Galois connection on the power sets \(\wp(G^{+})\) and \(\wp(M^{+})\) partially ordered by inclusion. 
Proof: To see this, take two sets \(S^{+}\in\wp(G^{+})\) and \(T^{+}\in\wp(M^{+})\), and notice that \[S^{+} \subseteq \Psi(T^{+})\] \[\mbox{if and only if}\ \ \ \forall g \in S^{+},\,\forall m\in T^{+}\mbox{: }\ gR^{+}m\ \mbox{ and }\ not(gRm)\] \[\mbox{if and only if}\ \ \ \forall m \in T^{+},\,\forall g\in S^{+}\mbox{: }\ gR^{+}m\ \mbox{ and }\ not(gRm)\] \[\mbox{if and only if}\ \ \ T^{+} \subseteq \Phi(S^{+})\] \(\blacksquare\) Proposition 2 underlies a central result. Like any Galois connection ([15], p. 14), \((\Phi,\Psi)\) establishes a relation, noted \(R^{+}_{(\Phi,\Psi)}\), between the set of objects \(G^{+}\) and the set of attributes \(M^{+}\). This relation is defined as \[R^{+}_{(\Phi,\Psi)} = \left\{(g,m)\in G^{+}\times M^{+}\mid g\in\Psi(m)\right\}\] \[= \left\{(g,m)\in G^{+}\times M^{+}\mid m\in\Phi(g)\right\}\] We can show that \(R^{+}_{(\Phi,\Psi)}\) coincides with \(R^{+}\setminus R\), the set of all new relationships. Proposition 3: \(R^{+}_{(\Phi,\Psi)}=R^{+}\setminus R\). Proof: Observe that \((g,m)\in R^{+}_{(\Phi,\Psi)}\) if and only if \(gR^{+}m\) and \(not(gRm)\), if and only if \((g,m)\in R^{+}\setminus R\). \(\blacksquare\) ### Thinking out of the box From now on, let \(R^{+}_{(\Phi,\Psi)}=R^{+}\setminus R\) be referred to as \(R^{*}\). The latter relation defines another formal context, the _discovery_ context noted \(K^{*}=(G^{+},M^{+};R^{*})\), which is the context of the unknown unknowns. Can \(K^{*}\) be inferred from \(K\), at least partly? We will now see that the answer actually errs on the yes side. The ordered pair \((X;Y)\) with \(X\neq\varnothing\), \(Y\neq\varnothing\) is called a _seed_ in \(K\) for \(K^{*}\) if it is a preconcept in \(K^{*}\) while \(X\subseteq G\) and \(Y\subseteq M\). As the next statement confirms, the existence of a seed is guaranteed when the existing context harbors at least one new relationship between the original objects and attributes. 
Proposition 4: If \(R^{*}\cap(G\times M)\neq\varnothing\), then there is at least one seed in \(K\) for \(K^{*}\). Proof: The assumption implies that there is at least one concept \((Q;V)\) in \(K^{*}\) such that \(Q\cap G\neq\varnothing\) and \(V\cap M\neq\varnothing\). Since \(Q\cap G\subseteq Q=I_{R^{*}}(V)\subseteq I_{R^{*}}(V\cap M)\) and \(V\cap M\subseteq V=E_{R^{*}}(Q)\subseteq E_{R^{*}}(Q\cap G)\), the pair \((Q\cap G;V\cap M)\) is a preconcept in \(K^{*}\). \(\blacksquare\) As suggested in Section 2, looking for seeds might be a reasonable first step to uncover unknown unknowns. The major reason is that, as we will now demonstrate, _it is possible to characterize the location of seeds_. First, according to the following proposition, a seed must combine objects and attributes which are a priori unrelated. Proposition 5: No preconcept (a fortiori concept) in the existing context \(K\) can be a seed for the discovery context \(K^{*}\). Proof: Let \((P;U)\) be a preconcept in \(K\). By definition, \(\Phi(P)=I_{R^{+}}(P)\setminus\underset{g\in P}{\cup}I_{R}(g)\). But \(U\subseteq I_{R}(P)=\underset{g\in P}{\cap}I_{R}(g)\subseteq\underset{g\in P} {\cup}I_{R}(g)\). It follows that \(U\nsubseteq\Phi(P)\), hence \((P;U)\) is not a preconcept in \(K^{*}\). This result tells us something about how not to look for novelties. A corollary is that a seed in \(K\) for \(K^{*}\) must be a pair \((P;U)\), with \(P\subseteq G\) and \(U\subseteq M\), such that \(P\cap(\underset{m\in U}{\cup}E_{R}(m))=\varnothing\) and \(U\cap(\underset{g\in P}{\cup}I_{R}(g))=\varnothing\). This suggests working with the negative of the existing context \(K\), noted \(\overline{K}=(G,M;\overline{R})\), where the relation \(\overline{R}=G\times M\setminus R\) refers to the _reverse relation_\(g\overline{R}m\) which holds when object \(g\)_does not_ have attribute \(m\). 
The next (key, and somewhat surprising) proposition shows that \(\overline{K}\) - which can be obtained using **only** the initial data - is the appropriate 'outbox' in which mining for unknown unknowns might begin. Proposition 6: A seed is a preconcept of the negative existing context \(\overline{K}\). Proof: Let \((P;U)\) be a seed for \(K^{*}\). Then \(U\subseteq\Phi(P)\cap M=M\cap I_{R^{+}}(P)\setminus\underset{g\in P}{\cup}I_ {R}(g)\subseteq M\setminus\underset{g\in P}{\cup}I_{R}(g)\) \(=\underset{g\in P}{\cap}(I_{R}(g))^{c}=I_{\overline{R}}(P)\). \(\blacksquare\) Seeds for the discovery context \(K^{*}\) - which comprises a priori unknown relationships between objects in \(G\) and attributes in \(M\) - thus happen to point, not only at concepts in \(K^{*}\), but also at the concepts of the negative existing context \(\overline{K}\). This suggests the procedure already outlined in Section 2: * Take the negative context \(\overline{K}\) of \(K\); * Consider a concept in \(\overline{K}\) (i.e. an anti-concept); * Examine the latter's preconcepts; * If one of these preconcepts brings out a new relationship between its objects and attributes, then a seed has been found which anticipates some concepts in the discovery context \(K^{*}\). Whether this scheme can be fruitful in practice remains to be seen. One hurdle could be computational complexity (see the concluding remarks). Interestingly, however, Propositions 5 and 6 suggest that concepts in the negative existing context \(\overline{K}\) can be constructed using the mappings \(\ \widetilde{\Phi}:\wp(G)\to\wp(M)\) and \(\widetilde{\Psi}:\wp(M)\to\wp(G)\) defined as \[\text{for}\ \ \text{S} \subseteq G,\ \widetilde{\Phi}(S)=M\ \setminus\ \underset{g\in S}{\cup}I_{R}(g)\] \[\text{for}\ \ \text{T} \subseteq M,\ \widetilde{\Psi}(T)=G\ \setminus\ \underset{m\in T}{\cup}E_{R}(m)\] respectively. 
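The seed-hunting procedure outlined above can be prototyped on small finite contexts by brute force. The sketch below uses a hypothetical context (the data is illustrative, not taken from the paper): it builds the negative context \(\overline{K}\), enumerates its concepts via the closure \(I_{\overline{R}}\circ E_{\overline{R}}\), collects their non-empty preconcepts as candidate seeds, and finally checks which candidates are genuine seeds once a (here invented) revealed relation \(R^{+}\) is available.

```python
# Prototype of the seed search on a small HYPOTHETICAL context.
# Steps: (1) negative context K-bar; (2) concepts of K-bar; (3) their
# preconcepts as candidate seeds; (4) check candidates against R* = R+ \ R.
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def intent(objs, M, rel):
    return {m for m in M if all((g, m) in rel for g in objs)}

def extent(attrs, G, rel):
    return {g for g in G if all((g, m) in rel for m in attrs)}

# Existing context K = (G, M; R)
G, M = {"A", "B", "C"}, {"x", "y", "z"}
R = {("A", "x"), ("A", "y"), ("B", "y"), ("C", "z")}

# Step 1: the negative context with the reverse relation
Rbar = {(g, m) for g in G for m in M} - R

# Steps 2-3: concepts of K-bar, then their non-empty preconcepts
concepts = {(frozenset(extent(U, G, Rbar)),
             frozenset(intent(extent(U, G, Rbar), M, Rbar))) for U in powerset(M)}
candidates = {(frozenset(P), frozenset(U))
              for (Q, V) in concepts
              for P in powerset(Q) for U in powerset(V) if P and U}

# Step 4: suppose revelation yields R+ (hypothetical); a candidate (P; U) is
# a seed exactly when every pair in P x U lies in R* = R+ \ R.
Rplus = R | {("B", "x")}
Rstar = Rplus - R
seeds = {(P, U) for (P, U) in candidates
         if all((g, m) in Rstar for g in P for m in U)}
print(sorted((sorted(P), sorted(U)) for P, U in seeds))
```

By construction every candidate is a preconcept of \(\overline{K}\), in line with Proposition 6, so no object–attribute pair already related in \(K\) can appear in a seed.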
Comparing these expressions with the ones defining the revelation mappings above, the functions \(\widetilde{\Phi}\) and \(\widetilde{\Psi}\) can be seen as approximations of \(\Phi\) and \(\Psi\). Whether closer approximations (in a sense to be made precise) can be found, which would then provide a better grasp of unknown unknowns, would be a valuable research topic.

## 4 Concluding remarks

This paper has put forward a new framework and approach for handling unknown unknowns. The scheme has rigorous foundations in lattice theory. It is also widely applicable, since it can incorporate various kinds of data - quantitative and qualitative, objective and subjective, financial and non-financial. And it seems to be user-friendly, boiling down to using only spreadsheets. At this stage, in addition to the extensions suggested at the end of the previous section, others could include the following: First, on a technical note, listing all the concepts of a formal context is generally burdensome.7 Yet, the search for seeds requires this exercise. Research and development on how to identify concepts in a given context is very much ongoing. Several algorithms and software tools already exist: many (mentioned in [38], for instance) are subject to a patent but others - GALICIA and JALABA, for example - can be freely downloaded. Two promising trends are to take full advantage of negative information (i.e. the information contained in the negative existing context \(\overline{K}\)), as in [34] or [28], and to assign weights to attributes, as in [4]. Footnote 7: An upper bound on the number of concepts in the context K = (G,M;R) is \(\frac{3}{2}2^{\sqrt{|R|+1}}-1\). See [16], p. 94. Second, the above derivation made minimal assumptions about the use of a priori knowledge, ignoring issues of landscape and timing, and forbidding the use of probabilities.
In practice, however, one might be able to tap into probabilistic beliefs based on science, predictive models, or sound experience in order to gauge the plausibility of new relationships between objects and attributes. This would enhance the search for seeds, and hence the prospecting for unknown unknowns.
2304.13343
Enhancing Large Language Model with Self-Controlled Memory Framework
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, in this paper, we propose the Self-Controlled Memory (SCM) framework to enhance the ability of LLMs to maintain long-term memory and recall relevant information. Our SCM framework comprises three key components: an LLM-based agent serving as the backbone of the framework, a memory stream storing agent memories, and a memory controller updating memories and determining when and how to utilize memories from memory stream. Additionally, the proposed SCM is able to process ultra-long texts without any modification or fine-tuning, which can integrate with any instruction following LLMs in a plug-and-play paradigm. Furthermore, we annotate a dataset to evaluate the effectiveness of SCM for handling lengthy inputs. The annotated dataset covers three tasks: long-term dialogues, book summarization, and meeting summarization. Experimental results demonstrate that our method achieves better retrieval recall and generates more informative responses compared to competitive baselines in long-term dialogues. (https://github.com/wbbeyourself/SCM4LLMs)
Bing Wang, Xinnian Liang, Jian Yang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li
2023-04-26T07:25:31Z
http://arxiv.org/abs/2304.13343v3
Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System

###### Abstract

Large-scale Language Models (LLMs) are constrained by their inability to process lengthy inputs. To address this limitation, we propose the Self-Controlled Memory (SCM) system to unleash infinite-length input capacity for large-scale language models. Our SCM system is composed of three key modules: the language model agent, the memory stream, and the memory controller. The language model agent iteratively processes ultra-long inputs and stores all historical information in the memory stream. The memory controller provides the agent with both long-term memory (archived memory) and short-term memory (flash memory) to generate precise and coherent responses. The controller determines which memories from the archived memory should be activated and how to incorporate them into the model input. Our SCM system can be integrated with any LLM to enable it to process ultra-long texts without any modification or fine-tuning. Experimental results show that our SCM system enables LLMs, which are not optimized for multi-turn dialogue, to achieve multi-turn dialogue capabilities comparable to ChatGPT, and to outperform ChatGPT in scenarios involving ultra-long document summarization or long-term conversations. Additionally, we will supply a test set covering common long-text input scenarios for evaluating the abilities of LLMs in processing long documents.12

Footnote 1: Work in progress.

Footnote 2: [https://github.com/wbbeyourself/SCM4LLMs](https://github.com/wbbeyourself/SCM4LLMs)

## 1 Introduction

Recently, Large-scale Language Models (LLMs) have attracted significant attention due to their remarkable performance in a diverse range of tasks Brown et al. (2020); Zeng et al. (2023); Ouyang et al. (2022); Thoppilan et al. (2022); Workshop et al. (2023); Muennighoff et al. (2022).
The strong foundational capability of LLMs, achieved through large-scale pre-training on massive text corpora (e.g., In-Context Learning Brown et al. (2020) and Chain-of-Thought prompting Wei et al. (2022a,b), among others), is a contributing factor to their success. Instruction tuning Raffel et al. (2020); Wei et al. (2022); Chung et al. (2022) helps LLMs comprehend natural language task descriptions, while Reinforcement Learning with Human Feedback (RLHF) Schulman et al. (2017); Stiennon et al. (2020); Bai et al. (2022) aligns generated text with human preferences. The combined capabilities of LLMs have effectively shattered the boundaries between natural language processing tasks, leading to limitless possibilities in the application and research directions of LLMs. Large Language Models (LLMs) offer numerous advantages, but their utility is hindered by two main factors: the maximum input length and the computational complexity of self-attention during the pre-training phase (Wang et al., 2020). Although some models (Press et al., 2022; OpenAI, 2022) are capable of processing long inputs, they may still struggle to capture crucial contextual information in exceptionally lengthy texts. As demonstrated in Figure 1, even ChatGPT3 can miss out on essential context from preceding text because of the accumulation of historical noise, which refers to irrelevant or outdated information that can hinder comprehension.

Figure 1: An example comparing ChatGPT and our SCM system. The orange text indicates the user's current question, bold text denotes related historical information, red text indicates an erroneous response from ChatGPT, and green text denotes the correct response from our system.

Footnote 3: In this study, we utilize OpenAI _gpt-3.5-turbo-0301_.

To address this limitation, we present the Self-Controlled Memory (SCM) system, which enables Large Language Models (LLMs) to process text of infinite length without any modification or additional training.
The input is partitioned into segments and fed to the LLM as observations (inputs). The SCM expands the LLM with a long-term memory (archived memory), a short-term memory (flash memory), and a memory controller. The archived memory preserves all historical information, while the flash memory captures real-time memory information from previous rounds. The memory controller determines when and how to introduce archived information, allowing the LLM to efficiently handle ultra-long text without sacrificing any essential information. To evaluate the performance of our system, we integrate the SCM with non-dialogue-optimized LLMs and successfully emulate ChatGPT. Our findings indicate that our system outperforms ChatGPT in handling ultra-long inputs or conversations. For summarization tasks, we generate a hierarchical summary of the entire archived memory until the summary length meets the user's specifications. By incorporating information from preceding text into local summaries within the memory, our approach preserves the correlations among the original content, in contrast to the conventional approach of directly generating a hierarchical summary of the entire text. Furthermore, our work is still in progress, and we plan to release a comprehensive evaluation dataset designed for long-text tasks, along with standardized human evaluations to assess the effectiveness of different methods.

## 2 Related Work

**Large-scale Language Models.** Large-scale Language Models (LLMs) are language models trained on massive amounts of text data, using the Transformer (Vaswani et al., 2017) architecture as their foundation. The earliest Transformer-based pre-trained language model was GPT-1 (Radford et al., 2018). Subsequently, GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020) were developed with gradually increasing parameter sizes.
GPT-3 has the largest scale, with 175B parameters, along with emergent abilities (Wei et al., 2022, 2020), which has attracted the attention of both academia and industry. Since then, many LLMs have emerged, including LaMDA (Thoppilan et al., 2022), PaLM (Chowdhery et al., 2022), OPT (Zhang et al., 2022), LLaMA (Touvron et al., 2023), BLOOM (Workshop et al., 2023), Galactica (Taylor et al., 2022), and Pangu (Zeng et al., 2021; Ren et al., 2023). One of the most notable works in this line of research, which has attracted widespread industry attention and can be considered a milestone towards Artificial General Intelligence (AGI), is ChatGPT (OpenAI, 2022), which is based on InstructGPT (Ouyang et al., 2022) and optimized for multi-turn dialogue ability. ChatGPT has achieved remarkable performance and has blurred the boundaries between NLP tasks. However, current LLMs, including ChatGPT, face significant limitations when processing tasks involving extremely long inputs.

**Long Text Sequence Processing.** Handling long text sequences has been a persistent challenge in natural language processing tasks. This problem has become even more prominent with the advent of pre-training and LLMs, as the fixed input length used during pre-training, and the high cost of expanding it at that stage, limit the ability to process longer inputs. Existing solutions primarily involve replacing the attention structure during pre-training to reduce computational costs while expanding the pre-training sequence length (Beltagy et al., 2020; Zaheer et al., 2021; Guo et al., 2022; Phang et al., 2022; Dong et al., 2023). An alternative approach (Press et al., 2022) uses special positional encodings during pre-training to enable the model to learn relative positions and handle longer input texts during inference. However, the generalizability of these methods and their impact on downstream tasks remain uncertain. In the field of long-text summarization, there are many effective methods.
Hierarchical or iterative methods have been used by Wu et al. (2021); Zhang et al. (2022); Cao and Wang (2022) to handle long texts by decomposing a complex problem into multiple sub-problems. However, these methods fail to capture the relationships among the sub-problems.

## 3 Methodology

The Self-Controlled Memory (SCM) system proposed in this paper aims to give large-scale language models (LLMs) the capability to store long-term memories, allowing them to process lengthy inputs and retain information over multiple interactions with the user.

### System Overview

In this section, we introduce the workflow of our proposed SCM system. As illustrated in Figure 2, our SCM system comprises three modules: a language model agent, a memory stream, and a memory controller. The three modules work together to process lengthy documents and provide more accurate and coherent responses. Our system workflow consists of six explicit steps, which are presented as follows:

_1. Input Acquisition:_ The agent receives an observation in turn \(T\) (i.e., ultra-long document input or a user question), either through direct input or from an external source.

_2. Memory Activation:_ Based on the current observation, the memory controller determines whether it is necessary to activate memory for the current user input. If memory activation is warranted, relevant memories (for details on the memory structure, refer to § 3.2) are retrieved by executing steps 3 and 4. Otherwise, the process moves directly to step 5. § 3.3.1 provides a comprehensive explanation of the control flow of the memory controller.

_3. Memory Retrieval:_ In this step, we utilize the observation as a query to identify related memories. The rank score of each memory is computed along two dimensions: relevance and recency. With respect to relevance, we evaluate how similar the content of the memory is to the observation.
With respect to recency, we consider the time elapsed since the memory was last accessed. Subsequently, we retain the top K-ranked memories.

_4. Memory Reorganization:_ In this step, the controller determines whether to use the original memory directly or its summarized form. If the summarized memory is chosen, the original memory will be compressed; § 3.3.2 provides a detailed explanation of the state compression process. The system then combines the retrieved memories in a structured manner to serve as background information for response generation.

_5. Input Fusion:_ In this step, we carefully design a prompt that fuses the restructured memory with the present observation to serve as the model's input. A thorough description is given in § 3.4.

_6. Response Generation:_ The model generates a response based on the previous step's result and incorporates the current interaction, including observation and response, into the memory stream. Please refer to § 3.4 for further details.

Figure 2: The workflow of our proposed Self-Controlled Memory (SCM) system, where numbers 1-6 represent the sequential process of one iteration with new observation #T.

### Memory Stream

This section provides an overview of the internal structure of the memory stream. The memory stream stores all historical memory items in a designated location, the archived memory center, which can achieve high-speed access through cache storage and access tools such as Redis or Pinecone4. Each memory item consists of an interaction index, an observation, a system response, and an interaction embedding that captures the semantics of the interaction. In addition, the Activation Memory stores the retrieved memory set, and the Flash Memory holds the memory of turn \(T-1\).
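The memory item structure just described, together with the rank-based retrieval of § 3.3.1 below (rank = recency + relevance), can be sketched as follows. All class and function names, and the exponential-decay recency score, are our own assumptions; embeddings are plain stand-in vectors rather than actual embedding-model outputs.

```python
# Minimal sketch of the memory stream and rank-based retrieval
# (rank_score = recency_score + relevance_score). The decay-based recency
# score and all names are assumptions, not the paper's exact formulation.
from dataclasses import dataclass
import math

@dataclass
class MemoryItem:
    index: int          # interaction (turn) index
    observation: str    # user input of the turn
    response: str       # system response of the turn
    embedding: list     # semantic embedding of the interaction

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(stream, query_embedding, current_turn, k=3, decay=0.99):
    """Return the top-k memories by recency + relevance."""
    def rank(item):
        recency = decay ** (current_turn - item.index)   # assumed decay form
        relevance = cosine(query_embedding, item.embedding)
        return recency + relevance
    return sorted(stream, key=rank, reverse=True)[:k]

stream = [
    MemoryItem(0, "I want to lose weight", "Great goal!", [1.0, 0.0]),
    MemoryItem(1, "Tell me a joke", "Here is one ...", [0.0, 1.0]),
    MemoryItem(2, "How is my diet going?", "On track.", [0.9, 0.1]),
]
top = retrieve(stream, [1.0, 0.0], current_turn=3, k=2)
print([m.index for m in top])   # -> [2, 0]
```

A recent, on-topic memory outranks both an older on-topic one and a recent off-topic one, which is the behavior the two-factor score is designed to produce.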
Footnote 4: Pinecone: [https://www.pinecone.io/](https://www.pinecone.io/)

### Memory Controller

This section discusses the reasons for using the memory controller and its workflow, as illustrated in Figure 3. There are three fundamental reasons. First, not all observations (i.e., user inputs or instructions) require access to historical memory. For example, the instruction "Tell me a joke" does not require retrieving the user's past memories, whereas an input such as "Do you remember the conclusion we made last week on the fitness diets" does. Second, the number of memory items can be enormous, ranging from hundreds to thousands or even tens of thousands, so a controller is needed to retrieve and filter the memory. Third, the input length of the model is limited, and a controller is needed to choose between using the full text of a memory or its summary, as the original text can be long and may exceed the maximum input length of the model. The next two subsections present the details of the controller's workflow and the state compression implementation, respectively.

#### 3.3.1 Memory Controller Workflow

The core of the controller's process control is to ask the agent two questions:

1. Is it necessary to use memory to accurately answer when executing user commands?
2. Can user commands be executed normally using only the summary of memory?

The prompt for the first question is shown in Figure 4, while the prompt for the second question is shown in Figure 5. Other language versions of the prompts can be found in § A.1. If the controller determines that historical memory is necessary, memory retrieval is carried out. While retrieving memories, we use the current observation (i.e., the user instruction) as a query and evaluate each memory's rank score based on two factors: recency and relevance.
Recency highly prioritizes memory items accessed in the state of interest. The relevance factor assigns a higher score to memory items that are related to the current observation. In our implementation, we create an embedding vector for the text description of every memory through the use of a language model5. The cosine similarity between the embedding vector of the memory and that of the query observation is calculated to determine relevance. The rank score of each memory is the sum of its recency and relevance scores: \(rank\_score=recency\_score+relevance\_score\). Depending on the length limit, we designate the top \(k\) memories with the highest rank scores as activated memories, where \(k\) varies between 3 and 10.

Figure 3: Workflow of the Memory Controller.

Figure 4: English prompt for the necessity of using memory.

Figure 5: English prompt for whether or not to use the summary of memory.

Footnote 5: We used the OpenAI embedding model _text-embedding-ada-002_.

If the controller determines that a summary suffices for the normal execution of the instruction, the currently extracted memories are summarized. The details of memory summarization are given in the following subsection.

#### 3.3.2 Memory Summarization

Memory summarization is a crucial aspect in scenarios such as document summarization, where a single interaction or dialogue turn can have a token length exceeding 3000. It enables stacking multiple memories into the activated memory section. Figure 6 shows the English prompt specifically designed for memory summarization in individual interactions (i.e., dialogue tasks). Other language versions of the prompt can be found in § A.3.

### Response Generation

As mentioned in the introduction, SCM allows non-dialogue-optimized LLMs to emulate ChatGPT. This necessitates providing explicit instructions to prompt LLMs to use activated and flash memory to generate the desired response.
Figure 7 shows an English prompt intended for extremely long multi-interaction dialogues, with other language versions included in § A.2.

## 4 Experiments

### Settings

Our framework is preliminarily evaluated in two scenarios: ultra-long dialogues and ultra-long document summarization. We conduct experiments to answer three research questions (RQs).

* **RQ1**. Can the SCM system compete with or even outperform ChatGPT within a specific token limit?
* **RQ2**. Can the SCM system scale to provide accurate responses to users' questions that relate to historical contexts dating back hundreds or even thousands of turns?
* **RQ3**. Can SCM demonstrate generalization to other scenarios, including long document summarization?

The following experiment evaluates the performance of the _text-davinci-003_ model without dialogue optimization in comparison to the _ChatGPT-Turbo_ model. The evaluation focuses on the tasks of ultra-long dialogues and ultra-long document summarization.

Figure 6: Prompt for dialogue memory summarization.

Figure 7: English prompt for ultra-long dialogue generation.

### Qualitative Study

**RQ1**. Can the SCM system compete with or even outperform ChatGPT within a specific token limit? **Yes**. The example in Figure 1 spans 4000 tokens, in which the user asks about hobbies that were discussed with the agent more than 100 turns earlier. The SCM system provides an accurate response to the query, demonstrating its memory-enhanced capabilities. In contrast, ChatGPT appears to be distracted by a considerable amount of irrelevant historical data.

**RQ2**. Can the SCM system scale to provide accurate responses to users' questions that relate to historical contexts dating back hundreds or even thousands of turns? **Yes**. The example presented in Figure 8 illustrates an ultra-long dialogue comprising over 500 turns.
At the outset, the user states that his goal is to lose weight and that he intends to start a running regime. Subsequently, the user and the model converse daily about progress towards the weight-loss goal, among other conversation topics. After ten days, the length of the dialogue reaches 10,000 tokens. The user then asks the model "Do you remember my fir". Our SCM system accurately responds to this question.

Figure 8: Ultra-long dialogue example.

**RQ3**. Can SCM demonstrate generalization to other scenarios, including long document summarization? **Yes**. Figure 9 illustrates a summary of an extremely long document, a report on the unveiling of GPT-4 by OpenAI. Documents exceeding 4,000 characters pose a challenge for conventional models, which must split the document, summarize the parts individually, and then combine the results; this approach can lose the dependency relationships between paragraphs. Our framework instead uses an iterative summarization procedure: while summarizing each paragraph, it draws on earlier relevant summary memories to generate more precise summaries. Ultimately, the framework applies a divide-and-conquer strategy to produce the final document summary, which is comprehensive because it utilizes information from each document block. Furthermore, our iterative summary paradigm includes a memory-enhancement feature that allows topic-specific summaries to be generated by integrating a question-asking methodology during single-block summarization. For instance, if a user poses a question such as "Please provide a summary of the technical details and evaluation findings of GPT4 in image processing", the model will access prior summary memories and extract the relevant content. We will continue to improve this aspect in the future.
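The iterative summarization flow described above can be sketched as follows. The `summarize` function is a trivial truncating placeholder for an LLM call, and the chunking and context-retrieval choices are our assumptions rather than the paper's exact prompts.

```python
# Sketch: iterative summarization with a divide-and-conquer reduction.
# `summarize` is a stub for an LLM call; a real system would send `context`
# and `text` to a model with a summarization prompt instead of truncating.

def summarize(text, context=""):
    merged = (context + " " + text).strip()
    return merged[:60]          # stub: an LLM would compress, not truncate

def iterative_summary(chunks, final_limit=80):
    memory = []                                  # summaries of earlier chunks
    for chunk in chunks:
        context = " ".join(memory[-2:])          # reuse recent relevant summaries
        memory.append(summarize(chunk, context))
    final = " ".join(memory)
    while len(final) > final_limit:              # divide-and-conquer reduction
        memory = [summarize(" ".join(memory[i:i + 2]))
                  for i in range(0, len(memory), 2)]
        final = " ".join(memory)
    return final

chunks = ["GPT-4 release report. " * 5, "Technical details. " * 5, "Evaluation results. " * 5]
print(iterative_summary(chunks))
```

Because each chunk's summary is conditioned on earlier summaries, dependencies between paragraphs can survive the split, which is the point of the iterative step; the final reduction then merges block summaries pairwise until the length constraint is met.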
## 5 Limitations and Risks

**Limitations.** A lack of appropriate datasets for evaluating the handling of extremely lengthy texts has meant that our model has so far been validated only through manual verification. This method, however, is inadequate for evaluating different scenarios comprehensively and objectively. Therefore, we aim to construct a dedicated test set that incorporates the key indicators essential for processing long texts in diverse settings, accompanied by a manual evaluation standard to enable a more equitable comparison with related methods. Moreover, we will assess the efficacy of our system on more open-source models that possess single-turn instruction comprehension capability.

**Risks.** Our system can attach to any LLM, which may be prone to factual errors, hallucinations, toxic language, and malicious responses. Consequently, we restrict the usage of our system to academic research purposes for now.

## 6 Conclusion and Future Work

In this paper, we propose a Self-Controlled Memory (SCM) system that extends the input length of any LLM without limit and effectively captures useful information from the full history. The method requires no training or modification of models and is therefore widely applicable. We validated its effectiveness through manual evaluation of ChatGPT and of the _Text-DaVinci-003_ model running within our system, demonstrating superior performance in certain long-text scenarios compared to ChatGPT. Our future work will focus on releasing a comprehensive test set with manual evaluation criteria, and on testing our system with the various open-source models currently available.
Figure 9: Ultra-long iterative and hierarchical summarization example.
2306.02437
Data Quality in Imitation Learning
In supervised learning, the question of data quality and curation has been over-shadowed in recent years by increasingly more powerful and expressive models that can ingest internet-scale data. However, in offline learning for robotics, we simply lack internet scale data, and so high quality datasets are a necessity. This is especially true in imitation learning (IL), a sample efficient paradigm for robot learning using expert demonstrations. Policies learned through IL suffer from state distribution shift at test time due to compounding errors in action prediction, which leads to unseen states that the policy cannot recover from. Instead of designing new algorithms to address distribution shift, an alternative perspective is to develop new ways of assessing and curating datasets. There is growing evidence that the same IL algorithms can have substantially different performance across different datasets. This calls for a formalism for defining metrics of "data quality" that can further be leveraged for data curation. In this work, we take the first step toward formalizing data quality for imitation learning through the lens of distribution shift: a high quality dataset encourages the policy to stay in distribution at test time. We propose two fundamental properties that shape the quality of a dataset: i) action divergence: the mismatch between the expert and learned policy at certain states; and ii) transition diversity: the noise present in the system for a given state and action. We investigate the combined effect of these two key properties in imitation learning theoretically, and we empirically analyze models trained on a variety of different data sources. We show that state diversity is not always beneficial, and we demonstrate how action divergence and transition diversity interact in practice.
Suneel Belkhale, Yuchen Cui, Dorsa Sadigh
2023-06-04T18:48:32Z
http://arxiv.org/abs/2306.02437v1
# Data Quality in Imitation Learning ###### Abstract In supervised learning, the question of data quality and curation has been overshadowed in recent years by increasingly more powerful and expressive models that can ingest internet-scale data. However, in offline learning for robotics, we simply lack internet scale data, and so high quality datasets are a necessity. This is especially true in imitation learning (IL), a sample efficient paradigm for robot learning using expert demonstrations. Policies learned through IL suffer from state distribution shift at test time due to compounding errors in action prediction, which leads to unseen states that the policy cannot recover from. Instead of designing new algorithms to address distribution shift, an alternative perspective is to develop new ways of assessing and curating datasets. There is growing evidence that the same IL algorithms can have substantially different performance across different datasets. This calls for a formalism for defining metrics of "data quality" that can further be leveraged for data curation. In this work, we take the first step toward formalizing data quality for imitation learning through the lens of distribution shift: a high quality dataset encourages the policy to stay in distribution at test time. We propose two fundamental properties that shape the quality of a dataset: i) _action divergence:_ the mismatch between the expert and learned policy at certain states; and ii) _transition diversity:_ the noise present in the system for a given state and action. We investigate the combined effect of these two key properties in imitation learning theoretically, and we empirically analyze models trained on a variety of different data sources. We show that state diversity is not always beneficial, and we demonstrate how action divergence and transition diversity interact in practice. 
## 1 Introduction Supervised learning methods have made large strides in recent years in computer vision (CV), natural language processing (NLP), and human-level game playing [18; 26; 42; 8; 49; 13; 43]. These domains have benefited from large and complex models trained on massive internet-scale datasets. Despite the undeniable power of these models, biases present in their large training datasets can lead to unexpected or undesirable behavior. For example, foundation models such as GPT-3 trained on uncurated datasets have exhibited instances of racist behavior, such as associating Muslims with violence [1; 5]. Thus, offline data curation is immensely important for both safety and cost-effectiveness, and it is gaining in prominence in the training of foundation models [22; 10; 39]. Data curation is even more important in the field of robotics, where internet-scale data is not readily available and real-world datasets are small and uncurated. Noise or biases present in the data can lead to dangerous situations in many robotics tasks, for example injuring a person or damaging equipment. In such scenarios, deciding which data to collect and how best to collect it is especially important [36; 33; 4]. Of course, the quality of data depends on the algorithm that uses the data. A common paradigm in robot learning from offline datasets is imitation learning (IL), a data-driven, sample-efficient framework for learning robot policies by mimicking expert demonstrations. However, when learning from offline data using IL, estimating data quality becomes especially difficult, since the "test set" the robot is evaluated on is an entirely new data distribution due to compounding errors incurred by the model, i.e., action prediction errors take the model to unseen states. This phenomenon is known as the distribution shift problem, and prior work has viewed and addressed it from several angles [45; 48]. 
Broadly, prior work addresses distribution shift either by taking an _algorithm-centric_ approach to account for biases in the dataset, or by directly modifying the dataset collection process in a _data-centric_ manner. Algorithm-centric approaches learn robust policies by imposing task-specific assumptions [23; 28; 25], acquiring additional environment data [21; 40], or leveraging inductive biases in representing states [51; 30; 37; 12] and actions [50; 47; 12]. What these algorithm-centric works overlook is that changing the data can be as or more effective for policy learning than changing the algorithm. In prior data-centric methods, the goal is usually to maximize _state diversity_ through shared control [45; 44; 16; 27; 31; 35; 46], noise injection [33], or active queries [15; 6; 41; 29]. By focusing only on state coverage, these works miss the role that actions (i.e., the expert) play in the quality of data. A more complete understanding of data quality that integrates both state and action quality would not only improve performance but also save countless hours of data collection. To better understand the role of both states and actions in the data for learning good policies, consider a single state transition -- the three core factors that affect distribution shift are: the policy action distribution, the stochasticity of transitions, and the previous state distribution. Note that the previous states are also just a function of the policy and dynamics back through time. Through this we extract two fundamental properties that can influence data quality in IL: _action divergence_ and _transition diversity_. Action divergence captures how different the learned policy action distribution is from the expert's actions at a given state: for example, if the expert is very consistent but the policy has high variance, then action divergence will be high and distribution shift is likely. 
Transition diversity captures the inherent variability of the environment at each state, which determines how the state distribution evolves for a given policy: for example, noisy dynamics in the low data regime can reduce overlap between the expert data and the learned policy distribution. Importantly, these factors can compound over time, thus greatly increasing the potential for distribution shift. While state coverage has been discussed in prior work, we are the first to formalize the roles of both state and action distributions in data quality, along with how they interact through time: this new data-focused framing leads to insights about how to curate data to learn more effective policies. To validate and study these properties empirically, we conduct two sets of experiments: (1) Data Noising, where we ablate these properties in robotics datasets to change the policy success rates, and (2) Data Measuring, where we observe human and machine generated datasets and approximately measure these properties in relationship to the policy success rates. Both experiments show that state diversity, a commonly used heuristic for quality in prior work, is not always correlated with success. Furthermore, we find that in several human generated datasets, less consistent actions at each state are often associated with decreases in policy performance. ## 2 Related Work Data quality research dates back to the early days of computing, and the full literature is beyond the scope of this paper. Existing literature in machine learning has proposed different dimensions of data quality including accuracy, completeness, consistency, timeliness, and accessibility [32; 14; 20]. We can draw similarities between action divergence and the concept of data consistency, and between state diversity and completeness. However, instantiating these metrics is non-trivial and domain-specific. 
Our work is closely related to the imitation learning literature that explicitly addresses the distribution shift problem in various ways. Prior work can be divided into two camps: ones that take an _algorithm-centric_ approach to account for biases in the dataset, and ones that employ _data-centric_ methods for modifying the data collection process. **Algorithm-centric.** Robust learning approaches, including model-based imitation learning methods, learn a dynamics model of the environment and can therefore plan to go back in distribution when the agent visits out-of-distribution states [21; 40]. Data augmentation is a useful post-processing technique when one can impose domain- or task-specific knowledge [23; 28; 25]. Prior work has also investigated how to explicitly learn from sub-optimal demonstrations by learning a weighting function over demonstrations, either to guide BC training [9; 3] or as a reward function for RL algorithms [7; 11]. These methods require additional information, such as rankings of trajectories [9] or demonstrator identity [3], or require collecting additional environment data [7; 11]. Recent efforts on demonstration retrieval augment limited task-specific demonstrations with past robot experiences [38]. A recent body of work builds better inductive biases into the model for learning robust state features, like pretrained vision or language representations [51; 30; 37; 12]. Some approaches modify the action representation to be more expressive to capture all the expert's actions, for example using Gaussian or mixture model action spaces [12; 36]. Others consider temporally abstracted action spaces like waypoints or action sequences to reduce the effective task length and thus mitigate compounding errors [50; 47; 12]. What these algorithm-centric works overlook is that changing the data can be as or more effective for policy learning than changing the algorithm. 
However, we still lack a comprehensive understanding of what properties in the data matter for imitation learning. **Data-centric.** In data-centric prior works that discuss data quality, the primary goal is often simply to maximize state diversity. A large body of research focuses on modifying the data collection process such that the expert experiences a diverse set of states. Ross et al. [45] proposed to iteratively collect _on-policy_ demonstration data with shared control between the expert and the robot, randomly switching with a gradually decreasing weight on the expert's input, such that the training data contains direct samples of the expert policy at states experienced by the learned policy. However, randomly switching the control can be unnatural for the human demonstrator and lead to noisier human control. To mitigate this issue, methods have been proposed to gate the control more effectively by evaluating metrics such as state uncertainty and novelty [16; 27]. Other methods allow the human to gate the control and correct the robot's behavior only when necessary [31; 35; 24; 46]. Closely related to our work, Laskey et al. [33] take an optimal control perspective and show that injecting control noise during data collection can give similar benefits as DAgger-style iterative methods. Active learning methods have also been developed to guide data collection towards more informative samples [15; 6; 41; 29]. By focusing only on state coverage, these works miss the role that actions (i.e., the expert) play in the quality of data. ## 3 Preliminaries In imitation learning (IL), we assume access to a dataset \(\mathcal{D}_{N}=\{\tau_{1},\ldots,\tau_{N}\}\) of \(N\) expert demonstrations. Each demonstration \(\tau_{i}\) consists of a sequence of state-action pairs of length \(T_{i}\), \(\tau_{i}=\{(s_{1},a_{1}),\ldots,(s_{T_{i}},a_{T_{i}})\}\), with states \(s\in\mathcal{S}\) and actions \(a\in\mathcal{A}\). 
Demonstrations are generated by sampling actions from the expert policy \(\pi_{E}\) through environment dynamics \(\rho(s^{\prime}|s,a)\). The objective of imitation learning is to learn a policy \(\pi_{\theta}:\mathcal{S}\rightarrow\mathcal{A}\) that maps states to actions. Standard behavioral cloning optimizes a supervised loss maximizing the likelihood of the state-action pairs in the dataset: \[\mathcal{L}(\theta)=-\frac{1}{|\mathcal{D}_{N}|}\sum_{(s,a)\in\mathcal{D}_{N}} \log\pi_{\theta}(a|s), \tag{1}\] which optimizes the following objective under finite samples from the expert: \[\mathbb{E}_{s\sim\rho_{\pi_{E}}(\cdot)}\left[D_{\text{KL}}\left(\pi_{E}( \cdot|s),\pi_{\theta}(\cdot|s)\right)\right]=-\mathbb{E}_{s\sim\rho_{\pi_{E}}( \cdot),\;a\sim\pi_{E}(\cdot|s)}\left[\log\pi_{\theta}(a|s)\right]+C \tag{2}\] The \(C\) term here captures the entropy of the expert state-action distribution, which is constant with respect to \(\theta\) and thus does not affect the optimality of \(\theta\). \(\rho_{\pi_{E}}(s)\) is the state visitation distribution of the expert policy, defined for any policy \(\pi\) as follows: \[\rho_{\pi}(s)=\frac{1}{H}\sum_{t=1}^{H}\rho_{\pi}^{t}(s) \tag{3}\] \[\rho_{\pi}^{t}(s^{\prime})=\int_{s,a}\rho_{\pi}^{t}(s,a,s^{\prime})\,ds\;da=\int_{s,a}\pi(a|s)\rho(s^{\prime}|s,a)\rho_{\pi}^{t-1}(s)\,ds\;da \tag{4}\] ### Distribution Shift in IL Behavioral cloning methods as in Eq. (2) often assume that the dataset distribution is a good approximation of the true expert policy distribution. In most applications, however, the learned policy is bound to experience novel states that were not part of the training data, due to stochasticity of the environment dynamics and the learned policy. Herein lies the fundamental challenge with imitation learning, i.e., state _distribution shift_ between training and test time. Consider the training sample \((s,a,s^{\prime})\) at timestep \(t\) in demonstration \(\tau_{i}\). 
If the learned policy outputs \(\tilde{a}\sim\pi(\cdot|s)\) which has a small action error \(\epsilon=\tilde{a}-a\), the new state at the next time step will also deviate: \(\tilde{s}^{\prime}\sim\rho(s^{\prime}|s,a+\epsilon)\), which in turn affects the policy output at the next step. In practice, the change in next state can be highly disproportionate to \(||\epsilon||\), so small errors can quickly lead the policy out of the data distribution. Stochastic dynamics, for example system noise, can compound with the distribution shift caused by policy errors, and as we continue to execute for \(T-t\) steps, both these factors can compound over time to pull the policy out of distribution, often leading to task failure. We can address this distribution shift problem by minimizing the mismatch between state visitation distribution of a learned policy \(\pi\) and the state visitation distribution of an expert policy \(\pi_{E}\) under some \(f\)-divergence \(D_{f}\): \[J(\pi,\pi_{E})=D_{f}(\rho_{\pi}(s),\rho_{\pi_{E}}(s)) \tag{5}\] Next, we connect this distribution shift problem to the question of data quality. ### Formalizing Data Quality How can we define and measure the quality of a dataset \(\mathcal{D}\) in IL? In existing literature, data quality has been used interchangeably with either the proficiency level of the expert at the task or the coverage of state space. However, these notions of quality are loose and incomplete: for example, they do not explain what concretely makes an expert better or worse, nor do they consider how stochasticity in the transitions impacts state space coverage and thus downstream learning. Our goal is to more formally define a measure of quality so that we can then optimize these properties in our datasets. We posit that a complete notion of dataset quality is one that minimizes distribution shift in Eq. (5). This suggests that we cannot discuss data quality in isolation. 
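The compounding of small per-step action errors described in Section 3.1 can be made concrete with a toy sketch. The open-loop imitator, one-dimensional dynamics \(s^{\prime}=s+a\), and constant error \(\epsilon\) are deliberately simple illustrative assumptions, standing in for a policy that does not correct once it drifts off the expert's states:

```python
def rollout_deviation(eps, horizon=20, gain=0.5):
    """Deviation between an expert rollout and an open-loop imitator that
    replays the expert's actions with a small per-step error eps, under
    dynamics s' = s + a. Without corrective feedback, per-step errors
    accumulate, so the imitator drifts further out of distribution over time."""
    s_exp, s_pol = 0.0, 0.0
    deviations = []
    for _ in range(horizon):
        a_exp = gain * (1.0 - s_exp)  # expert's feedback action toward goal 1.0
        s_pol += a_exp + eps          # imitator replays the action with error
        s_exp += a_exp
        deviations.append(abs(s_pol - s_exp))
    return deviations
```

Each step adds another \(\epsilon\) of drift, so the deviation grows linearly with the horizon here; a policy that behaved well off-distribution would instead keep the deviation bounded.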
The notion of data quality is heavily tied to the algorithm \(A\) that produces the learned policy \(\pi_{A}\), as well as to the expert policy \(\pi_{E}\). We thus formalize the quality of a dataset \(\mathcal{D}_{N}\), conditioned on the expert policy \(\pi_{E}\) that generates it along with a policy learning algorithm \(A\), as the negative distribution shift of the learned policy under some \(f\)-divergence \(D_{f}\): \[Q(\mathcal{D}_{N};\pi_{E},A)=-D_{f}\left(\rho_{\pi_{A}}(s),\rho_{\pi_{E}}(s) \right),\text{ where }\pi_{A}=A(\mathcal{D}_{N}) \tag{6}\] Here, \(\pi_{A}\) is the policy learned by algorithm \(A\) using dataset \(\mathcal{D}_{N}\) of size \(N\), which is generated by demonstrator \(\pi_{E}\). Note that quality is affected by several components: the choice of demonstrator, the choice of dataset size, and the algorithm. The choice of expert policy \(\pi_{E}\) changes the trajectories in \(\mathcal{D}_{N}\) (which in turn affects policy learning) along with the desired state distribution \(\rho_{\pi_{E}}(s)\). The dataset size controls the amount of information about the expert state distribution present in the dataset: note that \(\pi_{A}\) should match \(\rho_{\pi_{E}}(s)\), but \(A\) learns only from \(\mathcal{D}_{N}\), not from the full \(\rho_{\pi_{E}}(s)\). The algorithm controls how the data is processed to produce the final policy \(\pi_{A}\) and thus the visited distribution \(\rho_{\pi_{A}}(s)\). To optimize this notion of quality, we might toggle any one of these factors. Prior work has studied algorithm modifications at length [36; 51; 4], but few works study how \(\pi_{E}\) and the dataset \(\mathcal{D}_{N}\) should be altered to perform best for any given algorithm \(A\). We refer to this as _data curation_. 
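Eq. (6) can be estimated from sampled rollouts. A minimal sketch, assuming visited states are discretized into bins, \(D_f\) is the KL divergence, and add-one smoothing keeps the estimate finite off-support; all three choices are illustrative, not the paper's:

```python
import math
from collections import Counter

def empirical_quality(expert_states, policy_states, bin_size=0.5):
    """Estimate Q = -KL(rho_piA || rho_piE) from visited-state samples by
    comparing smoothed visitation histograms over discretized state bins."""
    def hist(states):
        c = Counter(tuple(math.floor(x / bin_size) for x in s) for s in states)
        return c, sum(c.values())
    pa, na = hist(policy_states)   # learned-policy visitation counts
    pe, ne = hist(expert_states)   # expert visitation counts
    support = set(pa) | set(pe)
    k = len(support)
    kl = 0.0
    for b in support:  # add-one smoothing so KL stays finite off-support
        p = (pa[b] + 1) / (na + k)
        q = (pe[b] + 1) / (ne + k)
        kl += p * math.log(p / q)
    return -kl
```

Identical visitation gives quality 0, and a policy that drifts away from the expert's states scores strictly lower.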
In practice, we often lack full control over the choice of \(\pi_{E}\), since this is usually a human demonstrator; still, we can often influence \(\pi_{E}\), for example through prompting or feedback [24], or we can curate the dataset derived from \(\pi_{E}\) through filtering [19]. ## 4 Properties of Data Quality To identify properties that affect data quality in imitation learning, we can study in detail how each state transition \((s,a,s^{\prime})\in\mathcal{D}\) in the dataset affects the dataset quality. As we defined in Eq. (6), the dataset quality relies on distribution shift given an expert policy \(\pi_{E}\) and an imitation learning policy \(\pi_{A}\). This in turn relies on the state visitation distribution \(\rho_{\pi_{A}}^{t}(s)\), which, based on Eq. (4), depends on three terms: \(\pi_{A}(a|s)\), \(\rho(s^{\prime}|s,a)\), and \(\rho_{\pi_{A}}^{t-1}(s)\). Intuitively, three clear factors emerge for managing distribution shift: how different the policy \(\pi_{A}(a|s)\) is from the expert \(\pi_{E}(a|s)\) - which we refer to as _action divergence_, how diverse the dynamics \(\rho(s^{\prime}|a,s)\) are - which we refer to as _transition diversity_, and how these factors interact over time to produce the past state visitation distribution \(\rho_{\pi_{A}}^{t-1}(s)\). Importantly, the past state visitation can be controlled through _action divergence_ and _transition diversity_ at previous time steps, so in this section, we will formalize these two properties and discuss their implications for data curation. ### Action Divergence Action divergence is a measure of distance between the learned policy and the expert policy, \(D_{f}(\pi_{A}(\cdot|s),\pi_{E}(\cdot|s))\). This can stem from biases in the algorithm or dataset, such as mismatched action representations or lack of samples. While this is typically viewed in prior work as a facet of the algorithm or dataset size, importantly, action divergence and thus data quality can also be influenced by the expert policy itself. 
For example, if the demonstrator knows the action representation used by the learning agent _a priori_, then the policy mismatch can be reduced by taking actions that are consistent with that action space. In Theorem 4.1, we illustrate the importance of action divergence by showing that our notion of data quality is bounded by the action divergence under the visited state distribution. Footnote 1: Proofs for all theorems and lemmas are in Appendix A. **Theorem 4.1**.: _Given a policy \(\pi_{A}\) and demonstrator \(\pi_{E}\) and environment horizon length \(H\), the distribution shift satisfies:_ \[D_{\text{KL}}(\rho_{\pi_{A}},\rho_{\pi_{E}})\leq\frac{1}{H}\sum_{t=0}^{H-1}(H-t )D_{\text{KL}}^{s\sim\rho_{\pi_{A}}^{t}}(\pi_{A}(\cdot|s),\pi_{E}(\cdot|s))\] When using KL divergence for the distribution shift in Eq. 5, we can see how the quality is bounded by the policy's action divergence from the expert policy under the visited state distribution, weighted at each step by the time-to-go. Many prior works have noted the compounding error problem [45; 44], and here too action divergence at earlier timesteps has an out-sized effect on the overall distribution shift. Thus, to optimize the quality of a dataset, we should reduce the aggregated policy mismatch across visited trajectories. **Optimality**: Many prior ideas around the "optimality" of a demonstration are intimately related to this action divergence property. Suboptimalities that are naturally present in demonstrations, like pauses or other action noise, have been a source of difficulty when learning imitation policies [34; 4]. These factors make it harder for the model to learn the expert action distributions, thus increasing action divergence. Suboptimality can also come from multi-modal expert policy distributions, and more expressive policy representations (i.e., reducing action divergence between the expert and the policy) have been shown to help [17; 12; 24]. 
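The right-hand side of Theorem 4.1 is straightforward to compute from per-step action divergences; a small sketch making the time-to-go weighting explicit (the per-step KL values are illustrative inputs):

```python
def shift_bound(per_step_kl):
    """Theorem 4.1 upper bound: (1/H) * sum_{t=0}^{H-1} (H - t) * KL_t.
    Early-step divergence (small t) receives the largest weight H - t."""
    H = len(per_step_kl)
    return sum((H - t) * kl for t, kl in enumerate(per_step_kl)) / H
```

For example, `shift_bound([0.3, 0.0, 0.0])` gives 0.3 while `shift_bound([0.0, 0.0, 0.3])` gives 0.1: the same total divergence incurred early yields a larger bound, matching the compounding-error intuition.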
The speed of demonstration is another common notion of optimality, which decreases total action divergence by reducing horizon length \(H\), provided it does not increase the per-time-step action divergence to compensate. **State Visitation**: Critically, the visitation distribution in Theorem 4.1 determines _where_ action divergence should be low, whereas in the standard BC objective in Eq. 2, the divergence is only minimized under samples from the expert distribution \(\rho_{\pi_{E}}^{t}\). To better understand how the visitation distribution evolves, we now analyze state transitions in greater detail. ### Transition Diversity Transition diversity encompasses the diversity of next state transitions seen at a given state for a certain policy. What role does transition diversity play in minimizing distribution shift? Intuitively, if we consider the upper bound in Theorem 4.1, the expert's state coverage should overlap as much as possible with the visited state distribution, but without increasing the action divergence. **Lemma 4.2**.: _Given a learned policy \(\pi_{A}\) and an expert policy \(\pi_{E}\), assume that the policy is learned such that when \(s\in\operatorname{supp}(\rho_{\pi_{E}}^{t})\), \(D_{\text{KL}}(\pi_{A}(\cdot|s),\pi_{E}(\cdot|s))\leq\beta\). Then:_ \[\mathbb{E}_{s\sim\rho_{\pi_{A}}^{t}}\left[D_{\text{KL}}(\pi_{A}(\cdot|s),\pi_{E}(\cdot|s))\right]\leq\mathbb{E}_{s\sim\rho_{\pi_{A}}^{t}}\left[\beta\,\mathbb{1}(s\in\operatorname{supp}(\rho_{\pi_{E}}^{t}))+\mathbb{1}(s\notin\operatorname{supp}(\rho_{\pi_{E}}^{t}))\,D_{\text{KL}}(\pi_{A}(\cdot|s),\pi_{E}(\cdot|s))\right]\] Lemma 4.2 shows that with a good learning algorithm (i.e., when \(\beta\) is small), it is important to maximize the overlap between the visited state distribution and the expert data (which minimizes the contribution of the unbounded right term), but as shown in Theorem 4.1 this should not come at the expense of increasing the action divergence of the policy. 
Rather than broadly maximizing state diversity, as prior works do, a more sensible approach is for the expert to maximize the two other factors that affect \(\rho_{\pi_{A}}^{t}\): _system noise_ and _initial state variance_ in the expert data. **Finite Data and Low Coverage**: While the above analysis holds for infinite data, maximizing transition diversity can lead to thinly spread coverage in the finite data regime. To analyze the finite data regime in isolation, we consider Gaussian system noise for simplicity. First, we formulate an empirical estimate of state "coverage" of the expert (i.e., the deviation from states in the expert dataset). We then show that higher system noise can cause the learned policy \(\pi_{A}\) to deviate more from states sampled in the dataset. **Definition 1**.: _For a given start state \(s\), define the next state coverage probability over \(N\) samples from \(\pi_{E}\) for tolerance \(\epsilon\) as \(P_{S}(s;N,\epsilon)\coloneqq P(\min_{i\in\{1\dots N\}}||s^{\prime}-s^{\prime}_{ \star,i}||_{\infty}\leq\epsilon)\), where \(\{s^{\prime}_{\star,i}\sim\rho^{t}_{\pi_{E}}(\cdot|s)\}\) is a set of \(N\) sampled next state transitions under the expert policy \(\pi_{E}\) starting at \(s\), and \(s^{\prime}\sim\rho^{t}_{\pi_{A}}(\cdot|s)\) is a sampled next state under the policy \(\pi_{A}\) starting at \(s\)._ We can think of \(P_{S}(s;N,\epsilon)\) as the probability, under potential datasets for \(\pi_{E}\), of seeing a next state at test time (under \(\pi_{A}\)) that is near the next states in the dataset, conditioned on a starting state \(s\) (i.e., coverage). This is related to a single step of distribution shift, but measured over a dataset rather than a distribution. Defining coverage as the distance to the dataset being below some threshold is reasonable for function approximators like neural networks [2]. 
**Theorem 4.3**.: _Given a policy \(\pi_{A}\) and demonstrator \(\pi_{E}\), assume that for state \(s\), if \(\rho^{t-1}_{\pi_{E}}(s)>0\), then \(\pi_{A}(a|s)=\pi_{E}(a|s)\). Assume that transitions are normally distributed with fixed and diagonal variance, \(\rho(s^{\prime}|s,a)=\mathcal{N}(\mu(s,a),\sigma^{2}I)\). Then the next state coverage probability is \(P_{S}(s;N,\epsilon)=1-(1-\operatorname{erf}(\frac{\epsilon}{2\sigma})^{d})^{N}\), where \(d\) is the dimensionality of the state._ In Theorem 4.3, we see that even under a policy that is perfect when in distribution, the probability of next state coverage decreases as the variance of system noise \(\sigma^{2}\) increases, for a fixed sampling budget \(N\). However, increasing \(N\) has a much stronger positive effect on coverage than decreases in \(\sigma\), suggesting that system noise is only an issue when the dataset size is sufficiently small. Furthermore, \(\epsilon\) here represents some simplistic form of the _generalization_ capacity of the policy, and we see that increasing \(\epsilon\) makes us more robust to proportional increases in \(\sigma\). **Finite Data and Good Coverage**: However, if we assume \(N\) is large enough to mitigate the coverage effect of system noise, are there any other effects of transition diversity in the dataset? The answer lies in the generalization properties of the algorithm, which, as modeled in Theorem 4.3, are fairly conservative (i.e., in the definition of \(P_{S}(s;N,\epsilon)\)). In Theorem A.1 in Appendix A, we use a looser generalization definition and relax the assumption that the learned policy is perfect. We show that as system noise increases, the resulting boost in sample coverage can actually replicate and overcome the effects of high learned policy noise, suggesting that more system noise can actually be beneficial for learning. 
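The closed form in Theorem 4.3 can be checked numerically. A sketch under the theorem's model: each dataset sample's deviation from the policy's next state is \(\mathcal{N}(0,2\sigma^{2})\) per dimension (the difference of two independent \(\mathcal{N}(0,\sigma^{2})\) draws), treated as independent across the \(N\) samples. The Monte Carlo check is an illustrative verification, not part of the paper:

```python
import math
import random

def coverage_prob(N, eps, sigma, d):
    """Closed form from Theorem 4.3: P_S = 1 - (1 - erf(eps/(2*sigma))**d)**N."""
    return 1.0 - (1.0 - math.erf(eps / (2.0 * sigma)) ** d) ** N

def coverage_prob_mc(N, eps, sigma, d, trials=20000, seed=0):
    """Monte Carlo estimate: a trial is 'covered' if any of N sampled
    deviations lies within eps of zero in every dimension (inf-norm ball)."""
    rng = random.Random(seed)
    tau = math.sqrt(2.0) * sigma  # std of the difference of two N(0, sigma^2)
    hits = 0
    for _ in range(trials):
        covered = any(
            all(abs(rng.gauss(0.0, tau)) <= eps for _ in range(d))
            for _ in range(N)
        )
        hits += covered
    return hits / trials
```

Holding \(N\) fixed, increasing \(\sigma\) lowers `coverage_prob`, while increasing \(N\) raises it, matching the discussion above.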
This finding that more system noise can be beneficial sheds light on results from prior work and our own experiments in Section 5 that show the benefits of collecting data in the presence of high system noise [33]. Plots of both coverage probabilities for different \(N\) are included in Appendix A. ### Implications for Data Curation We have shown how action divergence and transition diversity are both tied to distribution shift and thus data quality. Based on action divergence and transition diversity, we now examine downstream implications of these properties on what matters for _data curation_, where the goal is to collect and then select _high quality_ demonstrations in our dataset \(\mathcal{D}_{N}\) for good policy learning. **Action Consistency**: To minimize action divergence, the algorithm's action representation should _align_ with the expert's action distribution, for the given dataset size. One algorithmic solution is to improve the expressiveness of the policy action space so it can capture the exact actions at every state that was demonstrated. However, in practice, individual states might only be visited a few times in the dataset, and so if the entropy of the expert policy at those states is high, the learned policy will find it difficult to perfectly match the actions even for the most expressive action spaces. Instead, we argue that expert data should be curated to have more _consistent_ actions, e.g., reducing the entropy of the expert policy: \(\mathbb{E}_{s\sim\rho_{\pi_{A}}(\cdot)}[\mathcal{H}(\pi_{E}(\cdot|s))]\). Since knowing the visited state distribution is impossible beforehand, the best we can do is to encourage low entropy in the expert data distribution: \[\min_{\pi_{E}}\mathbb{E}_{s\sim\rho_{\pi_{E}}(\cdot)}[\mathcal{H}(\pi_{E}( \cdot|s))] \tag{7}\] **State Diversity**: As discussed previously, the state visitation of the policy depends on the trajectory action distribution, the initial state distribution, and the system noise distribution. 
Many prior works seek to improve the _state coverage_ of the dataset, using some metric similar to Definition 1 [6; 29]. However, the required state diversity is a function of the learned policy mismatch (action divergence) and the noise present in the system (transition diversity), and is thus a coarse representation of data quality. Uniformly increasing coverage over the state space is not necessarily a good thing -- as shown in Section 4.1, if state coverage comes at the expense of action consistency, the result might be worse than a policy trained on less state coverage but more consistent actions. Instead, we argue that we should optimize for system noise. **System Noise**: Although system noise at a given state cannot be controlled, experts can control system noise at the trajectory level (e.g., visiting states with more or less system noise). Should system noise be encouraged or discouraged in a dataset? Based on the discussion in Section 4.2, we hypothesize that when the dataset size is fixed, increasing the system noise can increase the _coverage_ of the expert and thus improve robustness in the learned policy (Theorem A.1), but only up to a certain point, when the data becomes too sparse to generalize (Theorem 4.3). Thus, in addition to consistent actions, we posit that expert demonstrators should encourage paths with high system entropy for learning better policies, such that the overall state entropy stays below some threshold \(\gamma\) to avoid coverage that is too sparse. \[\max_{\pi_{E}}\mathbb{E}_{s\sim\rho_{\pi_{E}}(\cdot),\,a\sim\pi_{E}(\cdot|s)}[\mathcal{H}(\rho(\cdot|a,s))]\quad s.t.\ \mathcal{H}(\rho_{\pi_{E}}(s))\leq\gamma \tag{8}\] **Horizon Length**: The length of trajectories in the expert data can also have a large effect on the expert and visited state distributions, and thus on the quality of the dataset. However, while horizon length certainly plays a role, like state diversity, it is a downstream effect of action divergence and transition diversity. 
Based on the previous analysis, what really matters is minimizing the _aggregated_ action divergence and transition diversity produced by the expert demonstrator across time. Horizon length alone only crudely measures this, but is often correlated in practice, as we show in Section 5.2. While prior work in data curation primarily aims to maximize state diversity alone, our analysis of data quality reveals several new properties that matter for data curation, such as action consistency, system noise, and their combined effect over time. As we demonstrate in the next section, understanding these properties and the inherent tradeoffs between them is vital for data curation in imitation learning.

## 5 Analysis

To empirically analyze how different data properties affect imitation learning, we conduct two sets of experiments. In the first set, we generate various-quality expert datasets by adding different types of noise to _scripted_ policies, i.e., policies that are hand-designed to be successful at the task. An example scripted policy compared to human demonstrations is shown in Fig. 1. We study both noise in the system -- considering transition diversity -- and noise added to the expert policy -- considering both transition diversity and action divergence -- as a function of dataset size. In the second set of experiments, we evaluate the empirical properties (see Appendix B) for real human-collected datasets.

### Data Noising

We train Behavior Cloning (BC) with data generated with system noise and policy noise in two environments: _PMObstacle_, a 2D environment where a point mass agent must reach a target point without hitting an obstacle in its way; and _Square_, a 3D environment where a robot arm must pick up a square nut and insert it onto a peg (shown in Fig. 1). In both, system and policy noise are Gaussian random variables added to the dynamics and scripted policy, respectively, at each time step, and BC uses an MLP architecture.
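The two noising schemes can be mimicked in a toy 1-D point-mass rollout; everything below (the dynamics, the scripted P-controller, the noise scales) is an illustrative assumption, not the paper's exact setup:

```python
import random

def rollout(policy, start, goal, sigma_s=0.0, sigma_p=0.0, horizon=50):
    """Collect one 1-D point-mass trajectory of (state, action) pairs.

    sigma_s: system noise, added to the *dynamics* after the action is
             recorded -- transition diversity without action divergence.
    sigma_p: policy noise, added to the action *before* it is recorded --
             the noise is baked into the labels the learner imitates.
    """
    s, traj = start, []
    for _ in range(horizon):
        a = policy(s, goal) + random.gauss(0.0, sigma_p)
        traj.append((s, a))                      # what BC trains on
        s = s + a + random.gauss(0.0, sigma_s)   # system noise is unrecorded
        if abs(s - goal) < 1e-2:
            break
    return traj

# Scripted expert: a saturated proportional controller toward the goal.
expert = lambda s, goal: max(-0.1, min(0.1, goal - s))

random.seed(0)
sys_noise_data = [rollout(expert, 0.0, 1.0, sigma_s=0.03) for _ in range(100)]
pol_noise_data = [rollout(expert, 0.0, 1.0, sigma_p=0.03) for _ in range(100)]
```

Both datasets visit a similarly widened set of states, but only the second corrupts the action labels: in the first, the recorded actions remain the expert's clean corrections issued from the perturbed states.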
Figure 1: **Case Study**: Trajectories and action variance for scripted (left two plots) compared to human demonstration data (right two plots). Even though the human data (right) has high state coverage, the action variance is high, leading to high action divergence, and vice versa.

Fig. 2 shows results in _PMObstacle_ (top row) and _Square_ (bottom row).

**State Diversity through Transition Diversity improves performance, up to a point**. The left plots in Fig. 2 (a)-(d) show policy success rates as we vary system noise (\(\sigma_{s}\)) (more noise for lighter color shades), where (a)(c) show the high-data regime and (b)(d) show the low-data regime. Higher system noise tends to improve policy performance in the high-data regime, but only up to a point for _Square_ -- after \(\sigma_{s}=0.3\) in the expert data (c), performance starts to drop off, which we hypothesize is due to the low coverage, based on the analysis of transition diversity in Section 4.2. In the low-data regime, the story is similar but even more exaggerated, with increasing system noise leading to comparable performance as the high-data regime. Once again for _Square_, increasing transition diversity (d) helps until the state coverage is too thin. See Table 2, Table 5, and Table 6 in Appendix C for the full sweep of system noise values in each environment.

**State Diversity at the cost of Action Divergence hurts performance**. Plots in Fig. 2 (e)-(h) show policy success rates for increasing the policy noise (\(\sigma_{p}\)) in the dataset, where (e)(g) show the high-data regime and (f)(h) show the low-data regime. Note that each value of policy noise yields the same expert state distribution as the corresponding amount of system noise.
Since this noise is zero-mean and the learned policy is deterministic, policy noise in the high-data regime (e)(g) is only moderately worse as compared to equivalent amounts of system noise (a)(c), suggesting that the action divergence is minor. In fact, due to the added state diversity, adding policy noise in the high-data regime helps up to a certain point. However, in the low-data regime (f)(h), performance is substantially worse compared to comparable system noise, since the policy cannot recover the unbiased expert policy from just a few noisy examples (note that in (c)(d), the x-axes are not aligned with those in (g)(h), and \(\sigma_{s}=0.2\) corresponds to \(\sigma_{p}=0.02\) as shown by the dotted line). This illustrates that state diversity coming from the noise in the expert policy can increase action divergence, and thus high state diversity is not universally desirable. See Table 3, Table 7, and Table 8 in Appendix C for the full sweep of policy noise values in each environment.

**Transition Diversity can counteract the effects of policy noise**. In the right two plots (i)(j) for _PMObstacle_ in Fig. 2, the dataset combines both system and policy noise. The system noise is fixed (\(\sigma_{s}=0.03\)) and the policy noise is varied in the same range as the middle two plots (just policy noise). Given the analysis in Section 4.2 and Theorem A.1 in Appendix A, we would expect that having some system noise could actually make the policy much more robust to policy noise, since the state coverage provided in the dataset can help learn a more robust policy (generalization). Indeed, we find that just by adding system noise, the learned policy becomes very robust to added policy noise in both the high (i) and low (j) regimes. This suggests that adding transition diversity can help mitigate the effects of action divergence incurred by noisy or sub-optimal experts.
See Table 4 in Appendix C for the full sweep of policy noise values for fixed system noise in each environment.

Figure 2: BC success rates in _PMObstacle_ (top row) for 1000 and 10 episodes of data, and in _Square_ (bottom row) for 200 and 50 episodes of data (error bars over 3 datasets). X-axis corresponds to injected Gaussian noise in the _dataset_ and each line corresponds to injected system noise (\(\sigma_{s}\)) during _evaluation_. **System Noise** (left): for large datasets (a)(c), higher system noise during evaluation decreases performance, but more system noise during training does the best. For small datasets (b)(d), we note a similar but exaggerated effect. **Policy Noise** (mid): for large datasets (e)(g), unlike system noise, more expert policy noise often hurts performance despite similar state coverage. For small datasets (f)(h), adding policy noise exaggerates this effect and produces large variance in performance. For _Square_, the dotted lines mark comparable values of noise in terms of state coverage. **System + Policy Noise** (i)(j): Adding some system noise can make the policy more robust to action divergence.

### Data Measuring

Next we empirically instantiate several metrics from Section 4.3 that capture various facets of data quality, and measure different datasets: (1) action variance (higher means less action consistency) measured through clustering nearby states, (2) horizon length \(H\) (higher means longer trajectories), and (3) state similarity (opposite of state diversity) measured as the average cluster size. See Appendix B for the exact empirical definitions. We leave out system noise since these human datasets were collected in deterministic environments. In Table 1, we consider single- and multi-human datasets from the _Square_ and _Can_ tasks from robomimic [36]. We see that higher action variance and horizon length are often accompanied by decreases in success rates.
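These three measurements can be instantiated in many ways; below is a minimal sketch under assumptions of our own (grid-cell state clustering, Euclidean actions), not the paper's exact Appendix-B definitions:

```python
from collections import defaultdict

def cell_of(state, cell=0.5):
    """Hypothetical state clustering: snap each state to a grid cell."""
    return tuple(round(x / cell) for x in state)

def data_metrics(trajectories, cell=0.5):
    """Return (action variance, mean horizon length, state similarity)
    for a list of trajectories of (state, action) pairs."""
    clusters = defaultdict(list)
    for traj in trajectories:
        for s, a in traj:
            clusters[cell_of(s, cell)].append(a)

    def action_var(actions):
        # mean per-dimension variance of the actions inside one cluster
        n, d = len(actions), len(actions[0])
        mean = [sum(a[i] for a in actions) / n for i in range(d)]
        return sum(sum((a[i] - mean[i]) ** 2 for a in actions) / n
                   for i in range(d)) / d

    variance = sum(action_var(a) for a in clusters.values()) / len(clusters)
    horizon = sum(len(t) for t in trajectories) / len(trajectories)
    similarity = sum(len(a) for a in clusters.values()) / len(clusters)
    return variance, horizon, similarity

# Two identical, consistent demos -> zero action variance:
demo = [((0.0, 0.0), (0.1, 0.0)), ((0.1, 0.0), (0.1, 0.0))]
av, h, sim = data_metrics([demo, demo])
print(av, h, sim)  # 0.0 2.0 4.0
```

Higher `similarity` (larger average cluster) corresponds to lower state diversity, matching the sign convention used in the text.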
The _Worse_ multi-human datasets have seemingly lower action variance but less state similarity, and yet qualitatively we observed highly multi-modal behaviors in these datasets (e.g., grasping different parts of the nut instead of the handle). We suspect that this type of multi-modality is not measured well by the single-time-step action variance metric, but rather requires a longer-term view of the effects of that action on the scene, as indicated in Theorem 4.1. Additionally, state diversity once again is not correlated with performance on the task. We emphasize that these metrics likely do not provide a complete picture of data quality, but they do provide us insights into why performance is low and how data collection can be improved in future datasets. Through future efforts from the community, we envision a comprehensive set of data metrics that practitioners can use to quickly evaluate the quality of datasets without needing to first train and evaluate models on those datasets.

## 6 Discussion

Data curation is incredibly important for maximizing the potential of any given algorithm, and especially so in imitation learning. To curate data, one needs to establish a formalism for assessing data quality. The predominant notions of data quality today are solely centered around maximizing _state diversity_ in a dataset, which ignore the quality of the expert's demonstrated actions and how these factors interplay. In this work, we propose a holistic view of data curation centered around minimizing distribution shift. We demonstrate that action divergence and transition diversity are crucial factors in data quality that can often be controlled or measured, and we draw several valuable insights from this, shedding light on effective strategies for curating expert datasets. We find that making actions more consistent tends to increase policy success.
Furthermore, we observe a fundamental tradeoff between state diversity and action divergence -- increasing state diversity can often come at the expense of action divergence. Instead of uniformly maximizing state diversity, we show that increasing transition diversity improves performance until the data coverage becomes too sparse. While the metrics presented in our experiments are often good measures of final performance, we find that comprehensively measuring data quality in practice for real world datasets can be quite challenging. Finally, we envision a broader set of metrics informed by our formalism for data quality that practitioners can use to curate datasets, and we believe our work is an important first step in formulating and evaluating these metrics.
2305.14705
Mixture-of-Experts Meets Instruction Tuning:A Winning Combination for Large Language Models
Sparse Mixture-of-Experts (MoE) is a neural architecture design that can be utilized to add learnable parameters to Large Language Models (LLMs) without increasing inference cost. Instruction tuning is a technique for training LLMs to follow instructions. We advocate combining these two approaches, as we find that MoE models benefit more from instruction tuning than dense models. In particular, we conduct empirical studies across three experimental setups: (i) Direct finetuning on individual downstream tasks devoid of instruction tuning; (ii) Instruction tuning followed by in-context few-shot or zero-shot generalization on downstream tasks; and (iii) Instruction tuning supplemented by further finetuning on individual downstream tasks. In the first scenario, MoE models overall underperform dense models of identical computational capacity. This narrative, however, dramatically changes with the introduction of instruction tuning (second and third scenario), used independently or in conjunction with task-specific finetuning. Our most powerful model, FLAN-MOE-32B, surpasses the performance of FLAN-PALM-62B on four benchmark tasks, while using only a third of the FLOPs. The advancements embodied by FLAN-MOE inspire a reevaluation of the design principles of large-scale, high-performance language models in the framework of task-agnostic learning.
Sheng Shen, Le Hou, Yanqi Zhou, Nan Du, Shayne Longpre, Jason Wei, Hyung Won Chung, Barret Zoph, William Fedus, Xinyun Chen, Tu Vu, Yuexin Wu, Wuyang Chen, Albert Webson, Yunxuan Li, Vincent Zhao, Hongkun Yu, Kurt Keutzer, Trevor Darrell, Denny Zhou
2023-05-24T04:22:26Z
http://arxiv.org/abs/2305.14705v2
# Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models

###### Abstract

Sparse Mixture-of-Experts (MoE) is a neural architecture design that can be utilized to add learnable parameters to Large Language Models (LLMs) without increasing inference cost. Instruction tuning is a technique for training LLMs to follow instructions. We advocate combining these two approaches, as we find that MoE models benefit more from instruction tuning than dense models. In particular, we conduct empirical studies across three experimental setups: (i) Direct finetuning on individual downstream tasks devoid of instruction tuning; (ii) Instruction tuning followed by in-context few-shot or zero-shot generalization on downstream tasks; and (iii) Instruction tuning supplemented by further finetuning on individual downstream tasks. In the first scenario, MoE models overall underperform dense models of identical computational capacity. This narrative, however, dramatically changes with the introduction of instruction tuning (second and third scenario), used independently or in conjunction with task-specific finetuning. Our most powerful model, Flan-MoE\({}_{\rm 32B}\), surpasses the performance of Flan-PaLM\({}_{\rm 62B}\) on four benchmark tasks, while using only a third of the FLOPs. The advancements embodied by Flan-MoE inspire a reevaluation of the design principles of large-scale, high-performance language models in the framework of task-agnostic learning.

## 1 Introduction

Recent years have witnessed remarkable advancements in the field of natural language processing (NLP), driven by the development of increasingly large and sophisticated deep learning models. Among these models, transformer-based language models [49] have emerged as the de facto standard for a wide range of NLP tasks, owing to their unparalleled capabilities in capturing complex linguistic patterns and generalizing across diverse contexts.
One particularly successful paradigm for training such models is instruction-tuning [44; 52; 4; 28; 34; 38], which enhances their performance on specific tasks by adapting their pre-trained representations to follow natural language instructions. While the benefits of Large Language Models (LLMs) are indisputable, their rapidly growing size and computational requirements pose significant challenges in terms of training efficiency, memory footprint, and deployment costs. Consequently, there is a pressing need for developing scalable techniques that can harness the power of these models without incurring prohibitive computational overheads. On the other hand, models with sparsely activated Mixture-of-Experts (MoE) layers significantly reduce the computational cost of LLMs. MoE models build upon the observation that language models can be decomposed into smaller, specialized sub-models, or "experts", that focus on distinct aspects of the input data, thereby enabling more efficient computation and resource allocation. However, we show that conventional task-specific finetuning of MoE models leads to suboptimal performance, often even worse than finetuning dense models with the same computational cost. One of the possible reasons is the discrepancy between general pretraining and task-specific finetuning. In this paper, we illuminate the pivotal role of instruction-tuning within the context of Mixture-of-Experts (MoE) models, specifically in terms of their successful scalability on downstream tasks. We demonstrate this through a two-fold analysis: Firstly, we expand on the known benefits of instruction-tuning for task-specific downstream finetuning [28], illustrating its significantly larger impact when applied to MoE models compared to their dense equivalents. Secondly, we emphasize the necessity of an instruction-tuning stage for MoE models [45; 10; 12; 23] to surpass the performance of dense models on downstream and held-out tasks.
Our unique amalgamation, Flan-MoE, is an instruction-tuned model built on the Flan mixture [4], which successfully harnesses the strengths of both instruction-tuning and the sparse MoE technique. Flan-MoE effectively and efficiently scales up language models, without necessitating a rise in computational resources or memory requirements. We subject our model, Flan-MoE, to a battery of tests across an array of tasks encompassing natural language understanding, reasoning, and question answering. Our evaluation framework consists of three distinct setups: (i) Direct finetuning of the model on individual downstream tasks; (ii) Instruction tuning succeeded by in-context, few-shot, or zero-shot generalization on downstream tasks; and (iii) Instruction tuning enhanced with subsequent finetuning on individual downstream tasks. The results spotlight Flan-MoE's marked superiority over its dense counterparts in the second and third settings. Notably, these advancements materialize without the need for augmented computational resources or memory requisites. Our top-tier model, in fact, manages to eclipse the performance of a Flan-PaLM equivalent, requiring only a third of the computational cost per token on four separate benchmarks.

To summarize, our contributions are as follows:

* We establish the critical role of instruction-tuning in the efficacy of MoE models:
  * We demonstrate that in the absence of instruction tuning, MoE models fall short in performance when compared to dense models on downstream tasks.
  * We highlight that when supplemented with instruction tuning, MoE models exceed the performance of dense models on downstream tasks, as well as on held-out zero-shot and few-shot tasks.
* We present a comprehensive series of experiments, offering a comparative analysis of the performance of diverse MoE models subjected to instruction-tuning.

## 2 Method

### Model Architecture

We leverage sparsely activated Mixture-of-Experts (MoE) [23; 12; 55] in Flan-MoE models.
Similar to the Switch Transformer [12], we replace the feed-forward component of every other Transformer layer with an MoE layer. Each MoE layer consists of a collection of independent feed-forward networks as the 'experts'. A gating function then uses a softmax activation function to model a probability distribution over these experts. This distribution indicates how well each expert is able to process the incoming input. Even though each MoE layer has many more parameters, the experts are sparsely activated. This means that for a given input token, only a limited subset of experts is used, giving the model more capacity while limiting computation. In our architecture, the subset size is either one or two depending on the routing strategy. Each MoE layer's learnable gating network is trained to use its input to activate the best two experts for each token of an input sequence. During inference, the learned gating network dynamically picks the two best experts for each token. For an MoE layer with \(E\) experts, this essentially provides a collection of \(O(E^{2})\) different combinations of feed-forward networks instead of one in the classic Transformer architecture, enabling greater computational flexibility. The final learned representation of a token will be the weighted combination of the outputs from the selected experts.

### Instruction Fine-tuning Recipe

We fine-tune Flan-MoE using the prefix language model objective on the FLAN collective dataset [4; 28]. Each Flan-MoE will inherit the auxiliary loss setting during pre-training. All the model parameters will be updated. We adapt the sequence length of each Flan-MoE to \(2,048\) for input and \(512\) for output based on the relative position embedding. The dropout rate is \(0.05\) and the expert dropout rate is \(0.2\). The learning rate is \(1e^{-4}\). The optimizer setting follows [4].

## 3 Experiment

We study Flan-MoE in the context of instruction-tuning.
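As a concrete recap of the layer described in Section 2.1, here is a toy, pure-Python sketch of top-2 gating over a bank of expert feed-forward maps; real implementations batch over tokens and add capacity limits and load-balancing auxiliary losses, none of which are shown here:

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

class MoELayer:
    """Top-2 token-choice MoE feed-forward layer (toy sketch).

    Each 'expert' is a single linear map d_model -> d_model; the gate is
    a linear map d_model -> n_experts. Only the top-2 experts run per token.
    """
    def __init__(self, d_model, n_experts):
        rnd = lambda: random.uniform(-0.1, 0.1)
        self.gate = [[rnd() for _ in range(d_model)] for _ in range(n_experts)]
        self.experts = [
            [[rnd() for _ in range(d_model)] for _ in range(d_model)]
            for _ in range(n_experts)
        ]

    def __call__(self, x):
        logits = [sum(w * xi for w, xi in zip(row, x)) for row in self.gate]
        probs = softmax(logits)
        top2 = sorted(range(len(probs)), key=lambda e: -probs[e])[:2]
        norm = sum(probs[e] for e in top2)  # renormalize over chosen experts
        out = [0.0] * len(x)
        for e in top2:                      # only 2 of n_experts are run
            y = [sum(w * xi for w, xi in zip(row, x))
                 for row in self.experts[e]]
            out = [o + (probs[e] / norm) * yi for o, yi in zip(out, y)]
        return out

layer = MoELayer(d_model=4, n_experts=8)
out = layer([1.0, 0.5, -0.3, 0.2])  # length-4 output from 2 of 8 experts
```

The output is the gate-weighted combination of the two selected experts, matching the "weighted combination of the outputs from the selected experts" described above.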
We first perform a controlled comparison of Flan-MoE to an equivalent "standard" dense encoder-decoder Transformer (T5), across a range of model sizes in Section 3.2. We subsequently demonstrate in Section 3.3 that scaling up our model, referred to as Flan-MoE, can attain remarkable performance levels. Our most extensive model, Flan-ST\({}_{\text{32B}}\), surpasses the performance of Flan-PaLM\({}_{\text{62B}}\) while utilizing less than 30% of FLOPs per token. We further ablate the various design decisions in the next section.

Table 1: Performance of dense and sparse models, with and without instruction tuning, across the MMLU, BBH, reasoning, and QA benchmarks.

| Model | FLOPs per token | Total # Params | MMLU Direct | MMLU CoT | BBH Direct | BBH CoT | Reasoning CoT | QA Direct | Norm. Avg. |
|---|---|---|---|---|---|---|---|---|---|
| T5-Small | 0.06G | 80M | 26.7 | 7.2 | 26.7 | 5.6 | 10.3 | 33.8 | 26.3 |
| Flan-T5-Small | 0.06G | 80M | 28.7 | 12.1 | 29.1 | 19.2 | 15.0 | 40.9 | 28.7 (+2.4) |
| T5-Base | 0.3G | 250M | 25.7 | 14.1 | 27.7 | 14.6 | 14.7 | 35.3 | 26.2 |
| Flan-T5-Base | 0.3G | 250M | 35.6 | 33.3 | 30.3 | 26.8 | 16.4 | 48.8 | 33.9 (+7.7) |
| T5-Large | 1.0G | 780M | 25.1 | 15.3 | 27.7 | 16.2 | 11.9 | 36.4 | 25.7 |
| Flan-T5-Large | 1.0G | 780M | 44.7 | 38.9 | 34.7 | 28.5 | 22.2 | 64.6 | 42.0 (+16.3) |
| T5-XL | 3.6G | 3B | 25.3 | 14.1 | 27.4 | 19.3 | 14.2 | 38.2 | 25.9 |
| Flan-T5-XL | 3.6G | 3B | 50.3 | 46.1 | 40.2 | 35.9 | 33.9 | 74.1 | 48.0 (+22.1) |
| T5-XXL | 13.9G | 11B | 26.1 | 19.1 | 29.5 | 19.3 | 21.4 | 47.4 | 27.7 |
| Flan-T5-XXL | 13.9G | 11B | 52.6 | 47.9 | 45.6 | 41.6 | 46.3 | 80.4 | 51.7 (+24.0) |
| PaLM | 12.6G | 8B | 24.3 | 24.1 | 30.8 | 30.1 | 24.9 | 47.6 | 27.1 |
| Flan-PaLM | 12.6G | 8B | 49.3 | 41.3 | 36.4 | 31.1 | 36.9 | 75.1 | 47.5 (+20.4) |
| PaLM | 91.6G | 62B | 55.1 | 49.0 | 37.4 | 43.0 | 50.6 | 70.4 | 51.0 |
| Flan-PaLM | 91.6G | 62B | 59.6 | 56.9 | 47.5 | 44.9 | 59.7 | 85.3 | 57.6 (+6.6) |
| PaLM | 847G | 540B | 71.3 | 62.9 | 49.1 | 63.7 | 72.6 | 86.0 | 66.2 |
| Flan-PaLM | 847G | 540B | 73.5 | 70.9 | 57.9 | 66.3 | 76.5 | 89.9 | 70.3 (+4.1) |
| Switch-Base | 0.3G | 3.5B | 28.3 | 13.6 | 0.1 | 1.4 | 5.2 | 35.8 | 20.2 |
| Flan-Switch-Base | 0.3G | 3.5B | 38.0 | 34.2 | 33.2 | 29.4 | 18.6 | 58.0 | 36.8 (+16.6) |
| Switch-Large | 1.0G | 26B | 24.0 | 23.1 | 0.2 | 7.2 | 12.4 | 33.7 | 17.7 |
| Flan-Switch-Large | 1.0G | 26B | 46.1 | 40.3 | 36.3 | 28.0 | 25.3 | 66.5 | 43.5 (+25.8) |
| Switch-XXL | 13.9G | 395B | 24.6 | 15.1 | 0.0 | 6.7 | 9.2 | 32.5 | 17.8 |
| Flan-Switch-XXL | 13.9G | 395B | 55.6 | 50.1 | 47.9 | 43.5 | 46.6 | 78.8 | 54.2 (+36.4) |
| GS-Small | 0.06G | 0.3B | 23.9 | 0.0 | 0.2 | 0.8 | 0.8 | 24.1 | 16.7 |
| Flan-GS-Small | 0.06G | 0.3B | 32.6 | 26.9 | 29.6 | 20.9 | 16.1 | 48.9 | 31.8 (+15.1) |
| GS-Base | 0.3G | 1.3B | 25.0 | 15.9 | 0.0 | 4.8 | 3.8 | 26.8 | 17.6 |
| Flan-GS-Base | 0.3G | 1.3B | 39.9 | 33.6 | 33.7 | 25.1 | 22.0 | 57.9 | 38.3 (+20.7) |
| GS-Large | 1.0G | 9.2B | 26.4 | 12.8 | 0.2 | 14.3 | 13.0 | 31.9 | 19.2 |
| Flan-GS-Large | 1.0G | 9.2B | 47.8 | 40.8 | 35.0 | 29.2 | 27.6 | 69.5 | 44.5 (+25.3) |
| GS-XL | 3.6G | 17.4B | 25.7 | 10.0 | 0.0 | 0.0 | 10.4 | 35.0 | 18.7 |
| Flan-GS-XL | 3.6G | 17.4B | 51.1 | 42.3 | 40.1 | 31.4 | 34.3 | 73.9 | 48.7 (+30.0) |
| EC-Small | 0.06G | 0.3B | 25.3 | 1.2 | 0.1 | 2.3 | 0.8 | 36.0 | 18.1 |
| Flan-EC-Small | 0.06G | 0.3B | 34.1 | 25.1 | 29.2 | 22.1 | 16.6 | 58.1 | 33.1 (+15.0) |
| EC-Base | 0.3G | 1.3B | 25.0 | 25.9 | 0.0 | 1.4 | 14.3 | 35.7 | 18.5 |
| Flan-EC-Base | 0.3G | 1.3B | 42.7 | 33.0 | 34.0 | 26.7 | 22.2 | 61.5 | 40.3 (+21.8) |
| EC-Large | 1.0G | 9.2B | 23.4 | 12.6 | 0.0 | 8.6 | 6.7 | 40.1 | 17.3 |
| Flan-EC-Large | 1.0G | 9.2B | 48.3 | 44.5 | 37.9 | 32.0 | 32.2 | 73.1 | 46.4 (+29.1) |
| EC-XL | 3.6G | 17.4B | 26.7 | 11.0 | 0.0 | 1.9 | 12.4 | 34.2 | 19.4 |
| Flan-EC-XL | 3.6G | 17.4B | 52.1 | 41.4 | 40.3 | 33.2 | 38.1 | 74.3 | 49.4 (+30.0) |
| ST-Base | 0.3G | 1.3B | 25.2 | 17.7 | 0.0 | 14.0 | 12.6 | 25.7 | 18.1 |
| Flan-ST-Base | 0.3G | 1.3B | 42.4 | 35.5 | 34.9 | 26.4 | 22.5 | 61.5 | 40.4 (+21.8) |

The BBH benchmark includes 23 challenging tasks from BIG-Bench [47]. The reasoning benchmark comprises four tasks: GSM8K [8] and SVAMP [40]/ASDIV [32] incorporate the grade school math word problems and the elementary-level math word problems, and StrategyQA [13] measures open-domain questions where the required reasoning steps are implicit in the question. The QA benchmark includes four QA tasks: the elementary AI2 science category in UnifiedQA [20], BoolQ [6], ARC-easy and ARC-challenge [7], which cover QA tasks in abstract, yes/no, and multiple-choice formats. For MMLU and BBH, we evaluate both the ability of directly predicting the answer via direct prompting, where the model directly gives the answer [4], as well as via chain-of-thought (CoT) prompting, where the model must provide a reasoning chain before giving the final answer [53]. For reasoning tasks, we only measure CoT prompting accuracy. For all benchmarks except QA we use the given few-shot exemplars, with the number of exemplars following prior work: five-shot for MMLU, three-shot for BBH, eight-shot for reasoning tasks, and zero-shot for QA. For a given model we also report a single "normalized average" metric, following the "normalized preferred metric" in BIG-Bench [47].
Our normalized average metric is the macro-average over four normalized scores: MMLU-Direct, BBH-Direct, Reasoning-CoT, and QA-Direct. Results for all tasks in each benchmark are reported in the Appendix.

### Controlled study across scales

We instruction finetune a range of Flan-MoE models at batch size 32 and sequence length 2048 for 200k steps. This matches the number of training examples used for Flan-T5 [4]. We re-finetune our own Flan-T5 variants for fair comparison.

Dense Model Size. Figure 2 shows the performance of each model (dense and sparse) against forward-pass FLOPs. The cost-performance Pareto frontier for Flan-MoE dominates the dense models by a wide margin, indicating that Flan-MoE offers strong improvements across all scales from small up to xxl. The effect is particularly large on zero-shot and few-shot MMLU-Direct, with absolute performance improvements of 7.1% on average. For challenging tasks in BBH-Direct, Flan-MoE offers a strong boost at small scales, while at larger scales the gains are more modest but still significant.

Figure 2: Average zero-shot performance of Flan-MoE models versus Flan-T5 dense models for similar effective FLOPs per token over the 57 MMLU tasks and 23 BBH tasks.

Expert Number. The performance of Flan-MoE models has been observed to scale with the number of experts included in the architecture, but it tends to saturate beyond a certain threshold. Initially, as the number of experts increases in Figure 4, the model benefits from a richer repertoire of specialized sub-networks, each capable of handling distinct tasks or aspects of the problem space. This diverse ensemble enables the MoE model to demonstrate enhanced adaptability and efficiency in processing complex tasks, leading to improved performance overall. However, as the number of experts continues to grow, the performance gains begin to diminish, eventually reaching a point of saturation for the base-sized model.
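The "normalized average" reported in Table 1 can be sketched as follows; the rescaling mirrors the spirit of BIG-Bench's normalized preferred metric, and the baseline values below are illustrative assumptions (e.g., 25 for 4-way multiple choice), not the paper's exact constants:

```python
def normalized_score(raw, random_baseline, max_score=100.0):
    """Rescale so that chance performance maps to 0 and a perfect score
    to 100, in the spirit of BIG-Bench's normalized preferred metric."""
    return 100.0 * (raw - random_baseline) / (max_score - random_baseline)

def normalized_average(scores, baselines):
    """Macro-average of normalized scores over the four benchmarks."""
    normed = [normalized_score(scores[k], baselines[k]) for k in scores]
    return sum(normed) / len(normed)

# Illustrative raw accuracies and assumed chance baselines:
scores = {"MMLU-Direct": 55.0, "BBH-Direct": 50.0,
          "Reasoning-CoT": 40.0, "QA-Direct": 75.0}
baselines = {"MMLU-Direct": 25.0, "BBH-Direct": 0.0,
             "Reasoning-CoT": 0.0, "QA-Direct": 25.0}
print(normalized_average(scores, baselines))  # about 49.2 for these inputs
```

Because the macro-average weights each benchmark equally, a model cannot inflate the aggregate by excelling on only one of the four suites.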
Routing Strategy. Routing strategy is an essential component of Mixture-of-Experts (MoE) models, playing a pivotal role in determining the effectiveness and efficiency of these models. The primary function of the routing strategy is to intelligently distribute input data among multiple specialized experts, each optimized for handling specific subsets of the input space. This distribution process is crucial for maximizing the utilization of the model's capacity while minimizing the risk of overfitting. An effective routing strategy not only ensures that the appropriate experts are selected for a given input, but also that resources are allocated optimally, leading to enhanced computational efficiency and faster training times. Consequently, there have been two trending strategies: token-choice [23], which lets each token select the top-\(K\) experts, and expert-choice [55], which lets each expert select the top-\(K\) tokens. We present a detailed study of how different routing decisions affect instruction fine-tuning performance in Figure 3 and Table 1, which include the checkpoints from Switch Transformer top-1 token-choice gating (Flan-Switch), GShard top-2 token-choice gating (Flan-GS), and expert-choice top-2 gating (Flan-EC) models pre-trained on the same GLaM [10] dataset. It is evident that activating more experts, as demonstrated by the comparison between the Flan-Switch and Flan-GS strategies, results in enhanced performance across all four benchmarks. Among these benchmarks, MMLU-Direct shows the most significant improvement, with an increase from 38.0% to 39.9% for base/large-sized models. Although the gains at the extra-large scale are more modest, they remain noteworthy and meaningful. Notably, instruction-tuning significantly amplifies the performance on both the held-out MMLU and BBH benchmarks and the held-in QA and reasoning benchmarks for MoE models in comparison to dense models of equivalent capacity.
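The difference between the two routing families compared above is simply who selects whom; a toy sketch (the affinity scores are made up for illustration):

```python
def token_choice(scores, k=2):
    """Each token picks its top-k experts (GShard-style top-2 gating)."""
    assign = []
    for t, row in enumerate(scores):
        top = sorted(range(len(row)), key=lambda e: -row[e])[:k]
        assign.append((t, sorted(top)))
    return assign

def expert_choice(scores, capacity=2):
    """Each expert picks its top-`capacity` tokens; a token may end up
    with zero, one, or several experts."""
    n_tokens, n_experts = len(scores), len(scores[0])
    assign = {t: [] for t in range(n_tokens)}
    for e in range(n_experts):
        col = [scores[t][e] for t in range(n_tokens)]
        top = sorted(range(n_tokens), key=lambda t: -col[t])[:capacity]
        for t in top:
            assign[t].append(e)
    return assign

# Router affinity scores for 3 tokens x 2 experts:
scores = [[0.9, 0.1],
          [0.8, 0.6],
          [0.2, 0.7]]
tc = token_choice(scores, k=1)       # every token is served; load can skew
ec = expert_choice(scores, capacity=2)  # load is balanced by construction
```

With token-choice, expert 0 here receives two of the three tokens while expert 1 receives one; with expert-choice, each expert processes exactly its capacity of tokens, which is why expert-choice trades guaranteed per-token coverage for guaranteed load balance.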
The advantages are amplified even further for larger MoE models. For instance, instruction-tuning enhances the performance of ST\({}_{\text{32B}}\) by a substantial 45.2%, while the improvement observed for Flan-PaLM\({}_{\text{62B}}\) is comparatively modest at around 6.6%. Furthermore, the Flan-EC strategy consistently outshines the Flan-GS approach for the given model across various scales and tasks. It is noteworthy that the performance gap between the token-choice and expert-choice models can be bridged when we incorporate the advanced auxiliary loss and pre-training strategy exhibited in ST-MoE [56]. This integration led to the development of our Flan-ST models. Considering that the largest ST-MoE set the benchmark in a variety of NLP tasks when appropriately fine-tuned, we have also decided to scale up Flan-ST, employing instruction fine-tuning.

Figure 3: Learning efficiency comparison. Average zero-shot and few-shot performance of Flan-MoE models versus Flan-T5 dense models as more tokens are processed during training on FLAN tasks.

### Scaling up Flan-MoE

We increase the architecture size to assess the performance of Flan-MoE in the large-scale regime. As discussed above, we instruction fine-tune the largest ST-MoE\({}_{\text{32B}}\) [56] model with 12 expert layers in the encoder and decoder, respectively; these are non-uniformly distributed, with 64 experts per layer and \(K=2\) activated per token. It was trained at a batch size of 32 and sequence length of 2048 for 200k steps. We average checkpoints towards the end of training. The model Flan-ST\({}_{\text{32B}}\), comprising a total of 32 billion parameters, only utilizes 32.1 GFLOPs per token, which amounts to merely one-third of the computational power required by a Flan-PaLM\({}_{\text{62B}}\) model. Additionally, all the routers combined account for less than 4 million parameters. Table 1 illustrates the performance of this model alongside current state-of-the-art instruct fine-tuned models.
Flan-ST\({}_{\text{32B}}\) achieves a 65.4% few-shot MMLU benchmark accuracy and a 54.4% few-shot BBH benchmark accuracy, with a relatively modest architectural size and training count. Notably, Flan-ST\({}_{\text{32B}}\) surpasses the performance of Flan-PaLM\({}_{\text{62B}}\), which consumes nearly triple the compute resources, by a substantial margin across all four benchmarks. However, it is important to acknowledge the considerable performance gap that persists between the largest Flan-PaLM\({}_{\text{540B}}\) and Flan-ST\({}_{\text{32B}}\) models.

## 4 Discussion

### Finetuning Strategy

Sparse models have performed remarkably well in the regime of large datasets, but have sometimes performed poorly when finetuning data is limited [56; 12]. Instruction finetuning can also be viewed as a continual finetuning stage, so we present a detailed study of how different factors impact the instruction finetuning performance of Flan-MoE and offer a practical recipe. All the discussion here is based on instruction finetuning Flan-EC\({}_{\text{base}}\)/Flan-ST\({}_{\text{base}}\) for 100k steps.

Auxiliary Loss. Retaining the auxiliary loss inherited from pre-training improved performance for Flan-EC\({}_{\text{base}}\), whereas Z-loss resulted in a deterioration of performance. Conversely, for Flan-ST\({}_{\text{base}}\), we observed a contrasting trend. We conjecture that the discordance between the auxiliary loss during pre-training and instruction-tuning could potentially disrupt the optimization process, thereby leading to a suboptimally optimized Flan-MoE model.

Expert/Gating Freeze. In an effort to enhance the generalization capabilities of sparse models and combat overfitting, researchers have discovered that finetuning a subset of model parameters results in improved generalization performance for ST-MoE models, as noted in the ST-MoE study [56]. Interestingly, it was observed that updating non-MoE parameters yields similar outcomes to updating all parameters, while updating only expert parameters performs slightly better.
We conducted experiments by freezing the gating function, expert modules, and MoE parameters of the given model, as presented in Table 2. The results indicate that freezing either the expert or MoE components negatively impacts performance. Conversely, freezing the gate slightly improves performance, albeit not significantly. We postulate that this observation is related to the under-fitting of Flan-MoE, as shown in Figure 5, which depicts the finetuning data efficiency ablation study. Hyperparameter Sensitivity.Following ST-MoE [56], we further experiment with expert dropout (\(0.0,0.1,0.5\)), the learning rate (\(1e^{-4},5e^{-4},1e^{-3}\)), and the batch size (\(16,32,64\)) to examine the hyperparameter sensitivity of Flan-MoE. We found that performance varies across tasks but not significantly with any of these hyperparameters; however, a lower learning rate and a smaller batch size lead to a more stable instruction finetuning process at extra-large scales. Finetuning vs. Instruction Finetuning.To compare the gap between directly finetuning MoE and Flan-MoE, we experiment with single-task finetuned MoE, single-task finetuned Flan-MoE, and their dense counterparts in Figure 6. We perform a hyperparameter search for each finetuning setting. For the examined held-out tasks, we observed that the improvement of Flan-MoE over finetuned MoE is noticeably larger than the performance gap between Flan-T5 and T5. This difference becomes even more pronounced when labeled data is scarce or when the model size is increased. These observations confirm the benefits of Flan-MoE in mitigating the overfitting issues associated with directly finetuning MoE. Despite advantages such as increased adaptability and efficiency in managing complex tasks, MoE architectures are prone to overfitting during the finetuning process, as discussed above.
This can be seen in Figures 6 and 1, where single-task fine-tuned MoE models sometimes underperform their dense T5 counterparts. Interestingly, compared to dense models, MoE models derive greater benefits from instruction-tuning and are more sensitive to the number of instruction-tuning tasks. In general, MoE model performance scales better with respect to the number of tasks rather than the number of experts. We hypothesize this is primarily due to the specialized nature of individual experts, which can lead to heightened sensitivity to noise and limited generalization capabilities when exposed to unseen data. ### Additional Analysis Expert Specialization.As the size of a Flan-MoE model increases in Figure 7, a notable rise in expert specialization tends to occur. Larger models entail a higher number of parameters and more complex structures, which inherently provide a broader scope for each expert to specialize in specific facets of the problem space. This increased specialization can be understood as a form of division of labor, where each expert sub-network becomes adept at handling a certain type of task or data pattern. 
\begin{table} \begin{tabular}{c|c c c c} \hline \hline **Finetuning Strategy** & **MMLU** (Direct) & **BBH** (Direct) & **GSM8K** (CoT) & **Avg.** \\ \hline Baseline\({}_{\text{Flan-ST}_{\text{base}}}\) & 40.0 & 33.2 & 6.6 & 37.7 \\ \hline Freeze-Gate\({}_{\text{Flan-EC}_{\text{base}}}\) & 40.2 & 33.9 & 6.6 & 38.0 \\ Freeze-Expert\({}_{\text{Flan-EC}_{\text{base}}}\) & 38.3 & 32.5 & 5.4 & 36.2 \\ Freeze-MoE\({}_{\text{Flan-EC}_{\text{base}}}\) & 38.4 & 32.2 & 5.3 & 36.2 \\ \hline Z-loss\({}_{\text{Flan-EC}_{\text{base}}}\) & 38.9 & 32.8 & 5.7 & 36.8 \\ Balance-loss\({}_{\text{Flan-EC}_{\text{base}}}\) & 40.8 & 33.4 & 7.1 & 38.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablations on different finetuning strategies of Flan-EC\({}_{\text{BASE}}\) and Flan-ST\({}_{\text{BASE}}\). Consequently, the overall model can demonstrate a higher degree of adaptability and precision in tackling diverse and complex tasks. We also observe that after instruction-tuning, the MoE models exhibit better expert usage, which may help prevent expert collapse and preserve generalization after instruction-tuning, as in [57]. Failure Cases.The fine-grained specialization of Flan-MoE models, particularly when fine-tuned on English-only instructions, can inadvertently lead to a narrowing of the model's capacity to effectively process and generate content in multiple languages. We found that all Flan-MoE models perform poorly on multilingual benchmarks, including TyDiQA and MGSM. Even the largest Flan-ST\({}_{\text{32B}}\) achieves only 15.5% on MGSM and 25.1% on TyDiQA, which is merely comparable to the vanilla PaLM\({}_{\text{62B}}\) with 18.2% on MGSM and PaLM\({}_{\text{8B}}\) with 25.0% on TyDiQA. It also underperforms the Flan-PaLM variants.
We hypothesize that this issue may stem from the model's over-optimization towards the specificities of the English language during finetuning, which can impede its ability to navigate the complexities of other languages. Consequently, while MoE models offer significant benefits in terms of task-specific adaptability and efficiency, their potential shortcomings in multilinguality highlight the importance of incorporating diverse linguistic data during the training process to ensure broad and effective language coverage. ## 5 Related Work Instruction Tuning.Instruction tuning has evolved as a strategy to enhance the functionality and interactivity of large language models (LLMs) for dialogues and complex tasks. Prior studies, including [41, 27, 1], have delved into large-scale multi-task fine-tuning to enhance downstream single-target fine-tuning, albeit without instruction prompts. Initiatives such as UnifiedQA [20, 31, 19] have amalgamated a multitude of NLP tasks into a single generative question-answering format, utilizing prompt instructions for multi-task fine-tuning and evaluation. Efforts like Natural Instructions [33], Flan 2021 [52], and P3 (the Public Pool of Prompts, [44]) have collated vast NLP task collections, templatizing them with instructions for fine-tuning models to enhance their adaptability to unseen instructions. Some studies, such as Super-Natural Instructions [51] and OPT-IML [18], took this a step further by combining numerous datasets and tasks into a single resource.

Figure 6: Flan-MoE outperforms MoE on single-task finetuning. We compare single-task finetuned MoE, single-task finetuned Flan-MoE, and dense counterparts. The performance gap between Flan-MoE and MoE is noticeably larger than that between Flan-T5 and T5.

Figure 7: Expert usage of Flan-EC at different scales during instruction finetuning, where larger models entail smaller expert usage.
In the meantime, others like xP3 [35] introduced multilingual instruction tuning and Flan 2022 [4] employed Chain-of-Thought training prompts. Recently, there has been a move towards expanding task diversity more assertively using synthetic data generation, particularly for creative and open-ended dialogue [50; 17; 54]. Some researchers have also tried to provide human feedback on language model responses [39; 14; 37; 3; 2], or bridge the modality gap with multi-modal instruction fine-tuning [26; 9; 25]. Sparse Mixture of Experts models.The foundation of our work is built on the concept of deep sparse Mixture-of-Experts (MoEs), a topic that has been independently explored in both Computer Vision [42; 29; 36; 46] and Natural Language Processing [29; 36; 45; 23; 12; 10; 56; 5; 55; 21; 22; 57]. The idea revolves around conditional computation, which aims to enhance the number of model parameters without a corresponding rise in computational expense. This is achieved by selectively activating only the relevant portions of the model, based on input-dependent factors. MoE models leverage a learned gating mechanism that triggers only a select subset of \(k\) experts out of a total of \(E\) for a given input. This approach allows an input to either select all experts [11] or merely a sparse mixture of them, as observed in recent massive language models [12; 10]. While a number of studies have sought to enhance the gating mechanism itself [15; 24; 43; 55], MoE models have also been explored in the context of multitask learning [15; 22]. Typically, a shared pool of experts is used, although there has been investigation into per-task routers [30]. This essentially permits an input to choose the most relevant expert(s) for a given task, thereby optimizing the processing and results. Nevertheless, the instability of MoE models during fine-tuning or multitask learning has consistently been a challenge. 
Our study aims to investigate whether instruction fine-tuning with scaled tasks might contribute to mitigating the generalization issues inherent to MoE models. ## 6 Conclusion In this work, we have introduced Flan-MoE, an innovative method to amplify the scalability of instruction-tuned language models by employing the sparse Mixture-of-Experts (MoE) technique. Our strategy amalgamates the merits of instruction-finetuning, which bolsters task-specific performance, and MoE, which provides computational efficiency coupled with diminished memory requirements. We have substantiated the effectiveness of Flan-MoE through comprehensive experiments across a wide spectrum of Natural Language Processing (NLP) tasks, such as natural language understanding, question answering, and reasoning. Our results consistently underscore the superior performance of Flan-MoE over current state-of-the-art methods, marking substantial advancements in both accuracy and efficiency. Notably, these advancements are attained without necessitating an increase in computational resources or memory usage during training and inference, often even reducing the resource requirements in the process.
2304.00459
Fast Convergence of Random Reshuffling under Over-Parameterization and the Polyak-Łojasiewicz Condition
Modern machine learning models are often over-parameterized and as a result they can interpolate the training data. Under such a scenario, we study the convergence properties of a sampling-without-replacement variant of stochastic gradient descent (SGD) known as random reshuffling (RR). Unlike SGD that samples data with replacement at every iteration, RR chooses a random permutation of data at the beginning of each epoch and each iteration chooses the next sample from the permutation. For under-parameterized models, it has been shown RR can converge faster than SGD under certain assumptions. However, previous works do not show that RR outperforms SGD in over-parameterized settings except in some highly-restrictive scenarios. For the class of Polyak-\L ojasiewicz (PL) functions, we show that RR can outperform SGD in over-parameterized settings when either one of the following holds: (i) the number of samples ($n$) is less than the product of the condition number ($\kappa$) and the parameter ($\alpha$) of a weak growth condition (WGC), or (ii) $n$ is less than the parameter ($\rho$) of a strong growth condition (SGC).
Chen Fan, Christos Thrampoulidis, Mark Schmidt
2023-04-02T06:00:29Z
http://arxiv.org/abs/2304.00459v1
Fast Convergence of Random Reshuffling under Over-Parameterization and the Polyak-Lojasiewicz Condition ###### Abstract Modern machine learning models are often over-parameterized and as a result they can interpolate the training data. Under such a scenario, we study the convergence properties of a sampling-without-replacement variant of stochastic gradient descent (SGD) known as random reshuffling (RR). Unlike SGD that samples data with replacement at every iteration, RR chooses a random permutation of data at the beginning of each epoch and each iteration chooses the next sample from the permutation. For under-parameterized models, it has been shown RR can converge faster than SGD under certain assumptions. However, previous works do not show that RR outperforms SGD in over-parameterized settings except in some highly-restrictive scenarios. For the class of Polyak-Lojasiewicz (PL) functions, we show that RR can outperform SGD in over-parameterized settings when either one of the following holds: (i) the number of samples (\(n\)) is less than the product of the condition number (\(\kappa\)) and the parameter (\(\alpha\)) of a weak growth condition (WGC), or (ii) \(n\) is less than the parameter (\(\rho\)) of a strong growth condition (SGC). ## 1 Introduction We consider finite-sum minimization problems of the form \[\min\biggl{\{}f(x)=\frac{1}{n}\sum_{i=1}^{n}f(x;i)\biggr{\}}. \tag{1}\] Stochastic gradient descent (SGD) is a popular algorithm for solving machine learning problems of this form. A significant amount of effort has been made to understand its theoretical and empirical properties [5]. SGD has a simple update rule in which a sample \(i_{k}\) is chosen randomly with replacement and at each iteration we compute \(x^{k+1}=x^{k}-\eta^{k}\nabla f(x^{k};i_{k})\). This is cheaper than using the full gradient at each iteration. However, it is well known that the convergence rate of SGD can be much worse than the convergence rate of full-gradient descent. 
For example, for strongly-convex functions SGD has a sublinear convergence rate while full-gradient descent has a linear convergence rate. Given the increasing complexity of modern learning models, a practically-relevant question to ask is how SGD performs in over-parameterized settings, under which the model interpolates (exactly fits) the data. Previously, it has been shown that SGD can achieve a linear convergence rate like full-gradient descent under various interpolation conditions for strongly-convex functions [24, 35, 26, 37], such as the strong growth condition (SGC) and the weak growth condition (WGC). An assumption weaker than strong convexity that still allows full-gradient descent to achieve a linear rate is the Polyak-Lojasiewicz (PL) condition [29]. Recently, the PL condition has gained popularity in machine learning [14] and it has been shown that several overparameterized models that interpolate the data satisfy the PL condition [2]. Similar to the strongly-convex case, under interpolation and the PL condition SGD can achieve a linear rate similar to full gradient descent [2, 37]. A popular variation on SGD is random reshuffling (RR). At each epoch \(t\in\{1,2,...,T\}\), the algorithm RR randomly chooses a permutation \(\pi^{t}\). That is, \(\pi^{t}_{j+1}\) is sampled without replacement from the set \(\{1,2,...,n\}\) for \(j\in\{0,1,...,n-1\}\). Then it performs the following update, going through the dataset in the order specified by \(\pi^{t}\) \[x^{t}_{j+1}=x^{t}_{j}-\eta^{t}_{j}\nabla f(x^{t}_{j};\pi^{t}_{j+1}). \tag{2}\] Note that \(x_{0}\triangleq x^{1}_{0}\) is the initialization and \(x^{t+1}_{0}=x^{t}_{n}\ \forall t\geq 1\). We summarize the method in Algorithm 1. A variation on RR is the incremental gradient (IG) method where \(\pi^{t}\) is deterministic and fixed over all epochs.
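Update (2) is straightforward to implement; below is a minimal numpy sketch of RR on an interpolating least-squares instance (the problem sizes and step size are illustrative choices, not the paper's experimental setup):

```python
import numpy as np

def random_reshuffling(grad_i, x0, n, eta, epochs, rng):
    """Each epoch draws a fresh permutation pi^t and sweeps through it."""
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):      # sampling without replacement
            x = x - eta * grad_i(x, i)    # x_{j+1} = x_j - eta * grad f(x_j; pi^t_{j+1})
    return x

# Interpolating least squares: d > n, so Ax = b has exact solutions and f* = 0.
rng = np.random.default_rng(0)
n, d = 10, 50
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]   # grad of f(x; i) = 0.5 (a_i^T x - b_i)^2
eta = 0.5 / max(np.sum(A**2, axis=1))            # eta = 1/(2 L_max) for these f(.; i)
x = random_reshuffling(grad_i, np.zeros(d), n, eta, epochs=500, rng=rng)
print(np.max(np.abs(A @ x - b)))                 # residual ~ 0: the iterate interpolates
```

Each inner step is a relaxed projection toward the hyperplane \(a_i^{\top}x=b_i\), so for this consistent system the residual shrinks linearly, in line with the over-parameterized rates discussed below.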
``` 0:\(x_{0}\), \(T\), step sizes \(\{\eta^{t}_{j}\}\) 1:for\(t=1,2,\ldots,T\)do 2: Choose a permutation \(\pi^{t}\) from the set of all permutations; set \(x^{t}_{0}=x_{0}\) if \(t=1\); set \(x^{t}_{0}=x^{t-1}_{n}\) if \(t>1\). 3:for\(j=0,1,\ldots,n-1\)do 4:\(x^{t}_{j+1}=x^{t}_{j}-\eta^{t}_{j}\nabla f(x^{t}_{j};\pi^{t}_{j+1})\) 5:endfor 6:endfor ``` **Algorithm 1** Random Reshuffling (RR) RR has long been known to converge faster than SGD empirically for certain problems [3, 4]. However, analyzing RR is more difficult than SGD because (conditioned on the past iterates) each individual gradient is no longer an unbiased estimate of the full gradient. Thus, the analysis of RR has only emerged in a series of recent efforts [13, 25, 33, 12]. Previous works have shown that RR outperforms SGD for strongly-convex objectives in various under-parameterized settings, when the number of epochs (\(T\)) is sufficiently large. However, these sublinear rates for RR are slower than the linear rates obtained for SGD in the over-parameterized setting. Further, current convergence rate analyses for RR in the over-parameterized setting either make unrealistic assumptions (see the next section) or are also slower than SGD unless we make very strong assumptions. In this work, we analyze RR and IG for PL functions under both the SGC and WGC over-parameterized assumptions. We give explicit convergence rates for RR that can be faster than the best known rate for SGD, under realistic assumptions that hold for situations like over-parameterized least squares. Our results also show IG can converge faster than SGD (though not RR) in some settings. We consider relaxations of the SGC and WGC where interpolation is only approximately satisfied, showing faster convergence of RR to a neighbourhood of the solution. ## 2 Related Work Optimization under the PL condition:the PL inequality was first explored by Polyak [29] and Lojasiewicz [19].
It applies to a wide range of important machine learning problems such as least squares and logistic regression (over a compact set) [14]. More generally, any function of the form \(f(x)=g(Ax)\) for a matrix \(A\) with a \(\mu\)-strongly convex function \(g\) satisfies the \(\mu\)-PL condition [14]. Several recent works have argued that considering a local PL condition around minimizers can be used as a model for analyzing the effectiveness of SGD in training neural networks [9, 17, 28, 36]. Polyak [29] showed that full gradient descent on smooth objective functions can achieve a linear convergence rate under the PL condition. But it has recently been highlighted that the PL condition can be used to show linear convergence rates for a variety of methods [14]. Typically, the PL condition leads to similar convergence rates as those obtained under the stronger condition of strong convexity. In the case of SGD under interpolation, it has been shown that the rate of SGD under the PL condition is linear [2, 37]. However, the convergence rates for SGD under interpolation for \(\mu\)-PL functions are slower than those for strongly convex functions (see Table 1). RR for Strongly-Convex Under-Parameterized Problems:in this paragraph we focus our discussion on the case of strongly-convex functions, the subject of most literature on the topic. Bottou conjectured that the convergence rate of RR is \(O(\frac{1}{n^{2}T^{2}})\)[3], where \(T\) is the number of epochs. But it was several years before progress was made at showing this. The difficulty in the analysis of RR arises because of the bias in the conditional expectation of the gradients, \[\mathbb{E}\big{[}\nabla f(x_{i}^{t};\pi_{i+1}^{t})\mid x_{i}^{t}\big{]}\neq\nabla f(x_{i}^{t}). \tag{3}\] An early attempt to analyze RR [32] was not successful because their noncommutative arithmetic-geometric mean inequality conjecture was proven to be false [15].
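The conditional bias in (3) is easy to see numerically: once an index has been used earlier in the epoch, the remaining without-replacement draws average over a strict subset of the component gradients. A toy illustration with three assumed one-dimensional quadratics:

```python
import numpy as np

# f(x; i) = 0.5 * (x - c_i)^2, so grad f(x; i) = x - c_i and grad f(x) = x - mean(c).
c = np.array([0.0, 1.0, 5.0])
grad = lambda x, i: x - c[i]
x = np.mean(c)                                   # a point where the full gradient is 0

full_grad = np.mean([grad(x, i) for i in range(len(c))])

# With replacement, the sampled gradient is unbiased at any fixed x:
with_replacement = np.mean([grad(x, i) for i in range(len(c))])

# Without replacement, conditioned on index 2 having already been used, the next
# gradient averages only over the remaining indices {0, 1} -- a biased estimate:
without_replacement = np.mean([grad(x, i) for i in (0, 1)])

print(full_grad, with_replacement, without_replacement)   # 0.0 0.0 1.5
```

Here the full gradient vanishes, yet the conditional without-replacement average does not, which is exactly why standard unbiasedness-based SGD proofs do not carry over to RR.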
An \(\tilde{\mathcal{O}}(\frac{1}{n^{2}T^{2}})\) rate was first shown asymptotically by Gurbuzbalaban et al. [12]. Haochen and Sra [13] give the first non-asymptotic convergence result of \(\tilde{\mathcal{O}}(\frac{1}{\mu^{4}}(\frac{1}{n^{2}T^{2}}+\frac{1}{T^{3}}))\) with a strong-convexity constant \(\mu\), under strong assumptions. Under weaker assumptions a rate of \(\tilde{\mathcal{O}}(\frac{1}{\mu^{3}nT^{2}})\) was shown by assuming component-wise convexity of each \(f(\cdot;i)\)[25], matching a lower bound of \(\Omega(\frac{1}{nT^{2}})\)[31].3 Note that this rate is faster than the \(\tilde{\mathcal{O}}(\frac{1}{nT})\) rate of SGD when the large epoch requirement is satisfied. Mischenko et al. [22] obtain the same rate of \(\tilde{\mathcal{O}}(\frac{1}{\mu^{3}nT^{2}})\) but only require \(T\gtrsim\frac{1}{\mu}\). Their analysis is dependent on the underlying component-wise convexity structure. Ahn et al. [1] remove this dependence and obtain the same rate with \(T\gtrsim\frac{1}{\mu}\). However, their analysis relies on each \(f(\cdot;i)\) being \(G\)-Lipschitz (\(\|\nabla f(\cdot;i)\|\leq G\) for all \(i\)), which may require a constraint on problem (1), and a projection operation is needed to ensure the iterates are bounded [1, 27]. Nguyen et al. [27] give a unified analysis for shuffling schemes other than RR, while Safran and Shamir [33] show a lower bound of \(\Omega(\frac{1}{\mu n^{2}T^{2}}+\frac{1}{\mu nT^{3}})\) when \(f\) is a sum of \(n\) quadratics. In a more recent work, they have shown that RR does not significantly improve over SGD unless \(T\) is larger than \(\Theta(\frac{1}{\mu})\) in the worst case [34]. We can also consider other shuffling schemes; Lu et al. [20] design the GraB algorithm that achieves a faster rate of \(\tilde{\mathcal{O}}(\frac{1}{\mu^{3}n^{2}T^{2}})\) by incorporating previous gradient information into determining the current permutation. Furthermore, the matching lower bound for this algorithm is given by Cha et al.
[7]. We note that all of these sublinear convergence rates are slower than the linear rates that are possible for SGD on over-parameterized problems. Footnote 3: Following previous conventions in the literature [25, 34, 7], we take \(\kappa=\Theta(\frac{1}{\mu})\) for comparisons. _RR for Strongly-Convex Over-Parameterized Problems:_ despite the widespread use of RR for training over-parameterized models, there is relatively little literature analyzing this setting. Haochen and Sra [13] show that the convergence rate of RR is at least as fast as SGD under interpolation even without any epoch requirements. However, their result does not show that a gap in the rates can exist and only applies in the degenerate case where each function is strongly-convex.4 Ma and Zhou [21] show a faster rate than SGD for RR under over-parameterization and a "weak strong convexity" assumption, but their result is also somewhat degenerate as their assumption excludes standard problems like over-parameterized least squares.5 We can obtain interpolation results under more-realistic assumptions as special cases of the results of Mischenko et al. [22] and Nguyen et al. [27]. However, in the interpolation setting the rates obtained by these works for convex and strongly-convex functions are slower than the rate obtained by Vaswani et al. [37] for SGD (we summarize the known linear rates under over-parameterization in Table 1). Footnote 4: In this setting we could solve the problem by simply applying gradient descent to any individual function and ignoring all other training examples. _RR for Polyak-Lojasiewicz Problems:_ several works have presented convergence rates for RR on PL problems [13, 1, 22, 27]. Haochen and Sra give the first non-asymptotic rate of \(\tilde{\mathcal{O}}(\frac{1}{\mu^{4}}(\frac{1}{n^{2}T^{2}}+\frac{1}{T^{3}}))\) for the under-parameterized setting but requiring each individual function to have Lipschitz continuous Hessian [13]. Ahn et al. 
improve the rate to \(\tilde{\mathcal{O}}(\frac{1}{\mu^{3}nT^{2}})\) when \(T\gtrsim\frac{1}{\mu}\) assuming bounded gradients [1] but these sublinear rates for the under-parameterized setting are slower than the linear rates of SGD for over-parameterized problems. The result of Mischenko et al. on \(\mu\)-PL functions [22] can be used to obtain a linear rate under over-parameterization,6 but only in the degenerate setting where all functions have the same gradient. The result of Nguyen et al. on PL functions [27, Theorem 1] concerns the under-parameterized setting. But in Appendix E we show how the proof of Theorem 1 of Nguyen et al. [27] can be modified to give a linear convergence rate of RR for the over-parameterized setting. This RR rate can be faster than SGD, but under a more restricted scenario than the results presented in this work (we discuss the precise details in Section 4). Footnote 6: By taking \(B=0\) in Mischenko et al.’s Theorem 4. ## 3 Assumptions In this section, we present the assumptions made in our analyses. Our first assumption is that the individual and overall functions are smooth and bounded below. **Assumption 1** (Smoothness and Boundedness): _The objective \(f\) is \(L\)-smooth and each individual loss \(f(\cdot;i)\) is \(L_{i}\)-smooth such that \(\forall x,x^{\prime}\in\mathrm{dom}(f)\),_ \[\|\nabla f(x)-\nabla f(x^{\prime})\| \leq L\|x-x^{\prime}\|, \tag{4}\] \[\|\nabla f(x;i)-\nabla f(x^{\prime};i)\| \leq L_{i}\|x-x^{\prime}\|,\forall i. \tag{5}\] _We denote \(L_{\max}\triangleq\max\limits_{i}L_{i}\). We assume \(f\) is lower bounded by \(f^{*}\), which is achieved at some \(x^{*}\), so \(f^{*}=f(x^{*})\). We also assume that each \(f(\cdot;i)\) is lower bounded by some \(f^{*}_{i}\)._ Next, we give the definition of the Polyak-Lojasiewicz (PL) inequality which lower bounds the size of the gradient as the value of \(f\) increases. 
**Assumption 2** (\(\mu\)-PL): _There exists some \(\mu>0\) such that_ \[f(x)-f^{*}\leq\frac{1}{2\mu}\|\nabla f(x)\|^{2}\quad\forall x\in\mathrm{dom}(f), \tag{6}\] _where \(f^{*}\) is the optimal value of \(f\)._ All strongly-convex functions satisfy the PL inequality. However, many important functions that are not necessarily strongly convex also satisfy the PL inequality, including the classic example of the squared prediction error with a linear prediction function [14]. PL functions can be non-convex but the condition implies invexity, a generalization of convexity. For smooth functions invexity is equivalent to every stationary point of \(f\) being a global minimum (though unlike strongly-convex functions we may have multiple stationary points) [8]. Recent literature shows that the PL condition and variants of it can be used to help analyze a variety of complicated models [2, 9, 17, 28, 36]. Next, we formally define interpolation, which is the key property of the optimization problem implied by over-parameterization. **Assumption 3** (Interpolation): _We are in the interpolating regime, which we take to mean that_ \[\nabla f(x^{*})=0\quad\Longrightarrow\quad\nabla f(x^{*};i)=0. \tag{7}\] Thus, by interpolation we mean that stationary points with respect to the function \(f\) are also stationary points with respect to the individual functions \(f(\cdot;i)\). However, we do not assume that \(x^{*}\) is unique. Our results also consider either the strong or weak growth conditions [35, 37] regarding how the individual gradients change as \(x\) moves away from \(x^{*}\). **Assumption 4** (SGC): _There exists a constant \(\rho\geq 1\) such that the following holds_ \[\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla f(x;i)\rVert^{2}\leq\rho\lVert\nabla f(x)\rVert^{2},\quad\forall x\in\mathrm{dom}(f).
\tag{8}\] **Assumption 5** (WGC): _There exists a constant \(\alpha\geq 0\) such that the following holds_ \[\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla f(x;i)\rVert^{2}\leq 2\alpha L(f(x)-f(x^{*})),\quad\forall x\in\mathrm{dom}(f), \tag{9}\] _where \(f\) is lower bounded by \(f^{*}\) and \(f^{*}=f(x^{*})\)._ There are close relationships between interpolation, the SGC, and the WGC. The SGC implies interpolation, while for smooth functions the SGC implies the WGC [23, Lemma 5]. Further, for functions satisfying smoothness and \(\mu\)-PL (Assumptions 2 and 1) all three conditions are equivalent if the \(f_{i}\) are invex (SGC implies interpolation, which implies WGC, which implies SGC). In this setting of smooth \(\mu\)-PL functions with invex \(f_{i}\), the constant \(\alpha\) in the WGC is upper bounded by \(\frac{L_{\max}}{L}\) and by \(\rho\)[23, Lemmas 5-6],7 while the value of the constant \(\rho\) in the SGC is upper bounded by \(\frac{\alpha L}{\mu}\)[37, Proposition 1] and is thus also bounded by \(\frac{L_{\max}}{\mu}\). The value of \(\rho\) in the SGC is lower bounded by 1, while in Appendix A we show that \(\alpha\) in the WGC is lower-bounded by \(\frac{\mu}{L}\). Footnote 7: We include the \(\alpha\leq\rho\) result in Appendix A since it is shown under stronger assumptions [37] or stated differently [22] in prior work. A classical setting where we would expect these interpolation properties to hold is binary classification with a linear classifier when the data is linearly separable. In this setting the gradient of each example can be made to converge to zero due to the linear separability (under standard loss functions). In Appendix A, we give a generalization of an existing result [37, Lemma 1] showing that the SGC holds for a class of functions including the squared hinge loss and logistic regression in the linearly-separable setting.
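These relationships can be probed numerically on an interpolating least-squares instance \(f(x)=\frac{1}{n}\sum_{i}\frac{1}{2}(a_{i}^{\top}x-b_{i})^{2}\) with \(d>n\), for which \(f\) is \(\mu\)-PL with \(\mu=\sigma_{\min}(A)^{2}/n\). The sketch below (an illustrative check, not part of the paper's experiments) lower-bounds \(\rho\) and \(\alpha\) by maximizing the defining ratios over random points, then verifies the ordering \(\mu/L\leq\alpha\leq\rho\leq\alpha L/\mu\) stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 40                          # d > n: over-parameterized, so f* = 0
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                        # consistent system: interpolation holds

f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)           # f(x) = (1/n) sum 0.5 (a_i^T x - b_i)^2
grad_f = lambda x: A.T @ (A @ x - b) / n
grad_all = lambda x: (A @ x - b)[:, None] * A           # row i is grad f(x; i)

L = np.linalg.eigvalsh(A.T @ A / n).max()               # smoothness constant of f
mu = np.linalg.svd(A, compute_uv=False).min() ** 2 / n  # PL constant for this f

rho = alpha = 0.0
for _ in range(2000):
    x = x_star + rng.normal(size=d)
    mean_sq = np.mean(np.sum(grad_all(x) ** 2, axis=1))  # (1/n) sum ||grad f(x; i)||^2
    rho = max(rho, mean_sq / np.sum(grad_f(x) ** 2))     # SGC ratio, cf. (8)
    alpha = max(alpha, mean_sq / (2 * L * f(x)))         # WGC ratio, cf. (9)

print(mu / L <= alpha <= rho <= alpha * L / mu)          # ordering from the text
```

The sampled maxima only lower-bound the true constants, but the chain of inequalities holds pointwise for this quadratic, so the check prints `True`.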
In the most-general form of our results, we consider relaxations of the SGC and the WGC that do not require the data to be fit exactly [30, 6]. **Assumption 6**: _There exist constants \(\rho\geq 0\) and \(\sigma\geq 0\) such that the following holds: \(\forall x\in\mathrm{dom}(f),\)_ \[\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla f(x;i)\rVert^{2}\leq\rho\lVert\nabla f(x)\rVert^{2}+\sigma^{2}. \tag{10}\] **Assumption 7**: _There exist constants \(\alpha\geq 0\) and \(\sigma\geq 0\) such that the following holds: \(\forall x\in\mathrm{dom}(f),\)_ \[\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla f(x;i)\rVert^{2}\leq 2\alpha L(f(x)-f(x^{*}))+\sigma^{2}. \tag{11}\] These assumptions reduce to the SGC and WGC when \(\sigma=0\), and we note that the relaxed WGC is related to the expected smoothness condition of Gower et al. [11]. We also note that Assumption 6 reduces to the bounded variance assumption \(\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla f(x;i)-\nabla f(x)\rVert^{2}\leq\sigma^{2}\) when \(\rho=1\), which is commonly used in the analysis of SGD [5]. ## 4 Contributions Table 1 summarizes known and new convergence results for SGD, RR, and IG on strongly-convex and PL functions under the interpolation assumptions (Assumptions 3, 4, and 5). In this table we see that existing RR rates for strongly-convex functions are slower than for SGD. In contrast, the RR rates shown in this work can be faster than the SGD rates for \(\mu\)-PL objectives. Our main contributions are summarized as follows: _RR under WGC:_ for \(\mu\)-PL functions satisfying the WGC, we derive the sample complexity of RR to be \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}\sqrt{n}\sqrt{\frac{\alpha L}{\mu}}\}).\) In comparison, the sample complexity of SGD in this case is \(\tilde{\mathcal{O}}(\alpha\frac{L^{2}}{\mu^{2}})\) (we show this in Appendix B).
Hence, RR outperforms SGD when \(\alpha\frac{L}{\mu}>n\) and \(L_{\max}\sim L\), without requiring a large number of epochs. Thus, for \(\mu\)-PL objectives satisfying the WGC, RR converges faster than SGD for sufficiently ill-conditioned problems when the Lipschitz constants are similar. RR under SGC: under the SGC we give a sample complexity for RR of \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}\sqrt{n}\sqrt{\rho}\})\), which is better than the \(\tilde{\mathcal{O}}(\rho\frac{L}{\mu})\) of SGD provided \(\rho>n\) and \(L_{\max}\sim L\). The situation \(\rho>n\) happens when there is a large amount of disagreement in the gradients. Note that the SGC rates can be faster than the WGC rates when \(\rho\leq\alpha\frac{L}{\mu}\). This can be satisfied for example when \(\alpha\sim 1\) and \(L_{\max}\sim L\). IG under WGC/SGC: IG can also outperform SGD for \(\mu\)-PL objectives when \(\rho>n^{2}\) and \(L_{\max}\sim L\) under the SGC, or when \(\alpha\frac{L}{\mu}>n^{2}\) and \(L_{\max}\sim L\) under the WGC. The rates of RR for \(\mu\)-PL objectives can be better than IG by a factor of \(\sqrt{n}\), and cannot be worse because \(\rho\geq 1\) and \(\alpha\geq\frac{\mu}{L}\) (see Proposition 3 in the Appendix). Experiments: we conduct experiments on objectives satisfying our assumptions. We experimentally see that a condition like \(\rho>n\) is important for RR to achieve a faster rate than SGD. For over-parameterized problems we demonstrate that in practice RR can converge faster than SGD even in the early training phase, but that RR has less of an advantage as \(n\) grows. Comparisons with Existing Rates for \(\mu\)-PL Functions: Appendix E discusses how the proof of Nguyen et al. [27, Theorem 1] can be used to obtain convergence rates of RR on over-parameterized \(\mu\)-PL objectives. Under interpolation (Assumption 3), the analysis of Nguyen et al.
implies a rate for RR of \(\tilde{\mathcal{O}}((\frac{L_{\max}}{\mu})^{2}n)\), which is always worse than the SGD rate of \(\tilde{\mathcal{O}}((\frac{L_{\max}}{\mu})^{2})\) and our RR rate under this assumption of \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,(\frac{L_{\max}}{\mu})^{3/2}\sqrt{n}\})\) (although this rate requires assuming each function is invex). On the other hand, under the SGC (Assumption 4) the analysis of Nguyen et al. implies a rate of \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu^{2}}\sqrt{n}\sqrt{n+\rho-1}\})\), but this is worse than our SGC result of \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}\sqrt{n}\sqrt{\rho}\})\) and only faster than SGD under a narrower range of problem constant settings.

\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
 & Interp & WGC & SGC & Interp+Invex \\
\hline
SGD/SC & - & \(\alpha\frac{L}{\mu}\) [37] & \(\rho\frac{L}{\mu}\) [35] & \(\frac{L_{\max}}{\mu}\) [26] \\
RR/SC & - & - & - & \(\frac{L_{\max}}{\mu}n\) [22] \\
\hline
SGD/PL & \(\frac{L_{\max}^{2}}{\mu^{2}}\) [2] & \(\alpha\frac{L^{2}}{\mu^{2}}\) & \(\rho\frac{L}{\mu}\) [37] & \(\frac{L_{\max}^{2}}{\mu^{2}}\) [2] \\
IG/PL & \(\frac{L_{\max}^{2}}{\mu^{2}}n\) [27] & \(\frac{L_{\max}}{\mu}n\sqrt{\frac{\alpha L}{\mu}}\) & \(\frac{L_{\max}}{\mu}n\sqrt{\rho}\) & \((\frac{L_{\max}}{\mu})^{3/2}n\) \\
RR/PL & \(\frac{L_{\max}^{2}}{\mu^{2}}n\) [27] & \(\frac{L_{\max}}{\mu}\sqrt{n}(\sqrt{n}\vee\sqrt{\frac{\alpha L}{\mu}})\) & \(\frac{L_{\max}}{\mu}\sqrt{n}(\sqrt{n}\vee\sqrt{\rho})\) & \(\frac{L_{\max}}{\mu}\sqrt{n}(\sqrt{n}\vee\sqrt{\frac{L_{\max}}{\mu}})\) \\
\hline\hline
\end{tabular}

\({}^{(1)}\) We ignore numerical constants and logarithmic factors. Note that SC stands for \(\mu\)-strongly convex. Interp, WGC, and SGC refer to Assumptions 3, 5, and 4 (respectively). Note that \(L\leq L_{\max}\); for the WGC constant \(\alpha\) we have \(\frac{\mu}{L}\leq\alpha\leq\rho\) [23, Appendix A], while for the SGC constant \(\rho\) we have \(\max\{1,\alpha\}\leq\rho\leq\frac{\alpha L}{\mu}\) [37], and if all functions are invex we also have \(\alpha\leq\frac{L_{\max}}{L}\) and \(\rho\leq\frac{L_{\max}}{\mu}\) [23]. \({}^{(2)}\) The “Interp+Invex” results assume each function is invex, and the strongly-convex results [26, 22] additionally assume each function is convex. \({}^{(3)}\) The symbol “\(\vee\)” refers to taking the maximum of two values.
\end{table}
Table 1: Number of gradient evaluations required by each algorithm to obtain an \(\epsilon\)-accurate solution, defined as \(\|x-x^{*}\|^{2}\leq\epsilon\) for \(\mu\)-strongly convex objectives and \(f(x)-f(x^{*})\leq\epsilon\) for \(\mu\)-PL objectives. The contributions of this work are the rates highlighted in blue.\({}^{(1)(2)(3)}\)

## 5 Convergence Results

We present the convergence results of RR and IG for \(\mu\)-PL objectives under the SGC and the WGC. Below, we use a constant step size \(\eta\). The proofs of Theorems 1 and 2 are given in Appendix D. Theorem 5.1 (RR + \(\mu\)-PL): _Suppose \(f\) satisfies Assumptions 1 and 2. Then RR with a learning rate \(\eta\) satisfying_ \[\eta\leq\min\{\frac{1}{2nL_{\max}},\frac{1}{2\sqrt{2}L_{\max}\sqrt{n\rho}}\}\quad\text{under Assumption 4},\] _or_ \[\eta\leq\min\{\frac{1}{2nL_{\max}},\frac{1}{2\sqrt{2}L_{\max}\sqrt{n}\sqrt{\frac{\alpha L}{\mu}}}\}\quad\text{under Assumption 5}, \tag{12}\] _achieves the following rate for \(\mu\)-PL objectives:_ \[\mathbb{E}[f(x_{n}^{T})-f(x^{*})]\leq(1-\frac{1}{4}n\mu\eta)^{T}(f(x^{0})-f(x^{*}))+\frac{4L_{\max}^{2}\eta^{2}n\sigma^{2}}{\mu}.
\tag{13}\]

Remark 1: Setting \(\sigma=0\) in (13) and using Lemma 5 in Appendix C to solve the recursion, the sample complexity under the SGC is \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}\sqrt{n}\sqrt{\rho}\})\), and the sample complexity under the WGC is \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}\sqrt{n}\sqrt{\frac{\alpha L}{\mu}}\})\). These rates are faster than SGD under the corresponding assumption for \(\mu\)-PL objectives when either \(n<\rho\) or \(n<\alpha\frac{L}{\mu}\) holds and \(L_{\max}\sim L\). In cases where \(L_{\max}\) is not similar to \(L\), RR can still outperform SGD when \(n<\frac{\rho L^{2}}{L_{\max}^{2}}\) is satisfied under the SGC or \(n<\frac{\alpha L^{3}}{\mu L_{\max}^{2}}\) is satisfied under the WGC.

Theorem 5.2 (IG + \(\mu\)-PL): _Suppose \(f\) satisfies Assumptions 1 and 2. Then IG with a learning rate \(\eta\) satisfying_ \[\eta\leq\min\{\frac{1}{\sqrt{2}nL_{\max}},\frac{1}{2L_{\max}n\sqrt{\rho}}\}\quad\text{under Assumption 4},\] _or_ \[\eta\leq\min\{\frac{1}{\sqrt{2}nL_{\max}},\frac{1}{2L_{\max}n\sqrt{\frac{\alpha L}{\mu}}}\}\quad\text{under Assumption 5}, \tag{14}\] _achieves the following rate for \(\mu\)-PL objectives:_ \[f(x_{n}^{T})-f(x^{*})\leq(1-\frac{1}{2}n\mu\eta)^{T}(f(x^{0})-f(x^{*}))+\frac{2L_{\max}^{2}\eta^{2}n^{2}\sigma^{2}}{\mu}. \tag{15}\]

Remark 2: Similar to Remark 1, we can follow the same approach to obtain the sample complexity of IG, namely \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}n\sqrt{\rho}\})=\tilde{\mathcal{O}}(\frac{L_{\max}}{\mu}n\sqrt{\rho})\) under the SGC and \(\tilde{\mathcal{O}}(\max\{\frac{L_{\max}}{\mu}n,\frac{L_{\max}}{\mu}n\sqrt{\frac{\alpha L}{\mu}}\})=\tilde{\mathcal{O}}(\frac{L_{\max}}{\mu}n\sqrt{\frac{\alpha L}{\mu}})\) under the WGC.
For \(\mu\)-PL objectives, these rates are worse than those of RR, as the parameter \(\rho\) in the SGC satisfies \(\rho\geq 1\) and the parameter \(\alpha\) in the WGC satisfies \(\alpha\geq\frac{\mu}{L}\).

### Proof Sketch

In this section, we provide a proof sketch of Theorems 1 and 2 under the SGC. A similar approach is taken for the WGC. We first express the descent on the objective \(f\) in terms of epochs. For a step size \(\eta\leq\frac{1}{nL}\) [27], we have the following: \[f(x_{0}^{t+1})\leq f(x_{0}^{t})-\frac{n\eta}{2}\|\nabla f(x_{0}^{t})\|^{2}+\frac{L_{\max}^{2}\eta}{2}\sum_{i=0}^{n}\|x_{i}^{t}-x_{0}^{t}\|^{2}. \tag{16}\] The key is to upper bound the sum of deviations of each step from its starting point within an epoch, i.e. \(\sum_{i=0}^{n}\|x_{i}^{t}-x_{0}^{t}\|^{2}\). We want this term to be small to get a fast rate. To do so, we obtain the following results (see Lemma 3 and Lemma 4 in Appendix C by setting \(\sigma=0\)) for IG and RR under the SGC. IG: \[\sum_{i=0}^{n-1}\|x_{i}^{t}-x_{0}^{t}\|^{2}\leq 2\eta^{2}n^{2}(n\rho)\|\nabla f(x_{0}^{t})\|^{2},\] RR: \[\mathbb{E}[\sum_{i=0}^{n-1}\|x_{i}^{t}-x_{0}^{t}\|^{2}]\leq 2\eta^{2}n^{2}(\rho+n)\mathbb{E}[\|\nabla f(x_{0}^{t})\|^{2}].\] Neglecting the effects of taking expectations, the difference in the deviation bound between RR and IG is reflected in the factors \(\rho+n\) and \(n\rho\). After substituting these results into (16), we obtain IG: \[f(x_{0}^{t+1})\stackrel{{(a)}}{{\leq}}f(x_{0}^{t})-\frac{n\eta}{2}(1-2L_{\max}^{2}\eta^{2}n^{2}\rho)\|\nabla f(x_{0}^{t})\|^{2},\] (17) RR: \[\mathbb{E}[f(x_{0}^{t+1})]\stackrel{{(b)}}{{\leq}}\mathbb{E}[f(x_{0}^{t})]-\frac{n\eta}{4}(1-4L_{\max}^{2}\eta^{2}n\rho)\mathbb{E}[\|\nabla f(x_{0}^{t})\|^{2}],\] (18) where both (a) and (b) require \(\eta\sim\tilde{\mathcal{O}}(\frac{1}{nL_{\max}})\).
Note that for IG to make progress in decreasing \(f\), we further require \(\eta\sim\tilde{\mathcal{O}}(\min\{\frac{1}{nL_{\max}},\frac{1}{L_{\max}n\sqrt{\rho}}\})=\tilde{\mathcal{O}}(\frac{1}{L_{\max}n\sqrt{\rho}})\) as \(\rho\geq 1\); whereas for RR, we require \(\eta\sim\tilde{\mathcal{O}}(\min\{\frac{1}{nL_{\max}},\frac{1}{L_{\max}\sqrt{n\rho}}\})\). Hence, the learning rate of RR can be larger than that of IG under the SGC.

## 6 Experimental Results

In this section, we aim to address the following questions with our experiments:

1. How do the empirical convergence rates of RR, SGD, and IG depend on \(n\) and \(\rho\)?
2. Is a large number of epochs required for RR to outperform SGD under interpolation?

Unless otherwise stated, we choose a constant learning rate in the range \([10^{-3},10^{-1}]\) that minimizes the train loss.

### Synthetic Experiments

We first conduct experiments on a synthetic binary classification dataset. Specifically, we use the code provided by Loizou et al. [18] to generate linearly-separable data with margin \(\tau>0\). For training, we use a squared hinge loss with or without \(L_{2}\) regularization to simulate the convex and \(\mu\)-PL cases respectively. Note that the squared hinge loss satisfies the SGC with \(\rho\leq\frac{n}{\tau^{2}}\) (see Proposition 1 in Appendix A). Thus, by varying \(\tau\) we can investigate the convergence of SGD, RR, and IG as a function of the SGC constant \(\rho\). The results are shown in Figure 1. In agreement with our bounds, we observe that: (i) RR outperforms SGD when \(\rho\gg n\), (ii) RR outperforms IG for all values of \(\rho\), and (iii) the convergence (naturally) slows down for increasing \(\rho\) (which corresponds to decreasing margin \(\tau\)).
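The contrast among the three visiting orders can be reproduced in miniature without the experimental infrastructure of this section. Below is a stdlib-only sketch (the problem size, seed, and step size are illustrative choices, not the paper's settings) that runs IG, RR, and with-replacement SGD on an interpolated least-squares problem:

```python
import random

random.seed(0)

# Interpolated least-squares: f_i(x) = 0.5 * (a_i . x - b_i)^2 with
# b_i = a_i . x_star, so every f_i is minimized at x_star (interpolation).
n, d = 20, 5
x_star = [random.gauss(0, 1) for _ in range(d)]
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
b = [sum(a[j] * x_star[j] for j in range(d)) for a in A]

def grad_i(x, i):
    """Gradient of the single-sample loss f_i at x."""
    r = sum(A[i][j] * x[j] for j in range(d)) - b[i]
    return [r * A[i][j] for j in range(d)]

def loss(x):
    """Full objective f(x) = (1/n) * sum_i f_i(x)."""
    return sum(0.5 * (sum(A[i][j] * x[j] for j in range(d)) - b[i]) ** 2
               for i in range(n)) / n

def run(order_fn, epochs=50, eta=0.05):
    """Run epochs of single-sample steps, visiting indices given by order_fn()."""
    x = [0.0] * d
    for _ in range(epochs):
        for i in order_fn():
            g = grad_i(x, i)
            x = [x[j] - eta * g[j] for j in range(d)]
    return loss(x)

ig = run(lambda: range(n))                                  # fixed cycle (IG)
rr = run(lambda: random.sample(range(n), n))                # reshuffled epochs (RR)
sgd = run(lambda: [random.randrange(n) for _ in range(n)])  # with replacement (SGD)
print(ig, rr, sgd)  # all far below the initial loss under interpolation
```

Under interpolation all three orderings drive the loss toward zero; which one is fastest depends on the problem constants \(n\), \(\rho\), and the conditioning, as the bounds above predict.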
Figure 1: Binary classification on a linearly separable dataset with \(n=800\). (a) We plot the train loss of RR as a function of epochs using the squared hinge loss for different values of \(\rho\). We observe that training is very slow when \(\rho>\mathcal{O}(10^{4})\), and speeds up when \(\rho\) approaches the order of magnitude of \(n\) or less. (b) We plot the minimum train loss difference between RR and SGD or IG as a function of \(\rho\). For a convex objective such as the squared hinge loss, we observe that the performance gain of RR over SGD predominantly occurs when \(\rho\gtrsim\mathcal{O}(10^{4})\). For \(\rho<\mathcal{O}(10^{4})\), SGD achieves a lower train loss than RR. We also observe that RR consistently performs better than IG for all values of \(\rho\). (c) Similar observations to those of (b) are made for a \(\mu\)-PL objective: the squared hinge loss with \(L_{2}\) regularization.

### Binary classification using RBF kernels

Besides understanding the influence of \(\rho\), we further conduct experiments to study the convergence rates of RR in minimizing convex losses under interpolation. To this end, we use the squared hinge loss, squared loss, and logistic loss on the mushrooms dataset from LIBSVM using RBF kernels [18]. We follow the procedure of Vaswani et al. [38] for choosing the bandwidths of the RBF kernels. As our bounds show, a large number of epochs is not required for RR to outperform SGD. This is observed in Figure 2, in which RR outperforms SGD within the first few epochs of training, and the performance gap is significantly larger in the early training stage than later on.

### Multi-class classification using deep networks

We conducted experiments to study the convergence rates of RR in non-convex settings under interpolation. To this end, we use a softmax loss and one-hidden-layer multilayer perceptrons (1-MLP) of different widths on the MNIST image classification dataset [16].
The results in Figure 3 show that RR can converge faster than SGD for small \(T\), and that a small \(n\) favors RR, while for large \(n\) there is less of an advantage to using RR. The key take-aways from our experimental results are that for over-parameterized problems:

1. RR consistently outperforms IG.
2. The condition \(n\ll\rho\) is crucial for RR to outperform SGD.
3. Under over-parametrization, RR can outperform SGD already at the early stage of training (in contrast to what previous theory for under-parameterized models suggests, a large number of epochs is not required).

Figure 2: Binary classification on the mushrooms dataset using RBF kernels. (a) We plot the difference in train loss between SGD and RR as a function of epochs for step sizes (\(lr\)) 0.1, 1.0, and 10.0, using the squared hinge loss. (b)(c) Similar plots to (a), but for the squared loss and the logistic loss respectively. We observe that RR converges faster than SGD for all the step sizes, and the train loss difference is more significant in the initial training stage. Considering the logistic loss and \(lr=10.0\), the train loss gap is \(\tilde{\mathcal{O}}(10^{-2})\) after one epoch, and decreases to \(\tilde{\mathcal{O}}(10^{-4})\) in the end. Similarly, we observe a faster decrease in the train loss difference within the first 25 epochs of training when the squared hinge loss or squared loss is used.

## 7 Conclusion

In this paper, we have derived convergence rates of RR under interpolation, as implied by the SGC or the WGC, for \(\mu\)-PL objectives. In this setting, RR converges faster than SGD provided the key condition \(\rho>n\) under the SGC or \(\alpha\frac{L}{\mu}>n\) under the WGC holds (assuming the individual Lipschitz constants do not differ too much from the overall Lipschitz constant). Moreover, we show that RR outperforms IG for all values of \(\rho\) and \(\alpha\) under the SGC and WGC respectively.
Besides this, we further demonstrate that IG can outperform SGD when \(\rho>n^{2}\) holds under the SGC or \(\alpha\frac{L}{\mu}>n^{2}\) holds under the WGC. We remark that none of these conclusions follows from previous analysis under the strong convexity assumption. Our experimental results support these theoretical findings. #### 7.0.1 Ethical Statement The contribution is the theoretical analysis of an existing algorithm, so it does not have direct societal or ethical implications.
2301.12683
Computing the Haar state on ${\mathbb{O}(SL_q(3))}$
This paper shows that to compute the Haar state on $\mathbb{O}(SL_q(n))$, it suffices to compute the Haar states of a special type of monomials which we define as standard monomials. Then, we provide an algorithm to explicitly compute the Haar states of standard monomials on $\mathbb{O}(SL_q(3))$ with reasonable computational cost. The numerical results on $\mathbb{O}(SL_q(3))$ will be used in the future study of the $q$-deformed Weingarten function.
Ting Lu
2023-01-30T06:12:40Z
http://arxiv.org/abs/2301.12683v3
# Computing the Haar state on \(\mathcal{O}(SL_{q}(3))\)

###### Abstract.

This paper shows that to compute the Haar state on \(\mathcal{O}(SL_{q}(n))\), it suffices to compute the Haar states of a special type of monomials which we define as standard monomials. Then, we provide an algorithm to explicitly compute the Haar states of standard monomials on \(\mathcal{O}(SL_{q}(3))\) with reasonable computational cost. The numerical results on \(\mathcal{O}(SL_{q}(3))\) will be used in the future study of the \(q\)-deformed Weingarten function.

_Keywords --_ Quantum groups; quantum special linear group; Haar state.

Math department, Texas A&M University, College Station, TX 77843, USA. _E-mail address_: [email protected].

**Definition 1**.: The **counting matrix** of a monomial \(x\in\mathcal{O}(SL_{q}(n))\), denoted as \(\theta(x)\), is a \(n\times n\) matrix with entries \(a_{ij},i,j=1,\ldots,n\), where \(a_{ij}\) equals the number of appearances of the generator \(x_{ij}\) in \(x\).

**Definition 2**.: The **row sum** and **column sum** of a \(n\times n\) matrix \(A=(a_{ij})_{i,j=1}^{n}\), denoted as \(\alpha(A)\) and \(\beta(A)\), are vectors in \(\mathbb{R}^{n}\): \[\alpha(A)=\left(\sum_{j=1}^{n}a_{ij}\right)_{i=1}^{n},\qquad\beta(A)=\left(\sum_{i=1}^{n}a_{ij}\right)_{j=1}^{n}.\] Here, we denote by \((k)_{i=1}^{n}\) a vector whose entries all equal \(k\).

**Definition 3**.: Let \(A\) be a \(n\times n\) matrix with non-negative integer entries. Then \(A\) is a \(\boldsymbol{k}\)**-doubly stochastic matrix** [14] if there is a positive integer \(k\) such that \(\alpha(A)=(k)_{i=1}^{n}=\beta(A)\).

**Definition 4**.: Let \(S_{n}\) be the permutation group on \(n\) letters. Monomials of the form \(\prod_{\sigma_{i}\in S_{n}}(x_{\sigma_{i}})^{m_{i}}\) with \(m_{i}\in\mathbb{N}\) and \(x_{\sigma_{i}}=\prod_{k=1}^{n}x_{k\sigma_{i}(k)}\) are called **standard monomials**. \(m=\sum_{i=1}^{n!}m_{i}\) is called the **order** and each \(x_{\sigma_{i}}\) is called a **segment**.
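These definitions translate directly into code. A small sketch (a monomial is encoded here, purely for illustration, as its sequence of 1-based generator index pairs \((i,j)\)):

```python
def counting_matrix(monomial, n):
    """theta(x): entry (i, j) counts occurrences of the generator x_{ij}."""
    theta = [[0] * n for _ in range(n)]
    for i, j in monomial:
        theta[i - 1][j - 1] += 1
    return theta

def row_col_sums(theta):
    """alpha(A) and beta(A): vectors of row sums and column sums."""
    n = len(theta)
    alpha = [sum(row) for row in theta]
    beta = [sum(row[j] for row in theta) for j in range(n)]
    return alpha, beta

def doubly_stochastic_order(theta):
    """Return k if theta is k-doubly stochastic (k > 0), else None."""
    alpha, beta = row_col_sums(theta)
    k = alpha[0]
    return k if k > 0 and all(s == k for s in alpha + beta) else None

# The order-2 standard monomial (aek)(bfg) on O(SL_q(3)):
# a = x11, e = x22, k = x33, b = x12, f = x23, g = x31.
x = [(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (3, 1)]
print(doubly_stochastic_order(counting_matrix(x, 3)))  # 2
```

The example reproduces the fact that every standard monomial of order \(m\) has an \(m\)-doubly stochastic counting matrix.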
The current paper will prove the following theorem on \(\mathcal{O}(SL_{q}(n))\): **Theorem 1**.: _The following are true on \(\mathcal{O}(SL_{q}(n))\):_ * _Let_ \(x\) _be a monomial. Then_ \(h(x)\neq 0\) _implies that there exists_ \(k\in\mathbb{N}^{+}\) _such that_ \(\theta(x)\) _is a_ \(k\)_-doubly stochastic matrix._ * _Every monomial with non-zero Haar state value can be written as a linear combination of standard monomials._ * _Let_ \(s_{l}\)_,_ \(l\in\mathcal{I}_{m}\)_, be the set of standard monomials of order_ \(m\)_. Then, we can write_ \((Id\otimes h)\circ\Delta(s_{l})\) _and_ \((h\otimes Id)\circ\Delta(s_{l})\) _as linear combinations of_ \(s_{j}\)_'s, and the coefficient of each_ \(s_{j}\) _is a linear combination of_ \(h(s_{i})\)_'s._ * _Let_ \(l(\tau)\) _be the inversion number of_ \(\tau\in S_{n}\)_. Then:_ \[h(\Pi_{k=1}^{n}x_{k,\tau(k)})=\frac{(-q)^{l(\tau)}}{\sum_{\sigma\in S_{n}}(-q)^{2l(\sigma)}}=\frac{(-q)^{l(\tau)}}{[n]_{q^{2}}!},\] _where_ \([n]_{q^{2}}=\frac{1-q^{2n}}{1-q^{2}}\) _and_ \([n]_{q^{2}}!=\prod_{j=1}^{n}[j]_{q^{2}}\)_._ * _When changing the order of generators in a monomial, the newly generated monomials cannot contain more copies of the generators_ \(x_{11}\) _and_ \(x_{nn}\)_, and cannot contain fewer copies of the generators_ \(x_{1n}\) _and_ \(x_{n1}\)_, compared to the monomial being reordered._

For simplicity, the generators of \(\mathcal{O}(SL_{q}(3))\) are denoted as: \[\begin{array}{ccc}a&b&c\\ d&e&f\\ g&h&k\end{array}.\] Then, standard monomials of order \(m=c_{1}+c_{2}+c_{3}+c_{4}+c_{5}+c_{6}\) are of the form: \[(aek)^{c_{1}}(afh)^{c_{2}}(bdk)^{c_{3}}(bfg)^{c_{4}}(cdh)^{c_{5}}(ceg)^{c_{6}}.\]

**Definition 5**.: Segments \(aek\), \(afh\), and \(bdk\) are **high-complexity segments**.

**Definition 6**.: Segments \(bfg\), \(cdh\), and \(ceg\) are **low-complexity segments**.

Let \(x\) be a monomial. Denote \(N_{a}(x)\), \(N_{k}(x)\), \(N_{c}(x)\), and \(N_{g}(x)\) as the number of occurrences of the generators \(a\), \(k\), \(c\), and \(g\) in \(x\), respectively.
Denote by \(N_{hc}(x)\) and \(N_{lc}(x)\) the number of high- and low-complexity segments in \(x\), respectively. The current paper will prove the following theorem on \(\mathcal{O}(SL_{q}(3))\): **Theorem 2** (Complexity Non-increasing Principle).: _Let \(x,y\), and \(z_{i}\)'s be monomials such that \(\theta(x)=\theta(y)\) and_ \[x=y+\sum_{i}z_{i}.\] _Then, for all \(z_{i}\)'s:_ * \(N_{hc}(z_{i})\leq N_{a}(x)+N_{k}(x)\)_;_ * \(N_{lc}(z_{i})\geq\max\{N_{c}(x),N_{g}(x)\}\)_._

The current paper will provide an algorithm for explicitly computing the Haar state on \(\mathcal{O}(SL_{q}(3))\) for real values of \(q\), using the complexity non-increasing principle (see Section 4.4.1 for a proof). Explicit expressions in terms of the parameter \(q\) are provided for a special type of monomials, and explicit expressions for general monomials can be computed given enough computational resources. Using this \(q\)-deformed Haar measure, we hope to pursue \(q\)-deformed Weingarten functions in future work. Examples of \(q\)-deformed Weingarten functions are provided in Appendix E. For all of these examples, it can be seen directly that when \(q\to 1\), the usual Haar measure is recovered.

## 2. Haar state on \(\mathcal{O}(SL_{q}(n))\)

By Noumi _et al._ [12], monomials on \(\mathcal{O}(GL_{q}(n))\) form a basis. As \(\mathcal{O}(SL_{q}(n))\) is a quotient of \(\mathcal{O}(GL_{q}(n))\), monomials on \(\mathcal{O}(SL_{q}(n))\) form a basis as well. To define the Haar state on \(\mathcal{O}(SL_{q}(n))\), it suffices to define the Haar state of each monomial.

### Characterization of monomials \(x\) such that \(h(x)\neq 0\)

Not every monomial has a non-zero Haar state value. In this section, we will give a criterion to determine whether the Haar state of a monomial is zero. Let \(D_{n}\) be the diagonal subgroup of \(SL_{q}(n)\).
Recall that the coordinate Hopf algebra \(\mathcal{O}(D_{n})\) is the commutative algebra \(\mathbb{C}[t_{1},t_{1}^{-1},\cdots,t_{n},t_{n}^{-1}]\) of all Laurent polynomials in \(n\) indeterminates \(t_{1},t_{2},\ldots,t_{n}\) with comultiplication \(\Delta(t_{i})=t_{i}\otimes t_{i}\) and counit \(\varepsilon(t_{i})=1\). The surjective homomorphism \(\pi_{D_{n}}:\mathcal{O}(SL_{q}(n))\mapsto\mathcal{O}(D_{n})\) is given by \(\pi_{D_{n}}(x_{ij})=\delta_{ij}t_{i}\). Since we have \(D_{q}^{k}=1_{\mathcal{O}(SL_{q}(n))}\) for all \(k\in\mathbb{N}^{+}\), \(\pi_{D_{n}}\) tells us that \(1_{\mathcal{O}(D_{n})}=\pi_{D_{n}}(1_{\mathcal{O}(SL_{q}(n))})=\pi_{D_{n}}(D_{q}^{k})=(\Pi_{i=1}^{n}t_{i})^{k}\). The left and right coactions of \(\mathcal{O}(D_{n})\) on \(\mathcal{O}(SL_{q}(n))\), denoted as \(L_{D_{n}}\) and \(R_{D_{n}}\), are defined as: \[L_{D_{n}}=(\pi_{D_{n}}\otimes Id)\circ\Delta,\] \[R_{D_{n}}=(Id\otimes\pi_{D_{n}})\circ\Delta.\] Given a vector \(v=(v_{1},v_{2},\ldots,v_{n})\in\mathbb{R}^{n}\), we write \(t^{v}=\Pi_{i=1}^{n}t_{i}^{v_{i}}\). If \(x\) is a monomial, we have: \[L_{D_{n}}(x)=t^{\alpha(\theta(x))}\otimes x,\] \[R_{D_{n}}(x)=x\otimes t^{\beta(\theta(x))}.\] The next theorem is a generalization of Klimyk and Schmüdgen's observation [6]. It gives a necessary condition for \(h(x)\neq 0\) for \(x\in\mathcal{O}(SL_{q}(n))\): **Theorem 1 a)**: _Let \(x\) be a monomial. Then \(h(x)\neq 0\) implies that there exists \(k\in\mathbb{N}^{+}\) such that \(\theta(x)\) is a \(k\)-doubly stochastic matrix._ **Proof**: Consider \((\pi_{D_{n}}\otimes h)\circ\Delta(x)\).
There are two ways to compute this object: \[(\pi_{D_{n}}\otimes h)\circ\Delta(x)=\pi_{D_{n}}\circ(Id\otimes h)\circ\Delta(x)=\pi_{D_{n}}(h(x)\cdot 1_{\mathcal{O}(SL_{q}(n))})=h(x)\cdot 1_{\mathcal{O}(D_{n})},\] \[(\pi_{D_{n}}\otimes h)\circ\Delta(x)=(id\otimes h)\circ(\pi_{D_{n}}\otimes id)\circ\Delta(x)=(id\otimes h)\circ L_{D_{n}}(x)=(id\otimes h)(t^{\alpha(\theta(x))}\otimes x)=h(x)\cdot t^{\alpha(\theta(x))}.\] Thus, \(h(x)\cdot 1_{\mathcal{O}(D_{n})}=h(x)\cdot t^{\alpha(\theta(x))}\). Since \(h(x)\neq 0\), we get \(1_{\mathcal{O}(D_{n})}=t^{\alpha(\theta(x))}\). This means that we can find an integer \(k_{1}>0\) such that \(t^{\alpha(\theta(x))}=(\Pi_{i=1}^{n}t_{i})^{k_{1}}\). Thus, \(\alpha(\theta(x))=(k_{1})_{i=1}^{n}\). Applying the same argument to \((h\otimes\pi_{D_{n}})\circ\Delta(x)\), we get \(1_{\mathcal{O}(D_{n})}=t^{\beta(\theta(x))}\). Thus, we can find \(k_{2}>0\) such that \(\beta(\theta(x))=(k_{2})_{i=1}^{n}\). But we must have \(k_{1}=k_{2}\) since \[nk_{1}=\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}=nk_{2}.\] \(\blacksquare\)

### The linear subspace of elements with non-zero Haar state

Let \(\mathcal{NZ}\) be the linear subspace of elements with non-zero Haar state. In this section, we give a criterion for picking a basis of \(\mathcal{NZ}\). We write \(A_{n}(m),\ m\in\mathbb{N}^{+}\), for the set of \(n\times n\) \(m\)-doubly stochastic matrices and \(B_{n}(m)\) for the set of monomials on \(\mathcal{O}(SL_{q}(n))\) whose counting matrices belong to \(A_{n}(m)\). First, we introduce a total order '\(<\)' on \(A_{n}(m)\). For every \(C=(c_{ij})_{i,j=1}^{n}\in A_{n}(m)\), we associate a vector \[\mathcal{V}(C)=(c_{11},c_{12},\ldots,c_{1n},c_{21},c_{22},\ldots,c_{nn}),\] and we compare such vectors in lexicographic order. We say matrices \(C<D\) if \(\mathcal{V}(C)<\mathcal{V}(D)\).
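The total order just defined amounts to comparing the flattened matrices as tuples; a minimal sketch:

```python
def V(C):
    """Flatten a counting matrix row by row, as in the definition of V(C)."""
    return tuple(entry for row in C for entry in row)

def less_than(C, D):
    """C < D in the total order on A_n(m): lexicographic comparison of V."""
    return V(C) < V(D)

# Two 1-doubly stochastic 2x2 matrices (the swap and the identity):
swap, identity = [[0, 1], [1, 0]], [[1, 0], [0, 1]]
print(less_than(swap, identity))  # True: (0, 1, 1, 0) < (1, 0, 0, 1)
```

Tuple comparison in Python is already lexicographic, which is exactly the comparison used on \(\mathcal{V}(C)\).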
With this total order, we have the following observation: _If \(x=P\cdot x_{ik}x_{jl}\cdot Q\in B_{n}(m)\) \((i<j,k<l)\), where \(P,Q\) are two monomials, and we switch the order of \(x_{ik}x_{jl}\) so that:_ \[x=y+(q-q^{-1})z,\] _where \(y=P\cdot x_{jl}x_{ik}\cdot Q\) and \(z=P\cdot x_{il}x_{jk}\cdot Q\), then we have: \(\theta(z)\in A_{n}(m)\) and \(\theta(z)<\theta(x)=\theta(y)\)._

Based on this observation, we get the following lemma: **Lemma 1**: For each \(A\in A_{n}(m)\), we fix a monomial \(x_{A}\in B_{n}(m)\) such that \(\theta(x_{A})=A\). If \(\phi\in B_{n}(m)\) is a monomial with counting matrix \(M\), then we can decompose \(\phi\) as: \[\phi=c_{M}\cdot x_{M}+\sum_{\begin{subarray}{c}P<M\\ P\in A_{n}(m)\end{subarray}}c_{P}\cdot x_{P}. \tag{1}\] **Proof**: Since \(\phi\) and \(x_{M}\) have the same counting matrix, we can permute the generators in \(\phi\) into the same order as in \(x_{M}\). We denote this process as a chain: \[\phi=\phi_{0}\rightarrow\phi_{1}\rightarrow\phi_{2}\rightarrow\cdots\rightarrow\phi_{k}=x_{M},\] where each \(\phi_{i}\) is a reordering of \(\phi\) and we get \(\phi_{i+1}\) by switching the order of two adjacent generators in \(\phi_{i}\). From \(\phi_{i}\) to \(\phi_{i+1}\), we may get a new term \(\varphi_{i+1}\). As discussed before, \(\theta(\varphi_{i+1})\in A_{n}(m)\) and \(\theta(\varphi_{i+1})<\theta(\phi_{i+1})=M\). We can permute these newly generated \(\varphi_{i}\)'s to their corresponding \(x_{\theta(\varphi_{i})}\)'s, and we may get new terms in this process as well. However, each time we apply this permuting process to a monomial \(y\), the counting matrix of the newly generated monomial is always smaller than \(\theta(y)\). Since the counting matrices of newly generated monomials are strictly decreasing, we can finish this permuting process in finitely many steps. In other words, we will get a chain in which no transposition generates new monomials.
Then, every monomial appearing in the summation will be in the desired form, and we get Equation (1). \(\blacksquare\)

Lemma 1 provides a criterion for picking a basis for \(\mathcal{NZ}\). Let \(S_{n}(m)=\{x_{M},M\in A_{n}(m)\}\). Then, we can write \[\mathcal{NZ}=\left\langle\bigcup_{i=1}^{\infty}S_{n}(i)\right\rangle.\] **Theorem 1 b)**: _Every monomial with non-zero Haar state value can be written as a linear combination of standard monomials._ **Proof**: By the **Birkhoff–von Neumann theorem** [2, 17], every \(M\in A_{n}(m)\) can be decomposed into \(M=m_{1}\sigma_{1}+m_{2}\sigma_{2}+\cdots+m_{n!}\sigma_{n!}\), where the \(\sigma_{i}\)'s are matrices in \(A_{n}(1)\) and the \(m_{i}\)'s are non-negative integers whose sum is \(m\). Notice that each matrix \(\sigma_{i}\) can be identified with a permutation on \(n\) letters. We denote the corresponding permutation as \(\sigma_{i}\) as well. Then, the counting matrix of the monomial \(\prod_{\sigma_{i}\in S_{n}}(x_{\sigma_{i}})^{m_{i}}\) is \(M\). This implies that for every \(M\in A_{n}(m)\), we can choose \(x_{M}\) in the form \(\prod_{\sigma_{i}\in S_{n}}(x_{\sigma_{i}})^{m_{i}}\). Combining this with Lemma 1, the statement of Theorem 1 b) is clear. \(\blacksquare\)

Notice that the set of all standard monomials contains a basis of \(\mathcal{NZ}\), but the set itself is not a basis of \(\mathcal{NZ}\). The reason is that different standard monomials can have the same counting matrix; in this case, standard monomials with the same counting matrix are linearly dependent (see Appendix A, Eq. (21), Eq. (22)). To find a basis of \(\mathcal{NZ}\), for each \(M\in A_{n}(m)\) we have to preserve only one standard monomial corresponding to \(M\) and filter out 'unnecessary' standard monomials.

### Comultiplication of standard monomials

Once we solve the Haar states of standard monomials of each order, we can find the Haar states of other monomials from their expansions as linear combinations of standard monomials.
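The decomposition invoked in the proof of Theorem 1 b) can also be carried out explicitly. A minimal sketch (the matching routine is a standard Kuhn-style augmenting-path search, included here for illustration; it is not part of the paper) that writes a \(k\)-doubly stochastic integer matrix as a weighted sum of permutation matrices:

```python
def birkhoff_decomposition(M):
    """Greedily write a k-doubly stochastic integer matrix M as a sum
    m_1 * P_1 + m_2 * P_2 + ... of permutation matrices.

    Returns a list of (weight, sigma) pairs, sigma given as the tuple
    (sigma(1), ..., sigma(n)) in 1-based notation."""
    n = len(M)
    A = [row[:] for row in M]
    result = []
    while any(any(row) for row in A):
        match = [-1] * n  # match[j] = row currently matched to column j

        def augment(i, seen):
            # Try to match row i, reassigning earlier matches if needed.
            for j in range(n):
                if A[i][j] > 0 and j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            augment(i, set())
        # A perfect matching on the support exists by Hall's theorem,
        # since the remaining matrix is still doubly stochastic.
        sigma = [0] * n
        for j in range(n):
            sigma[match[j]] = j
        w = min(A[i][sigma[i]] for i in range(n))
        for i in range(n):
            A[i][sigma[i]] -= w
        result.append((w, tuple(s + 1 for s in sigma)))
    return result

# theta((aek)(bfg)) from the running example:
M = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
for w, sigma in birkhoff_decomposition(M):
    print(w, sigma)
```

For \(M=\theta((aek)(bfg))\) the routine returns two weight-1 permutations whose matrices sum back to \(M\), mirroring the decomposition \(M=\sum_{i}m_{i}\sigma_{i}\).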
We will use the defining relation \(((id\otimes h)\circ\Delta)(x)=h(x)\cdot 1=((h\otimes id)\circ\Delta)(x)\) to solve the Haar state of every standard basis. We start with the investigation of the comultiplication of a monomial. **Lemma 2**: Let \(x\) be a monomial and we write: \[\Delta(x)=\sum_{i\in I}z_{i}\otimes y_{i},\] with \(I\) an index set and \(y_{i},z_{i}\) monomials. Then we have the following equations: \[\alpha(\theta(x))=\alpha(\theta(z_{i}))\quad\beta(\theta(x))=\beta(\theta(y_{i }))\quad\beta(\theta(z_{i}))=\alpha(\theta(y_{i})).\] **Proof**: Recall that \(\Delta(x_{ij})=\sum_{k=1}^{n}x_{ik}\otimes x_{kj}\) and \(\Delta\) is a morphism of algebra. If \(x=\Pi_{l=1}^{p}x_{i_{l}j_{l}}\), then \[\Delta(x)=\Delta(\Pi_{l=1}^{p}x_{i_{l},j_{l}})=\Pi_{l=1}^{p}\Delta(x_{i_{l},j _{l}})=\prod_{l=1}^{p}\left(\sum_{k=1}^{n}x_{i_{l},k}\otimes x_{k,j_{l}}\right).\] For each \(z_{i}\) the \(l\)-th generator is in the same row as the \(l\)-th generator in \(x\), and for each \(y_{i}\) the \(l\)-th generator is in the same column as the \(l\)-th generator in \(x\). The column index of the \(l\)-th generator in \(z_{i}\) is the same as the row index of the \(l\)-th generator in \(y_{i}\). Thus, the row sum of \(x\) equals the row sum of \(z_{i}\); the column sum of \(x\) equals the column sum of \(y_{i}\), and the column sum of \(z_{i}\) equals the row sum of \(y_{i}\). \(\blacksquare\) With Lemma 2, we have the following result: **Lemma 3**: If \(\theta(x)\in A_{n}(m)\) then \(h(y_{i})\neq 0\) (or \(h(z_{i})\neq 0\)) if and only if \(\theta(y_{i})\in A_{n}(m)\) ( or \(\theta(z_{i})\in A_{n}(m)\)). Moreover, \(\theta(y_{i})\in A_{n}(m)\) if and only if \(\theta(z_{i})\in A_{n}(m)\). **Proof**: Use Main theorem a) and Lemma 2. \(\blacksquare\) **Theorem 1 c)**: _Let \(\{s_{l}\}_{l\in\mathcal{I}_{m}}\), be the set of standard monomials of order \(m\). 
Then, we can write \((Id\otimes h)\circ\Delta(s_{l})\) and \((h\otimes Id)\circ\Delta(s_{l})\) as linear combinations of \(s_{j}\)'s, and the coefficient of each \(s_{j}\) is a linear combination of \(h(s_{i})\)'s._ **Proof**: If \(s_{l}\in B_{n}(m)\) is a standard basis, Lemma 3 implies that \[(id\otimes h)\circ\Delta(s_{l})=\sum_{\begin{subarray}{c}y\in B_{n}(m)\\ z\in B_{n}(m)\end{subarray}}h(y)\cdot z=\sum_{i}h(y_{i})\cdot z_{i} \tag{2}\] Then, by Lemma 1, we can decompose each \(y_{i}\) and \(z_{i}\) as: \[y_{i}=\sum_{j=1}^{k}d_{j}^{y_{i}}\cdot s_{j}, \tag{3}\] \[z_{i}=\sum_{j=1}^{k}d_{j}^{z_{i}}\cdot s_{j}. \tag{4}\] Substituting Equation (3) and Equation (4) into Equation (2), we get: \[(id\otimes h)\circ\Delta(s_{l})=\sum_{j=1}^{k}\left(\sum_{i=1}^{k}c_{ij}h(s_{i})\right)\cdot s_{j}. \tag{5}\] **Remark**.: _Here, \(\{s_{i}\}_{i=1}^{k}\subset\{s_{l}\}_{l\in\mathcal{I}_{m}}\) is a basis of standard monomials of order \(m\)._

In Equation (2), we call \(y_{i}\) the **relation component** and call \(z_{i}\) the **comparing component**. We will say \(\boldsymbol{z_{i}}\) **(or \(\boldsymbol{y_{i}}\)) contains \(\boldsymbol{s_{j}}\)** if \(d_{j}^{z_{i}}\neq 0\) (or \(d_{j}^{y_{i}}\neq 0\)). Since we can identify \(1\) with \(D_{q}^{m}\), we get \((id\otimes h)\circ\Delta(s_{l})=h(s_{l})\cdot D_{q}^{m}\). Notice that we can decompose \(D_{q}^{m}\) as a linear combination of standard monomials of order \(m\). Thus, by comparing the coefficients of the same standard monomial on both sides of \((id\otimes h)\circ\Delta(s_{l})=h(s_{l})\cdot D_{q}^{m}\), we can find a linear relation among the Haar states of standard monomials of order \(m\) (see Section 2.5 for more detail). We call such a linear relation a **linear relation of order \(\boldsymbol{m}\)**. We call a linear system consisting of linear relations of order \(m\) a **system of order \(\boldsymbol{m}\)**.

### System of order 1

In this section, we will prove Theorem 1 d).
Standard monomials in \(B_{n}(1)\) are of the form \(x_{\tau_{i}}=\Pi_{k=1}^{n}x_{k,\tau_{i}(k)}\), where \(\tau_{i}\) is a permutation on \(n\) letters. We have: \[\Delta(x_{\tau_{i}})=\Delta(\Pi_{k=1}^{n}x_{k,\tau_{i}(k)})=\prod_{k=1}^{n}\left(\sum_{p=1}^{n}x_{k,p}\otimes x_{p,\tau_{i}(k)}\right).\] By Lemma 3, after applying \((id\otimes h)\) to \(\Delta(x_{\tau_{i}})\), we get: \[(id\otimes h)\circ\Delta(x_{\tau_{i}})=\sum_{\sigma_{j}\in S_{n}}h(\Pi_{k=1}^{n}x_{\sigma_{j}(k),\tau_{i}(k)})\cdot\Pi_{k=1}^{n}x_{k,\sigma_{j}(k)}. \tag{6}\] On the other hand, recall that \[1=D_{q}=\sum_{\sigma_{j}\in S_{n}}(-q)^{l(\sigma_{j})}\prod_{k=1}^{n}x_{k,\sigma_{j}(k)}, \tag{7}\] where \(l(\sigma_{j})\) is the inversion number of \(\sigma_{j}\). Thus, using \((id\otimes h)\circ\Delta(x_{\tau_{i}})=h(x_{\tau_{i}})\cdot 1\) and comparing the coefficients of each standard basis, we get for every \(\sigma_{j}\in S_{n}\): \[h(\Pi_{k=1}^{n}x_{\sigma_{j}(k),\tau_{i}(k)})=(-q)^{l(\sigma_{j})}h(x_{\tau_{i}}). \tag{8}\] In general, \(\Pi_{k=1}^{n}x_{\sigma_{j}(k),\tau_{i}(k)}\) is not a standard monomial. However, if we choose \(\sigma_{j}\) such that \(\sigma_{j}(k)=n+1-\tau_{i}(k)\), then every generator in \(\Pi_{k=1}^{n}x_{\sigma_{j}(k),\tau_{i}(k)}\) commutes with every other, and \(\Pi_{k=1}^{n}x_{\sigma_{j}(k),\tau_{i}(k)}=\Pi_{k=1}^{n}x_{k,n+1-k}\). Moreover, \(l(\sigma_{j})=\frac{n(n-1)}{2}-l(\tau_{i})\). Thus, from Equation (8) we get: \[h(\Pi_{k=1}^{n}x_{k,n+1-k})=(-q)^{\frac{n(n-1)}{2}-l(\tau_{i})}h(\Pi_{k=1}^{n}x_{k,\tau_{i}(k)}). \tag{9}\] Therefore, using Equation (9) and Equation (7), we get: \[\begin{split} 1&=h(1)=\sum_{\sigma_{j}\in S_{n}}(-q)^{l(\sigma_{j})}h(\Pi_{k=1}^{n}x_{k,\sigma_{j}(k)})\\ &=\left(\sum_{\sigma_{j}\in S_{n}}(-q)^{2l(\sigma_{j})-\frac{n(n-1)}{2}}\right)h(\Pi_{k=1}^{n}x_{k,n+1-k}),\end{split} \tag{10}\] which gives \[h(\Pi_{k=1}^{n}x_{k,n+1-k})=\frac{(-q)^{\frac{n(n-1)}{2}}}{\sum_{\sigma_{j}\in S_{n}}(-q)^{2l(\sigma_{j})}}.
\tag{11}\] Then, by Equation (8), noting that the inversion number of the \(\tau_{i}\) corresponding to \(\Pi_{k=1}^{n}x_{k,n+1-k}\) is just \(\frac{n(n-1)}{2}\), we get for every \(\tau_{i}\in S_{n}\): \[h(\Pi_{k=1}^{n}x_{k,\tau_{i}(k)})=\frac{(-q)^{l(\tau_{i})}}{\sum_{\sigma_{j}\in S_{n}}(-q)^{2l(\sigma_{j})}}. \tag{12}\] Now, let \(I_{n}(k)\) be the number of permutations on \(n\) letters with \(k\) inversions. Then, the denominator of Equation (12) can be rewritten as: \[\sum_{\sigma_{j}\in S_{n}}(-q)^{2l(\sigma_{j})}=\sum_{k=0}^{\frac{n(n-1)}{2}}I_{n}(k)q^{2k}.\] By Andrews [1], the generating function of \(I_{n}(k)\) is \[\sum_{k=0}^{\frac{n(n-1)}{2}}I_{n}(k)x^{k}=\prod_{j=1}^{n}\frac{1-x^{j}}{1-x}.\] So the denominator of Equation (12) can be rewritten as \[\sum_{\sigma_{j}\in S_{n}}(-q)^{2l(\sigma_{j})}=\prod_{j=1}^{n}\frac{1-q^{2j}}{1-q^{2}}=[n]_{q^{2}}!,\] and we get: \[h(\Pi_{k=1}^{n}x_{k,\tau_{i}(k)})=\frac{(-q)^{l(\tau_{i})}}{[n]_{q^{2}}!}.\] This finishes the proof of Theorem 1 d).

### Linear relations of higher order

In this section, let \(\{s_{l}\}_{l=1}^{K_{m}}\) be a set of linearly independent standard monomials of order \(m\). Recall Equation (5): \[(id\otimes h)\circ\Delta(s_{l})=\sum_{j=1}^{K_{m}}\left(\sum_{i=1}^{K_{m}}c_{ij}h(s_{i})\right)\cdot s_{j}.\] We can do the same thing to \(h(s_{l})\cdot 1=h(s_{l})\cdot D_{q}^{m}\) and get: \[h(s_{l})\cdot 1=h(s_{l})\cdot D_{q}^{m}=\sum_{j=1}^{K_{m}}b_{j}h(s_{l})\cdot s_{j}. \tag{13}\] By comparing the coefficients of standard bases in \((id\otimes h)\circ\Delta(s_{l})\) and in \(h(s_{l})\cdot 1\), we get: \[\sum_{i=1}^{K_{m}}c_{ij}h(s_{i})=b_{j}h(s_{l}) \tag{14}\] for every \(1\leq j\leq K_{m}\). We will call Equation (14) the **linear relation** derived from **equation basis** \(s_{l}\) and **comparing basis** \(s_{j}\). Each index \(1\leq l\leq K_{m}\) corresponds to \(K_{m}\) linear relations, so there are \(K_{m}^{2}\) linear relations.
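The identity \(\sum_{\sigma_{j}\in S_{n}}q^{2l(\sigma_{j})}=[n]_{q^{2}}!\) used in this step can likewise be verified by direct enumeration. The script below is an illustrative check (not from the paper), using exact rational arithmetic so the comparison is exact rather than approximate.

```python
from fractions import Fraction
from itertools import permutations

def inversions(p):
    # number of pairs (i, j) with i < j and p[i] > p[j]
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def q_factorial(n, x):
    # [n]_x! = prod_{j=1}^{n} (1 - x^j) / (1 - x)
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= (1 - x ** j) / (1 - x)
    return out

q = Fraction(1, 2)
for n in (2, 3, 4):
    # left-hand side: sum over S_n of q^(2 l(sigma)); note (-q)^(2l) = q^(2l)
    lhs = sum(q ** (2 * inversions(sigma)) for sigma in permutations(range(n)))
    assert lhs == q_factorial(n, q ** 2)
print("sum over S_n of q^(2 l(sigma)) equals [n]_{q^2}! for n = 2, 3, 4")
```

This is exactly the generating-function identity of Andrews cited above, specialized to \(x=q^{2}\).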
Since there are \(K_{m}\) unknowns, it is possible to construct more than one system of order \(m\). Notice that these linear relations all have a zero right-hand side. One way to get a linear relation with a non-zero right-hand side is by decomposing \(1=h(1)=h(D_{q}^{m})\) into a sum of standard monomials. Although we can construct more than one system of order \(m\), not every system is invertible. We will give a more robust approach to compute the Haar state of \(\mathcal{O}(SL_{q}(3))\) later.

### The difficulty of solving higher-order systems

We get an elegant result for the basis of order \(1\) from Equation (12). However, the complexity of higher-order systems grows quickly as the order increases. Let \(s_{l}=\Pi_{k\in I}x_{i_{k},j_{k}}\) be a standard monomial of order \(m\). We can write its comultiplication as: \[\Delta(s_{l})=\sum_{i\in I}z_{i}\otimes y_{i}.\] We only focus on those summands whose \(y_{i},z_{i}\in B_{n}(m)\). We call these summands the **regular summands**. Notice that by the property of the comultiplication, we have the following **order restriction**:

* i) the \(k\)-th generator of \(z_{i}\) is in the \(i_{k}\)-th row;
* ii) the \(k\)-th generator of \(y_{i}\) is in the \(j_{k}\)-th column;
* iii) the column index of the \(l\)-th generator in \(z_{i}\) equals the row index of the \(l\)-th generator in \(y_{i}\).

Thus, if \(z_{i}\otimes y_{i}\) is a summand in the comultiplication \(\Delta(s_{l})\), regular or not, the tensor product must satisfy the order restriction. Using property iii) of the order restriction, we know that for every \(M\in A_{n}(m)\), there is a regular summand such that \(z_{i}\) (or \(y_{i}\)) is the standard monomial corresponding to \(M\) and the corresponding \(y_{i}\) (or \(z_{i}\)) will also be a standard monomial of order \(m\). This is similar to the case of the order \(1\) system. However, these summands do not exhaust all regular summands.
In fact, for every reordering \(x_{M}^{r}\) of the standard monomial corresponding to every \(M\in A_{n}(m)\) satisfying the order restriction property i) (or ii)), there is a regular summand such that \(z_{i}=x_{M}^{r}\) (or \(y_{i}=x_{M}^{r}\)). As an example, in \(\mathcal{O}(SL_{q}(3))\), consider the reorderings of the standard monomial \((x_{11}x_{23}x_{32})(x_{12}x_{21}x_{33})\) satisfying property i). By iii), when \(z_{i}\) is reordered, \(y_{i}\) is reordered simultaneously, and vice versa. Thus, most of the regular summands will be reorderings of regular summands whose left component \(z_{i}\) (or right component \(y_{i}\)) corresponds to a certain standard monomial. A similar situation happens when we are trying to decompose \(D_{q}^{m}\). The difficulty of finding Equation (14) with equation basis \(s_{l}\) and comparing basis \(s_{j}\) comes from three aspects:

* We must find all regular summands whose comparing component contains \(s_{j}\) and we have to compute the corresponding coefficient \(d_{j}^{z}\) in Equation (4);
* We must find Equation (3) corresponding to the relation component of those regular summands whose comparing component contains \(s_{j}\);
* We must decompose every summand containing \(s_{j}\) in Equation (13) to find \(b_{j}\).

The computations involved in all three aspects are complex, especially when there are \((n!)^{m}\) summands in Equation (13). However, if we choose the equation basis \(s_{l}\) and the comparing basis \(s_{j}\) carefully, we can reduce the computations involved in finding Equation (14). For a detailed discussion, see section 3.

## 3. Explicit formulas for special standard monomials on \(\mathcal{O}(SL_{q}(3))\)

We will apply a "divide and conquer" strategy to solve the Haar states of standard monomials on \(\mathcal{O}(SL_{q}(3))\). In this section, we will construct a system matrix consisting of special standard monomials of the same order called the **source matrix**.
The source matrix of order \(m\) consists of monomials: \(cdh(ceg)^{m-1}\), \(bfg(ceg)^{m-1}\), \(bfgcdh(ceg)^{m-2}\), \(afh(ceg)^{m-1}\), \(bdk(ceg)^{m-1}\), \(aek(ceg)^{m-1}\), and \((ceg)^{m}\). Its construction relies on Theorem 1 e) in the following subsection. The explicit expression of the source matrix and its solution is provided in Appendix B. The computations in section 4 rely on the solution to the source matrix of each order, which is why it is called the source matrix.

### Properties on the number of special generators in the decomposition of a monomial

In this subsection, we prove Theorem 1 e). In the context of Equation (1), Theorem 1 e) can be rephrased in the following way:

**Theorem 1 e)**: _In Equation (1), write \(M=(m_{ij})_{i,j=1}^{n}\) and \(P=(p_{ij})_{i,j=1}^{n}\); we have:_

* \(m_{11}\geq p_{11}\) _and_ \(m_{nn}\geq p_{nn}\)_._
* \(m_{1n}\leq p_{1n}\) _and_ \(m_{n1}\leq p_{n1}\)_._

**Proof**: In the proof of Lemma 1, we replace \(x_{ik}x_{jl}\) (\(i<j,k<l\)) by \(x_{il}x_{jk}\) in \(\phi_{i+1}\) to get \(\varphi_{i+1}\). Thus, values of the \((i,l)\)-th and the \((j,k)\)-th entry in \(\theta(\phi_{i+1})\) are less than those in \(\theta(\varphi_{i+1})\) and values of the \((i,k)\)-th and the \((j,l)\)-th entry in \(\theta(\phi_{i+1})\) are larger than those in \(\theta(\varphi_{i+1})\). Notice that none of \(x_{ik}\) and \(x_{jl}\) can be \(x_{1n}\) or \(x_{n1}\) and none of \(x_{il}\) and \(x_{jk}\) can be \(x_{11}\) or \(x_{nn}\). Thus, we know that:

1. values of the \((1,1)\)-th and the \((n,n)\)-th entry in \(\theta(\phi_{i+1})\) are no less than those in \(\theta(\varphi_{i+1})\),
Since 1) and 2) are true each time when we get a new monomial from switching the order of some \(x_{ik}x_{jl}\) (\(i<j,k<l\)), we know 1) and 2) is also true between \(M\) and each \(P\) in Equation (1). Thus, we get i) and ii). \(\blacksquare\) Theorem 1 e) gives us a criterion to exclude the appearance of certain types of standard monomials in the decomposition of a given monomial by counting the number of generators \(x_{11},x_{nn},x_{1n}\) and \(x_{n1}\). In other words, Theorem 1 e) gives us the information about the number of generators \(x_{11},x_{nn},x_{1n}\) and \(x_{n1}\) for those standard monomials that could appear in the decomposition of a given monomial. With the help of Theorem 1 e), we can control the type of standard monomials appearing in the linear relation (14). This is the key to constructing the source matrix. A similar idea will appear again when we give an improved version of Theorem 1 e) later (see section 4.3.1). ### Source Matrix We use \((ceg)^{m}\) as the equation basis. The comparing bases we choose are (with descending order): \((aek)^{m-1}afh\), \((aek)^{m-1}bdk\), \((aek)^{m-2}afhbdk\), \((aek)^{m-1}bfg\), \((aek)^{m-1}cdh\), and \((aek)^{m-1}ceg\). These are all the standard monomials containing at least \(m-1\) generator \(a\) and at least \(m-1\) generator \(k\). By the technical lemma i), we know that if the decomposition of a monomial contains any of these comparing bases, then the monomial must contain at least \(m-1\) generator \(a\) and at least \(m-1\) generator \(k\). Thus, when we compute the linear relations associated with these comparing bases, we only need to consider those regular summands whose comparing components contain at least \(m-1\) generator \(a\) and at least \(m-1\) generator \(k\). So, the comparing components must be reorders of \((aek)^{m-1}afh\), \((aek)^{m-1}bdk\), \((aek)^{m-2}afhbdk\), \((aek)^{m-1}bfg\), \((aek)^{m-1}cdh\), and \((aek)^{m-1}ceg\) satisfying order restriction condition i). 
By the order restriction condition iii), the corresponding relation components are reorderings of \(cdh(ceg)^{m-1}\), \(bfg(ceg)^{m-1}\), \(bfgcdh(ceg)^{m-2}\), \(afh(ceg)^{m-1}\), \(bdk(ceg)^{m-1}\), and \(aek(ceg)^{m-1}\), respectively, satisfying order restriction condition ii). Then, by the technical lemma ii), when we decompose those relation components, the decomposition only includes standard monomials containing at least \(m-1\) generator \(c\) and at least \(m-1\) generator \(g\), which are \(cdh(ceg)^{m-1}\), \(bfg(ceg)^{m-1}\), \(bfgcdh(ceg)^{m-2}\), \(afh(ceg)^{m-1}\), \(bdk(ceg)^{m-1}\), \(aek(ceg)^{m-1}\), and \((ceg)^{m}\). For the linear relation with the non-zero right-hand side, we use the fact that \(D_{q}(ceg)^{m-1}=(ceg)^{m-1}\). Together, we get a linear system containing only monomials with at least \(m-1\) generator \(c\) and at least \(m-1\) generator \(g\), which is called the **source matrix of order \(m\)**. For the computation process, see Lu [9]. The non-zero right-hand side of the source matrix of order \(m\) relies on the solution to the source matrix of order \(m-1\). Thus, these source matrices are recursive linear systems.

## 4. General algorithm to compute the Haar states of standard monomials on \(\mathcal{O}(SL_{q}(3))\)

In this section, we assume that the Haar states of all standard monomials of order \(m-1\) are known and we want to compute the Haar states of all standard monomials of order \(m\).

### 4.1. Recursive formula for standard monomials \((cdh)^{i}(ceg)^{m-i}\) and \((bfg)^{i}(ceg)^{m-i}\) for \(i\geq 2\)

First, we compute the Haar states of \((cdh)^{i}(ceg)^{m-i}\). Consider equation basis \((cdh)^{i-1}(ceg)^{m-i+1}\) and comparing basis \((aek)^{m-1}afh\). We only need to take care of those regular summands whose comparing components contain \((aek)^{m-1}afh\). By Theorem 1 e), those comparing components are reorderings of \((aek)^{m-1}afh\), and the corresponding relation components contain \(m\) generator \(c\).
Thus, the derived linear relation (14) contains standard monomials with \(cdh\) segments and \(ceg\) segments. The explicit expression of (14) is: \[\frac{q^{2}(q^{m-i+1}-q^{-(m-i+1)})^{2}}{(1-q^{2})^{2}}*h((cdh)^{i}(ceg)^{m-i})\] \[= \frac{iq^{-2(m-i)-1}-(i-1)q^{2(m-i)+5}-q}{q^{2}-1}*h((cdh)^{i-1}(ceg)^{m-i+1})\] \[-\sum_{j=2}^{i}c_{j}*h((cdh)^{i-j}(ceg)^{m-i+j})\] where \[c_{j}= (-1)^{j-2}(q-1/q)^{j-2}(i-j+1)\left(\frac{i-j}{2}+q^{-2(m-i+1)}\right)\] \[+(q-1/q)^{j-2}(i-j)q^{2j}\left(\frac{i-j-1}{2}+q^{2(m-i+1)}\right).\] Similarly, to compute the Haar state of \((bfg)^{i}(ceg)^{m-i}\), we consider equation basis \((bfg)^{i-1}(ceg)^{m-i+1}\) and comparing basis \((aek)^{m-1}bdk\). Applying an analysis similar to the case of \((cdh)^{i}(ceg)^{m-i}\), we can get the recursive formula for the Haar state of \((bfg)^{i}(ceg)^{m-i}\), which is exactly the same as the formula for \((cdh)^{i}(ceg)^{m-i}\); we just need to replace the segment \(cdh\) with \(bfg\).

### 4.2. Zigzag recursive pattern for standard monomials in the form \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-1-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-1-j}\) with \(r+s=1\)

We start with the Haar state of \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-1-j}\), \(r+s=1\). In this case, we compute the Haar state of monomials in form \(bdk(cdh)^{j-1}(ceg)^{m-j}\) and \(bfg(cdh)^{j-1}(ceg)^{m-j}\). We use an induction on the value \(j\). We know the Haar state for the case \(j=1\) from the solution of the source matrix of order \(m\). For \(j=2\), the Haar state of \(bfgcdh(ceg)^{m-2}\) is known as well. To compute the Haar state of \(bdkcdh(ceg)^{m-2}\), we use the linear relation derived from equation basis \(bdk(ceg)^{m-1}\) and comparing basis \((aek)^{m-1}afh\). Assume we have solved all the Haar states for \(j\leq t-1\). Then, we can compute the Haar state of \(bfg(cdh)^{t}(ceg)^{m-t-1}\) by the linear relation derived from equation basis \(bfg(cdh)^{t-1}(ceg)^{m-t}\) and comparing basis \((aek)^{m-1}afh\).
Next, we compute the Haar state of \(bdk(cdh)^{t}(ceg)^{m-t-1}\) by the linear relation derived from equation basis \(bdk(cdh)^{t-1}(ceg)^{m-t}\) and comparing basis \((aek)^{m-1}afh\). During the process, the only monomial with unknown Haar state appearing in the linear relation is the monomial which we are pursuing. The order in which we compute these Haar states is depicted in Appendix C. Since we solve the Haar states of \(bfg(cdh)^{j}(ceg)^{m-1-j}\) and \(bdk(cdh)^{j}(ceg)^{m-1-j}\) in a "zigzag" pattern in the figure, we call this recursive relation the **Zigzag recursive relation**. We can compute the Haar states of monomials in form \(afh(bfg)^{j}(ceg)^{m-j-1}\) and \(cdh(bfg)^{j}(ceg)^{m-j-1}\) in the same order. When we derive linear relations, we use equation basis \(afh(bfg)^{j-1}(ceg)^{m-j}\) in place of \(bdk(cdh)^{j-1}(ceg)^{m-j}\) and \(cdh(bfg)^{j-1}(ceg)^{m-j}\) in place of \(bfg(cdh)^{j-1}(ceg)^{m-j}\), and use comparing basis \((aek)^{m-1}bdk\) in place of \((aek)^{m-1}afh\).

### 4.3. Standard monomials ending with \((ceg)^{m-2}\) and standard monomials ending with \(bfgcdh(ceg)^{m-3}\)

#### 4.3.1. Standard monomials ending with \(cdh(ceg)^{m-2}\) or \(bfg(ceg)^{m-2}\)

First, notice that if we choose \((bfg)^{2}(ceg)^{m-2}\) as the equation basis and use \((aek)^{m-1}afh\) as the comparing basis, the derived linear relation only includes the Haar state of \(bdkbfg(ceg)^{m-2}\), \(cdh(bfg)^{2}(ceg)^{m-3}\), \((bfg)^{2}(ceg)^{m-2}\), and \(bfg(ceg)^{m-1}\). Combining the results from previous subsections, we can find the Haar state of \(bdkbfg(ceg)^{m-2}\). Similarly, if we choose \((cdh)^{2}(ceg)^{m-2}\) as the equation basis and use \((aek)^{m-1}bdk\) as the comparing basis, the derived linear relation only includes the Haar state of \(afhcdh(ceg)^{m-2}\), \(bfg(cdh)^{2}(ceg)^{m-3}\), \((cdh)^{2}(ceg)^{m-2}\), and \(cdh(ceg)^{m-1}\). Combining the results from previous subsections, we can find the Haar state of \(afhcdh(ceg)^{m-2}\).
Next, we consider equation basis \(aek(ceg)^{m-1}\) with comparing bases \((aek)^{m-1}afh\) and \((aek)^{m-1}bdk\). The linear relation derived with comparing basis \((aek)^{m-1}afh\) includes \(aekcdh(ceg)^{m-2}\), \(afhcdh(ceg)^{m-2}\), \(aek(ceg)^{m-1}\), and \(afh(ceg)^{m-1}\). Thus, we can solve the Haar state of \(aekcdh(ceg)^{m-2}\) from this linear relation. Similarly, the linear relation derived with comparing basis \((aek)^{m-1}bdk\) includes \(aekbfg(ceg)^{m-2}\), \(afhbfg(ceg)^{m-2}\), \(aek(ceg)^{m-1}\), and \(afh(ceg)^{m-1}\). Thus, we can solve the Haar state of \(aekbfg(ceg)^{m-2}\) from this linear relation.

#### 4.3.2. Matrix of \(aekbfgcdh(ceg)^{m-3}\), \(afhbfgcdh(ceg)^{m-3}\), and \(bdkbfgcdh(ceg)^{m-3}\)

To compute the Haar state of these three monomials, we construct a \(3\times 3\) linear system consisting of:

1. the quantum determinant condition: \(D_{q}*bfgcdh(ceg)^{m-3}=bfgcdh(ceg)^{m-3}\);
2. the linear relation derived from equation basis \(afhbfg(ceg)^{m-2}\) and comparing basis \((aek)^{m-1}afh\);
3. the linear relation derived from equation basis \(bdkcdh(ceg)^{m-2}\) and comparing basis \((aek)^{m-1}bdk\).
Entries of the system matrix are listed below: \[\begin{array}{|c|c|c|c|}\hline\text{Relation Haar State}&aekbfgcdh(ceg)^{m-3}&afhbfgcdh(ceg)^{m-3}&bdkbfgcdh(ceg)^{m-3}\\ \hline\text{Quantum Determinant}&1&-q&-q\\ \hline afhbfg(ceg)^{m-2}&(q+1/q)\frac{q^{2}-q^{2(m-2)}}{q^{2}-1}&\frac{(q^{2(m-1)}-1)(q^{2}-q^{2(m+1)}+1)}{(q^{2}-1)^{2}}&0\\ \hline bdkcdh(ceg)^{m-2}&q+1/q+q(q^{2}+1)\frac{1-q^{(m-2)}}{1-q^{2}}&(q^{2}+1)(q^{-2}-q^{2(m-2)})&\frac{q^{2(m-2)}+q^{2(m-2)}-q^{2}-1}{(q^{2}-1)^{2}}\\ \hline\end{array}\] Using Gaussian elimination, we have: \[\begin{bmatrix}1&-q&-q\\ 0&\frac{q^{-2(m-1)}+q^{2m}-q^{2}-1}{(q^{2}-1)^{2}}&\frac{(q^{2}+1)(q^{-2(m-2)}-q^{2})}{1-q^{2}}\\ 0&\frac{(q^{2}+1)(q^{-2}-q^{2(m-2)})}{1-q^{2}}&\frac{q^{2}(q^{-2(m-1)}+q^{2m}-q^{2}-1)}{(q^{2}-1)^{2}}\\ \end{bmatrix}\] The determinant of the matrix is: \[q^{2}\left(\frac{q^{-2(m-1)}+q^{2m}-q^{2}-1}{(q^{2}-1)^{2}}\right)^{2}-\frac{(q^{2}+1)^{2}(q^{-2}-q^{2(m-2)})(q^{-2(m-2)}-q^{2})}{(1-q^{2})^{2}}\] \[=\frac{(1-q^{2(m-1)})^{2}(q^{4}-q^{2m})(1-q^{2(m+2)})}{q^{4m-2}(q^{2}-1)^{4}}.\] Since \(m\geq 3\), the determinant is always non-zero for \(0<|q|<1\). Thus, the matrix is invertible.

#### 4.3.3. Standard monomials with two high-complexity segments ending with \((ceg)^{m-2}\)

Consider equation basis \(bdkbfg(ceg)^{m-2}\) with comparing basis \((aek)^{m-1}afh\). The derived linear relation includes \((bdk)^{2}(ceg)^{m-2}\), \(bdkbfgcdh(ceg)^{m-3}\), \(bdkbfg(ceg)^{m-2}\), \((bfg)^{2}cdh(ceg)^{m-3}\), and \((bfg)^{2}(ceg)^{m-2}\). Thus, we can solve the Haar state of \((bdk)^{2}(ceg)^{m-2}\) from this linear relation. Similarly, consider equation basis \(afhcdh(ceg)^{m-2}\) with comparing basis \((aek)^{m-1}bdk\). The derived linear relation includes \((afh)^{2}(ceg)^{m-2}\), \(afhbfgcdh(ceg)^{m-3}\), \(afhcdh(ceg)^{m-2}\), \(bfg(cdh)^{2}(ceg)^{m-3}\), and \((cdh)^{2}(ceg)^{m-2}\). Thus, we can solve the Haar state of \((afh)^{2}(ceg)^{m-2}\) from this linear relation.
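The factorization of the determinant above can be sanity-checked numerically. The following script is illustrative only (not from the paper); it transcribes the pivot-times-\(2\times 2\)-block expression of the determinant and the claimed closed form, and compares them at sample values of \(q\) and \(m\).

```python
# Numeric check of the determinant factorization of the reduced 3x3 system.
def reduced_det(q, m):
    # Diagonal entry of the eliminated 2x2 block (appears twice, once with a q^2 factor).
    A = (q ** (-2 * (m - 1)) + q ** (2 * m) - q ** 2 - 1) / (q ** 2 - 1) ** 2
    # Product of the two off-diagonal entries.
    B = ((q ** 2 + 1) ** 2 * (q ** (-2) - q ** (2 * (m - 2)))
         * (q ** (-2 * (m - 2)) - q ** 2) / (1 - q ** 2) ** 2)
    # Pivot 1 times det of the lower-right 2x2 block.
    return q ** 2 * A ** 2 - B

def closed_form(q, m):
    return ((1 - q ** (2 * (m - 1))) ** 2 * (q ** 4 - q ** (2 * m))
            * (1 - q ** (2 * (m + 2))) / (q ** (4 * m - 2) * (q ** 2 - 1) ** 4))

for q in (0.3, 0.5, 0.9):
    for m in (3, 4, 5, 6):
        assert abs(reduced_det(q, m) - closed_form(q, m)) < 1e-6 * abs(closed_form(q, m))
print("determinant factorization verified numerically")
```

Since every factor of the closed form is strictly positive for \(0<q<1\) and \(m\geq 3\), the non-vanishing claim is visible directly from the factorization.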
Next, by Equation (21) and Equation (22) in Appendix A, we can compute the Haar state of \(afhbdk(ceg)^{m-2}\) and \(bdkafh(ceg)^{m-2}\). Then, using the equality \(bdk(ceg)^{m-2}=D_{q}*bdk(ceg)^{m-2}\), we can solve the Haar state of \(aekbdk(ceg)^{m-2}\). Replacing \(bdk(ceg)^{m-2}\) by \(afh(ceg)^{m-2}\) in the above equation, we can find the Haar state of \(aekafh(ceg)^{m-2}\). Finally, using the equality \(aek(ceg)^{m-2}=aek*D_{q}*(ceg)^{m-2}\), we can solve the Haar state of \((aek)^{2}(ceg)^{m-2}\). At this point, we have computed all the Haar states of standard monomials ending with \((ceg)^{m-2}\) or \(bfgcdh(ceg)^{m-3}\). Also, we computed the Haar state of standard monomials \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-1-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-1-j}\) with \(r+s=1\). In the next subsection, we assume that \(i\geq 2\) and the Haar states of standard monomials ending with \((ceg)^{m-i}\) or \(bfgcdh(ceg)^{m-i-1}\) are known. We will also assume that the Haar state of standard monomials \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-t-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-t-j}\) with \(r+s=t\leq i-1\) and \(0\leq j\leq m-t\) are known.
Based on this assumption, we will compute the Haar state of standard monomials \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-i-j}\) with \(r+s=i\) and \(0\leq j\leq m-i\), as well as standard monomials ending with \((ceg)^{m-i-1}\) or \(bfgcdh(ceg)^{m-i-2}\).

### 4.4. Standard monomials \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-i-j}\) with \(r+s=i\) and \(0\leq j\leq m-i\) and standard monomials ending with \((ceg)^{m-i-1}\) or \(bfgcdh(ceg)^{m-i-2}\)

We will apply an inductive approach to compute the Haar states we are pursuing. For a detailed discussion of each subsection, see Lu [9].

#### 4.4.1. The Complexity Non-increasing Principle

In this subsection, we prove the **Complexity Non-increasing Principle**, which is key to the computations in this section. Recall that \(bfg\), \(cdh\), and \(ceg\) are **low-complexity segments**. Notice that low-complexity segments commute with each other. \(aek\), \(afh\), and \(bdk\) are **high-complexity segments** and they do not commute with any other segments. Notice that low-complexity segments contain either generator \(c\) or generator \(g\), and high-complexity segments contain either generator \(a\) or generator \(k\). Let \(x\) be a monomial. Denote by \(N_{aek}(x)\), \(N_{afh}(x)\), \(N_{bdk}(x)\), \(N_{bfg}(x)\), \(N_{cdh}(x)\), and \(N_{ceg}(x)\) the number of segments \(aek\), \(afh\), \(bdk\), \(bfg\), \(cdh\), and \(ceg\) in \(x\), respectively.
Recall **Theorem 2** (Complexity Non-increasing Principle): _Let \(x,y\), and \(z_{i}\)'s be monomials such that \(\theta(x)=\theta(y)\) and_ \[x=y+\sum_{i}z_{i}.\] _Then, for all \(z_{i}\)'s:_

* \(N_{hc}(z_{i})\leq N_{a}(x)+N_{k}(x)\)_._
* \(N_{lc}(z_{i})\geq\max\{N_{c}(x),N_{g}(x)\}\)_._

**Proof:** i): For \(p=x,y\) or \(z_{i}\)'s, we have the following equalities: \[N_{a}(p)=N_{aek}(p)+N_{afh}(p),\] \[N_{k}(p)=N_{aek}(p)+N_{bdk}(p),\] \[N_{hc}(p)=N_{aek}(p)+N_{afh}(p)+N_{bdk}(p).\] Thus, we have the following relation between \(N_{hc}(p)\), \(N_{aek}(p)\), \(N_{a}(p)\), and \(N_{k}(p)\): \[N_{hc}(p)+N_{aek}(p)=N_{a}(p)+N_{k}(p).\] Since \(N_{aek}(p)\geq 0\), we get: \[N_{hc}(p)\leq N_{a}(p)+N_{k}(p).\] By Theorem 1 e) part i), we know that \(N_{a}(x)\geq N_{a}(z_{i})\) and \(N_{k}(x)\geq N_{k}(z_{i})\) for all \(z_{i}\)'s. Thus, we have: \[N_{hc}(z_{i})\leq N_{a}(z_{i})+N_{k}(z_{i})\leq N_{a}(x)+N_{k}(x).\] ii): Similarly to i), we get the following equalities: \[N_{c}(p)=N_{cdh}(p)+N_{ceg}(p),\] \[N_{g}(p)=N_{bfg}(p)+N_{ceg}(p),\] \[N_{lc}(p)=N_{bfg}(p)+N_{cdh}(p)+N_{ceg}(p).\] Since \(N_{cdh}(p)\geq 0\) and \(N_{bfg}(p)\geq 0\), we get: \[N_{lc}(p)\geq N_{cdh}(p)+N_{ceg}(p)=N_{c}(p),\] \[N_{lc}(p)\geq N_{bfg}(p)+N_{ceg}(p)=N_{g}(p).\] Thus, \(N_{lc}(p)\geq\max\{N_{c}(p),N_{g}(p)\}\). By Theorem 1 e) part ii), we know that \(N_{c}(x)\leq N_{c}(z_{i})\) and \(N_{g}(x)\leq N_{g}(z_{i})\) for all \(z_{i}\)'s. Thus, we have: \[\max\{N_{c}(x),N_{g}(x)\}\leq\max\{N_{c}(z_{i}),N_{g}(z_{i})\}\leq N_{lc}(z_{i}).\] Given a monomial, we can use the complexity non-increasing principle to get an upper (resp. lower) bound on the number of high- (resp. low-)complexity segments for all standard monomials that are contained in the given monomial. When we compute Equation (14) in the following subsections, we can estimate the type of standard monomials appearing in Equation (14) without performing actual computation by applying the complexity non-increasing principle.
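The counting identities underlying the proof (e.g. \(N_{hc}(p)+N_{aek}(p)=N_{a}(p)+N_{k}(p)\) and \(N_{lc}(p)\geq\max\{N_{c}(p),N_{g}(p)\}\)) can be checked mechanically for words built from the six segments. The script below is an illustrative check (not from the paper); it represents a monomial simply as a list of segments and counts letters in the concatenated word.

```python
import random

SEGMENTS = ["aek", "afh", "bdk", "bfg", "cdh", "ceg"]
HIGH = {"aek", "afh", "bdk"}  # high-complexity segments; the rest are low-complexity

random.seed(0)
for _ in range(1000):
    # A random product of 1..8 segments, e.g. aek*cdh*bfg*...
    segs = [random.choice(SEGMENTS) for _ in range(random.randint(1, 8))]
    word = "".join(segs)
    n_hc = sum(1 for s in segs if s in HIGH)
    n_lc = len(segs) - n_hc
    n_aek = segs.count("aek")
    # N_hc + N_aek = N_a + N_k: 'a' occurs in aek, afh and 'k' in aek, bdk only.
    assert n_hc + n_aek == word.count("a") + word.count("k")
    # N_lc >= max(N_c, N_g): 'c' occurs in cdh, ceg and 'g' in bfg, ceg only.
    assert n_lc >= max(word.count("c"), word.count("g"))
print("segment-counting identities hold on 1000 random segment words")
```

This only checks the bookkeeping between letter counts and segment counts; the substantive part of Theorem 2, the monotonicity of \(N_{a},N_{k},N_{c},N_{g}\) under decomposition, is supplied by Theorem 1 e).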
#### 4.4.2. Monomials in form \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-i-j}\) with \(r+s=i\) and \(0\leq j\leq m-i\)

First, notice that by our assumption, we already know the Haar states of \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\) and \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-i-j}\) with \(s\geq 1\) and \(j=1\) since these monomials end with \(bfgcdh(ceg)^{m-i-1}\). To compute the Haar state of \((bdk)^{i}cdh(ceg)^{m-i-1}\), we use equation basis \((bdk)^{i}(ceg)^{m-i}\) with comparing basis \((aek)^{m-1}afh\). Using Theorem 1 e), we know that the derived linear relation only contains standard monomials in the form \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-j-r-s}\) since no generator \(a\) can appear in the newly generated monomials. Thus, the only monomial with unknown Haar state appearing in the linear relation is \((bdk)^{i}cdh(ceg)^{m-i-1}\) and we can compute its Haar state. This finishes the case \(j=1\) for \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\). Similarly, we can compute the Haar state of \((afh)^{i}bfg(ceg)^{m-i-1}\) using the linear relation derived from equation basis \((afh)^{i}(ceg)^{m-i}\) with comparing basis \((aek)^{m-1}bdk\). Now assume we know the Haar state of \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\) for all \(j\leq t-1\). To compute the Haar state of case \(j=t\), we use equation basis \((bdk)^{r}(bfg)^{s}(cdh)^{t-1}(ceg)^{m-i-t+1}\) and comparing basis \((aek)^{m-1}afh\). Here, we have to compute the case \(r=0\) first, then the case \(r=1\), the case \(r=2\), and so on until the case \(r=i\). This is analogous to the zigzag recursive relation. To solve \((afh)^{r}(cdh)^{s}(bfg)^{j}(ceg)^{m-i-j}\), we use equation basis \((afh)^{r}(cdh)^{s}(bfg)^{j-1}(ceg)^{m-i-j+1}\) and comparing basis \((aek)^{m-1}bdk\) and use the same strategy as for monomials \((bdk)^{r}(bfg)^{s}(cdh)^{j}(ceg)^{m-i-j}\).
#### 4.4.3. Monomials with one high-complexity segment ending with \((ceg)^{m-i-1}\) and \(bfgcdh(ceg)^{m-i-2}\)

By our assumption, the Haar state of \(bdk(bfg)^{r}(cdh)^{i-r}(ceg)^{m-i-1}\) and \(afh(cdh)^{r}(bfg)^{i-r}(ceg)^{m-i-1}\) are known for \(0\leq r\leq i-1\). Hence, we can compute the Haar state of \(aek(bfg)^{r}(cdh)^{i-r}(ceg)^{m-i-1}\) with \(1\leq r\leq i-1\) using the equation \(D_{q}*(bfg)^{r}(cdh)^{i-r}(ceg)^{m-i-1}=(bfg)^{r}(cdh)^{i-r}(ceg)^{m-i-1}\). Next, we compute the Haar state of \(bdk(bfg)^{i}(ceg)^{m-i-1}\) and \(afh(cdh)^{i}(ceg)^{m-i-1}\). To compute the Haar state of \(bdk(bfg)^{i}(ceg)^{m-i-1}\), we use the linear relation derived from equation basis \((bfg)^{i+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}afh\). To compute the Haar state of \(afh(cdh)^{i}(ceg)^{m-i-1}\), we use the linear relation derived from equation basis \((cdh)^{i+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}bdk\). Then we compute the Haar state of \(aek(bfg)^{i}(ceg)^{m-i-1}\) and \(aek(cdh)^{i}(ceg)^{m-i-1}\) using the equation \(D_{q}*(bfg)^{i}(ceg)^{m-i-1}=(bfg)^{i}(ceg)^{m-i-1}\) and \(D_{q}*(cdh)^{i}(ceg)^{m-i-1}=(cdh)^{i}(ceg)^{m-i-1}\), respectively. Next, we compute the Haar state of \(aek(bfg)^{r}(cdh)^{s}(ceg)^{m-i-2}\), \(bdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-2}\), and \(afh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-2}\) with \(r,s\geq 1\). By our assumption, we know the Haar state of \(bdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-2}\) and \(afh(cdh)^{s}(bfg)^{r}(ceg)^{m-i-2}\) with \(r,s\geq 2\). Thus, we compute the Haar state of \(aek(cdh)^{s}(bfg)^{r}(ceg)^{m-i-2}\) with \(r,s\geq 2\) by the equation \(D_{q}*(cdh)^{s}(bfg)^{r}(ceg)^{m-i-2}=(cdh)^{s}(bfg)^{r}(ceg)^{m-i-2}\). For the case \(r=1\), the Haar state of \(bdkbfg(cdh)^{i}(ceg)^{m-i-2}\) is known by our assumption. Also, the Haar state of \((bfg)^{2}(cdh)^{i+1}(ceg)^{m-i-3}\) is known.
We can compute the Haar state of \(afhbfg(cdh)^{i}(ceg)^{m-i-2}\) by the linear relation derived from equation basis \(bfg(cdh)^{i+1}(ceg)^{m-i-2}\) and comparing basis \((aek)^{m-1}bdk\). Then, we compute the Haar state of \(aekbfg(cdh)^{i}(ceg)^{m-i-2}\) using the equation \(D_{q}*bfg(cdh)^{i}(ceg)^{m-i-2}=bfg(cdh)^{i}(ceg)^{m-i-2}\). For the case \(s=1\), the Haar state of \(afhcdh(bfg)^{i}(ceg)^{m-i-2}\) is known by our assumption. Also, the Haar state of \((cdh)^{2}(bfg)^{i+1}(ceg)^{m-i-3}\) is known. We can compute the Haar state of \(bdkcdh(bfg)^{i}(ceg)^{m-i-2}\) by the linear relation derived from equation basis \(cdh(bfg)^{i+1}(ceg)^{m-i-2}\) and comparing basis \((aek)^{m-1}afh\). Then, we compute the Haar state of \(aekcdh(bfg)^{i}(ceg)^{m-i-2}\) using the equation \(D_{q}*cdh(bfg)^{i}(ceg)^{m-i-2}=cdh(bfg)^{i}(ceg)^{m-i-2}\).

#### 4.4.4. Monomials with two high-complexity segments ending with \((ceg)^{m-i-1}\)

Finally, we compute the Haar states of monomials with two high-complexity segments ending with \((ceg)^{m-i-1}\). We start with monomials in form \(afhbdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) and \(bdkafh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\). Notice that \(afhbdkceg\) and \(bdkafhceg\) can be written as a linear combination of \(aekbfgcdh\) and other monomials with at most one high-complexity segment. Thus, \(afhbdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) and \(bdkafh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) can be written as a linear combination of \(aek(bfg)^{r+1}(cdh)^{s+1}(ceg)^{m-i-2}\) and other monomials with at most one high-complexity segment (see Equation (21) and Equation (22)). Thus, we can compute the Haar state of \(afhbdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) and \(bdkafh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) using the Haar states we already know. We can compute the Haar state of \((bdk)^{2}(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) using the linear relation derived from equation basis \(bdk(bfg)^{r+1}(cdh)^{s}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}afh\).
We can compute the Haar state of \((afh)^{2}(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\) using the linear relation derived from equation basis \(afh(bfg)^{r}(cdh)^{s+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}bdk\). Finally, to compute the Haar state of \(aekafh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\), we use the equation \(D_{q}*afh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}=afh(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\). To compute the Haar state of \(aekbdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\), we use the equation \(D_{q}*bdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}=bdk(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\). To compute the Haar state of \((aek)^{2}(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\), we use the equation \(aek*D_{q}*(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}=aek(bfg)^{r}(cdh)^{s}(ceg)^{m-i-1}\). At this point, we have solved the Haar state of all monomials with at most two high-complexity segments ending with \((ceg)^{m-i-1}\) and monomials with at most one high-complexity segment ending with \(bfgcdh(ceg)^{m-i-2}\). Starting from the next subsection, we assume that the Haar states of monomials with at most \(2\leq w\leq i\) high-complexity segments ending with \((ceg)^{m-i-1}\) and monomials with at most \(w-1\) high-complexity segments ending with \(bfgcdh(ceg)^{m-i-2}\) are known. Then, we compute the Haar states of monomials with \(w+1\) high-complexity segments ending with \((ceg)^{m-i-1}\) and monomials with \(w\) high-complexity segments ending with \(bfgcdh(ceg)^{m-i-2}\).

#### 4.4.5. Monomials in the form \((afh)^{w}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\), \((afh)^{w-1}bdk(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\), and \(aek(afh)^{w-1}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) with \(0\leq j\leq i-w\)

We know the Haar states of \((afh)^{w}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) for all \(0\leq j\leq i-w-1\) from subsection 4.4.2.
To compute the case \(j=i-w\), we have to solve the Haar state of monomials \((afh)^{w}(cdh)^{i-w+1}bfg(ceg)^{m-i-2}\), \((afh)^{w-1}bdk(cdh)^{i-w+1}bfg(ceg)^{m-i-2}\), and \((afh)^{w-1}aek(cdh)^{i-w+1}bfg(ceg)^{m-i-2}\) at the same time. To construct a linear system containing the three monomials, we use linear relations derived from:

1. \((afh)^{w-1}*D_{q}*(cdh)^{i+1-w}bfg(ceg)^{m-i-2}=(afh)^{w-1}(cdh)^{i+1-w}bfg(ceg)^{m-i-2}\);
2. equation basis \((afh)^{w}bfg(cdh)^{i-w}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}afh\);
3. equation basis \((afh)^{w-1}bdk(cdh)^{i-w+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}bdk\).

The system matrix is in Appendix D. We can show that the system matrix is invertible for all \(0<|q|<1\). When \(0\leq j\leq i-w-1\), we can compute the Haar state of \((afh)^{w-1}aek(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) using the linear relation derived from equation basis \((afh)^{w}(cdh)^{j}(bfg)^{i-w-j+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}afh\). Then, we can compute the Haar state of \((afh)^{w-1}bdk(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) using the equation \((afh)^{w-1}*D_{q}*(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}=(afh)^{w-1}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\). Finally, we compute the Haar state of \(aek(afh)^{w-1}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) for \(0\leq j\leq i-w\) by applying Equation (19) in Appendix A repeatedly. Indeed, Equation (19) and the complexity non-increasing principle tell us that the decomposition of \((afh)^{w-1}aek(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) is a linear combination of \(aek(afh)^{w-1}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\), of \((afh)^{l}bdk(afh)^{w-1-l}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) for \(0\leq l\leq w-1\), and of other standard monomials with no more than \(w-1\) high-complexity segments ending with \(cdhbfg(ceg)^{m-i-2}\).
Then, Eq(18) and the complexity non-increasing principle tell us that each \((afh)^{l}bdk(afh)^{w-1-l}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) can be written as a linear combination of \((afh)^{w-1}bdk(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) and other standard monomials with no more than \(w-1\) high-complexity segments ending with \(cdhbfg(ceg)^{m-i-2}\). Thus, \(aek(afh)^{w-1}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) is the only monomial in the decomposition with unknown Haar state, and we can compute its Haar state from the decomposition.

#### 4.4.6. Monomials in the form \((aek)^{n}(afh)^{w-j}(bdk)^{j-n}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) with \(n\leq j\leq w\), \(0\leq n\leq w\)

We start with the cases \(n=0\) and \(n=1\). From subsection 4.4.5, we solved the Haar states of standard monomials corresponding to \(j=1\). Without loss of generality, we assume that the Haar states of standard monomials corresponding to \(j\leq t-1\) are known. Then, the Haar state of \((afh)^{w-t}aek(bdk)^{t-1}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) (\(j=t,n=1\)) can be computed using the linear relation derived from equation basis \((afh)^{w-t+1}(bdk)^{t-1}(cdh)^{s}(bfg)^{r+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}afh\). Using the equation \((afh)^{w-t}*D_{q}*(bdk)^{t-1}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}=(afh)^{w-t}(bdk)^{t-1}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) together with the Haar state of \((afh)^{w-t+1}(bdk)^{t-1}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) (\(j=t-1,n=0\)), we can compute the Haar state of \((afh)^{w-t}(bdk)^{t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) (\(j=t,n=0\)). Then, the Haar state of \(aek(afh)^{w-t}(bdk)^{t-1}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) (\(j=t,n=1\)) can be computed using the same strategy we used for computing the Haar state of \(aek(afh)^{w-1}(cdh)^{j+1}(bfg)^{i-w-j+1}(ceg)^{m-i-2}\) in the previous subsection. Assume that we have solved all the Haar states of standard monomials corresponding to \(n\leq t-1\), \(t-1\leq j\leq w\).
To compute the Haar states of standard monomials corresponding to \(n=t\), notice that for \(t\leq j\leq w\) we have the following equation:
\[\begin{aligned}
&(aek)^{t-1}(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&=(aek)^{t-1}*D_{q}*(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&=(aek)^{t}(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&\quad-q*(aek)^{t-1}(afh)^{w-j+1}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&\quad-q*(aek)^{t-1}bdk(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&\quad+q^{2}*(aek)^{t-1}bfg(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&\quad+q^{2}*(aek)^{t-1}cdh(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\\
&\quad-q^{3}*(aek)^{t-1}ceg(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}
\end{aligned}\]
Using Eq(19) and the complexity non-increasing principle, we can decompose \((aek)^{t-1}bdk(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) as a linear combination of \((aek)^{t-1}(afh)^{w-j}(bdk)^{j-t+1}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) and other standard monomials with fewer high-complexity segments ending with \(bfgcdh(ceg)^{m-i-2}\). Hence, we can compute the Haar state of every other monomial appearing in the equation except \((aek)^{t}(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\), and we can then solve for the Haar state of \((aek)^{t}(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\) from the equation. Using an inductive argument, we can compute the Haar states of standard monomials corresponding to all possible \(n\) and \(j\) values.

#### 4.4.7. Monomials with \(w+1\) high-complexity segments ending with \((ceg)^{m-i-1}\)

We start with monomials in the form \((aek)^{n}(afh)^{w+1-j}(bdk)^{j-n}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) with \(n+1\leq j\leq w\) and \(0\leq n\leq w-1\). Monomials in this form contain at least one \(afh\) segment and one \(bdk\) segment.
By Eq(15) in Appendix A, the monomial can be rewritten as a combination of \((aek)^{n}(afh)^{w-j}[afhbdkceg](bdk)^{j-n-1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-2}\) and other monomials with fewer high-complexity segments ending with \(bfgcdh(ceg)^{m-i-2}\) or \((ceg)^{m-i-1}\). Then, we can apply Equation (21) and Equation (22) in Appendix A to rewrite it as a linear combination of monomials with known Haar states. Next, we compute the Haar states of monomials in the form \((afh)^{w+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) and \((bdk)^{w+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\). To compute \((afh)^{w+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\), we use equation basis \((afh)^{w}(cdh)^{s+1}(bfg)^{r}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}bdk\). To compute \((bdk)^{w+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\), we use equation basis \((bdk)^{w}(cdh)^{s}(bfg)^{r+1}(ceg)^{m-i-1}\) and comparing basis \((aek)^{m-1}afh\). Finally, we consider monomials in the form \((aek)^{j}(afh)^{w-j+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) and \((aek)^{j}(bdk)^{w-j+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) with \(1\leq j\leq w\). To start, we compute the Haar state of \(aek(afh)^{w}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) using the equation \(D_{q}*(afh)^{w}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}=(afh)^{w}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\). If we have computed the Haar state for all \(j\leq n\), then to compute the case \(j=n+1\) we use the equation \((aek)^{n}(afh)^{w-n}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}=(aek)^{n}*D_{q}*(afh)^{w-n}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\). We can compute the Haar state of \((aek)^{j}(bdk)^{w-j+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) in the same way. When \(j=w+1\), we use the equation \((aek)^{w}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}=(aek)^{w}*D_{q}*(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\) to compute the Haar state of \((aek)^{w+1}(cdh)^{s}(bfg)^{r}(ceg)^{m-i-1}\).
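For reference, every \(D_{q}*\) manipulation in this section instantiates the same left-multiplication expansion, which can be read off the displayed equation for \((aek)^{t-1}*D_{q}*(\cdots)\) earlier in this section (restated here schematically as our paraphrase, with \(X\) an arbitrary monomial; it is not quoted from the paper's appendices):

```latex
D_{q}*X \;=\; aek\,X \;-\; q\,afh\,X \;-\; q\,bdk\,X \;+\; q^{2}\,bfg\,X \;+\; q^{2}\,cdh\,X \;-\; q^{3}\,ceg\,X
```

Combined with an identity of the form \(D_{q}*X=X\), each such equation therefore gives one linear relation between the Haar states of \(X\) and the six left-multiplied monomials, from which the single unknown Haar state is solved.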
At this point, we have solved the Haar states of all monomials with at most \(w+1\) high-complexity segments ending with \((ceg)^{m-i-1}\) and monomials with at most \(w\) high-complexity segments ending with \(bfgcdh(ceg)^{m-i-2}\). Using an inductive argument, we can compute the Haar states of all monomials with at most \(i+1\) high-complexity segments ending with \((ceg)^{m-i-1}\) and monomials with at most \(i\) high-complexity segments ending with \(bfgcdh(ceg)^{m-i-2}\). Thus, this subsection shows that we can use induction on the value of \(i\) from \(1\) until \(m-2\) and compute the Haar states of all monomials ending with \(ceg\) or \(bfgcdh\).

### 4.5. Monomials with at most one low-complexity segment

#### 4.5.1. Monomials ending with a \(cdh\) segment or a \(bfg\) segment

Here, we only show the procedure to compute monomials ending with \(cdh\). The case of \(bfg\) is solved similarly. We start with computing the Haar states of \(afh(cdh)^{m-1}\) and \(aek(cdh)^{m-1}\). Since we know the Haar state of \((cdh)^{m}\) from subsection 4.1, we compute the Haar state of \(afh(cdh)^{m-1}\) using the linear relation derived from equation basis \((cdh)^{m}\) and comparing basis \((aek)^{m-1}bdk\). We know the Haar state of \(bdk(cdh)^{m-1}\) from subsection 4.2. Thus, we can compute the Haar state of \(aek(cdh)^{m-1}\) using the equation \(D_{q}*(cdh)^{m-1}=(cdh)^{m-1}\). Hence we have solved the Haar states of all standard monomials with at most one high-complexity segment ending with \((cdh)^{m-1}\). Now, we assume that the Haar states of all standard monomials with at most \(w\geq 1\) high-complexity segments are known and compute the Haar states of all monomials with \(w+1\) high-complexity segments.
We can compute the Haar state of \((afh)^{w+1}(cdh)^{m-w-1}\) from the linear relation derived from equation basis \((afh)^{w}(cdh)^{m-w}\) and comparing basis \((aek)^{m-1}bdk\). Then, we compute the Haar states of \((afh)^{w-j}(bdk)^{j+1}(cdh)^{m-w-1}\) with \(0\leq j\leq w\) by the linear relation derived from equation basis \((afh)^{w-j}(bdk)^{j}bfg(cdh)^{m-w-1}\) and comparing basis \((aek)^{m-1}afh\). Finally, we compute the Haar states of \(aek(afh)^{w-j}(bdk)^{j}(cdh)^{m-w-1}\) with \(0\leq j\leq w\) by the equation \(D_{q}*(afh)^{w-j}(bdk)^{j}(cdh)^{m-w-1}=(afh)^{w-j}(bdk)^{j}(cdh)^{m-w-1}\). The Haar states of the monomials \((aek)^{n}(afh)^{w-j}(bdk)^{j-n}(cdh)^{m-w-1}\) with \(n\leq j\leq w\) are computed using the same strategy as we used in subsection 4.4.6 to compute the Haar state of \((aek)^{t}(afh)^{w-j}(bdk)^{j-t}(cdh)^{s+1}(bfg)^{r+1}(ceg)^{m-i-2}\). At this point, we have solved the Haar states of all monomials with \(w+1\) high-complexity segments ending with \((cdh)^{m-w-1}\). Using an inductive argument, we can compute the Haar states of all monomials ending with a \(cdh\) segment. To compute the Haar states of all monomials ending with a \(bfg\) segment, we replace all the \(bdk\) segments by \(afh\) segments and vice versa, and replace all the \(bfg\) segments by \(cdh\) segments and vice versa, in the process of computing the Haar states of monomials ending with \(cdh\) segments.

#### 4.5.2. Monomials without low-complexity segments

At this point, we have solved the Haar states of all monomials with at least one low-complexity segment, since the number of generators \(c\) and \(g\) cannot decrease; it remains to handle monomials consisting only of high-complexity segments. We start with monomials of the form \((afh)^{m-i}(bdk)^{i}\). When \(i\geq 2\), we use equation basis \((afh)^{m-i}(bdk)^{i-1}bfg\) and comparing basis \((aek)^{m-1}afh\). When \(i=1\), we use equation basis \((afh)^{m-1}cdh\) and comparing basis \((aek)^{m-1}bdk\). Finally, we compute monomials of the form \((aek)^{n}(afh)^{r}(bdk)^{s}\).
To compute the case \(n=1\), we use the equation \(D_{q}*(afh)^{r}(bdk)^{s}=(afh)^{r}(bdk)^{s}\) with \(r+s=m-1\). Now, we assume that the Haar states of monomials in the form \((aek)^{n-1}(afh)^{r}(bdk)^{s}\) with \(r+s=m-n+1\) are known. To compute \((aek)^{n}(afh)^{r^{\prime}}(bdk)^{s^{\prime}}\) with \(r^{\prime}+s^{\prime}=m-n\), we use the equation \((aek)^{n-1}(afh)^{r^{\prime}}(bdk)^{s^{\prime}}=(aek)^{n-1}*D_{q}*(afh)^{r^{\prime}}(bdk)^{s^{\prime}}\). At this point, we have computed the Haar states of all monomials of order \(m\).
2306.06651
Predicting Software Performance with Divide-and-Learn
Predicting the performance of highly configurable software systems is the foundation for performance testing and quality assurance. To that end, recent work has been relying on machine/deep learning to model software performance. However, a crucial yet unaddressed challenge is how to cater for the sparsity inherited from the configuration landscape: the influence of configuration options (features) and the distribution of data samples are highly sparse. In this paper, we propose an approach based on the concept of 'divide-and-learn', dubbed DaL. The basic idea is that, to handle sample sparsity, we divide the samples from the configuration landscape into distant divisions, for each of which we build a regularized Deep Neural Network as the local model to deal with the feature sparsity. A newly given configuration would then be assigned to the right model of division for the final prediction. Experiment results from eight real-world systems and five sets of training data reveal that, compared with the state-of-the-art approaches, DaL performs no worse than the best counterpart on 33 out of 40 cases (within which 26 cases are significantly better) with up to 1.94x improvement on accuracy; requires fewer samples to reach the same/better accuracy; and producing acceptable training overhead. Practically, DaL also considerably improves different global models when using them as the underlying local models, which further strengthens its flexibility. To promote open science, all the data, code, and supplementary figures of this work can be accessed at our repository: https://github.com/ideas-labo/DaL.
Jingzhi Gong, Tao Chen
2023-06-11T11:16:27Z
http://arxiv.org/abs/2306.06651v4
# Predicting Software Performance with Divide-and-Learn

###### Abstract.

Predicting the performance of highly configurable software systems is the foundation for performance testing and quality assurance. To that end, recent work has been relying on machine/deep learning to model software performance. However, a crucial yet unaddressed challenge is how to cater for the sparsity inherited from the configuration landscape: the influence of configuration options (features) and the distribution of data samples are highly sparse. In this paper, we propose an approach based on the concept of "divide-and-learn", dubbed DaL. The basic idea is that, to handle sample sparsity, we divide the samples from the configuration landscape into distant divisions, for each of which we build a regularized Deep Neural Network as the local model to deal with the feature sparsity. A newly given configuration would then be assigned to the right model of division for the final prediction. Experiment results from eight real-world systems and five sets of training data reveal that, compared with the state-of-the-art approaches, DaL performs no worse than the best counterpart on 33 out of 40 cases (within which 26 cases are significantly better) with up to 1.94x improvement on accuracy; requires fewer samples to reach the same/better accuracy; and produces acceptable training overhead. Practically, DaL also considerably improves different global models when using them as the underlying local models, which further strengthens its flexibility. To promote open science, all the data, code, and supplementary figures of this work can be accessed at our repository: [https://github.com/ideas-labo/DaL](https://github.com/ideas-labo/DaL).
Configurable System, Machine Learning, Deep Learning, Performance Prediction, Performance Learning, Configuration Learning
… limited due to expensive measurement--a typical case of _sample sparsity_ (Koren et al., 2016; Koren et al., 2017; Koren et al., 2018). This is particularly true when not all configurations are valid (Koren et al., 2018). Existing work has been primarily focusing on addressing feature sparsity, through using tree-like models (Koren et al., 2017); via feature selection (Koren et al., 2017; Koren et al., 2018; Koren et al., 2018); or deep learning (Koren et al., 2017; Koren et al., 2018; Koren et al., 2018). However, the sample sparsity has almost been ignored, and it can still be a major obstacle to the effectiveness of machine learning-based performance models. To address the above gap, in this paper, we propose DaL, an approach to model software performance via the concept of "divide-and-learn". The basic idea is that, to handle sample sparsity, we divide the samples (configurations and their performance) into different divisions, each of which is learned by a local model.
In this way, the highly sparse samples can be split into different locally smooth regions of data samples, and hence their patterns and feature sparsity can be better captured. In a nutshell, our main contributions are:

1. We formulate, on top of the regression of performance, a new classification problem without explicit labels.
2. We extend Classification and Regression Tree (CART) (Koren et al., 2017) as a clustering algorithm to "divide" the samples into different divisions with similar characteristics, for each of which we build a local regularized Deep Neural Network (rDNN) (Koren et al., 2018).
3. Newly given configurations would be assigned into a division inferred by a Random Forest classifier (Koren et al., 2017), which is trained using the pseudo labeled data from the CART. The rDNN model of the assigned division would be used for the final prediction thereafter.
4. Under eight systems with diverse performance attributes, scale, and domains, as well as five different training sizes, we evaluate DaL against four state-of-the-art approaches and with different underlying local models.

The experiment results are encouraging: compared with the best state-of-the-art approach, we demonstrate that DaL

* achieves no worse accuracy on 33 out of 40 cases, with 26 of them being significantly better; the improvements can be up to 1.94\(\times\) against the best counterpart;
* uses fewer samples to reach the same/better accuracy;
* incurs acceptable training time considering the improvements in accuracy.

Interestingly, we also reveal that:

* DaL can considerably improve the accuracy of an arbitrarily given model when it serves as the local model for each division, compared with using the model alone as a global model (which is used to learn the entire training dataset). However, the original DaL with rDNN as the local model still produces the most accurate results.
* DaL's error tends to correlate quadratically with its only parameter \(d\) that sets the number of divisions. Therefore, a middle value (between 0 and the bound set by CART) can reach a good balance between handling sample sparsity and providing sufficient training data for the local models, e.g., \(d=1\) or \(d=2\) (2 or 4 divisions) in this work.

This paper is organized as follows: Section 2 introduces the problem formulation and the notions of sparsity in software performance learning. Section 3 delineates the tailored problem formulation and our detailed designs of DaL. Section 4 presents the research questions and the experiment design, followed by the analysis of results in Section 5. The reasons why DaL works, its strengths, limitations, and threats to validity are discussed in Section 6. Sections 7, 8, and 9 present the related work, conclude the paper, and elaborate on data availability, respectively.

## 2. Background and Motivation

In this section, we introduce the background and the key observations that motivate this work.

### Problem Formulation

In the software engineering community, the question introduced at the beginning of this paper has been most commonly addressed by using various machine learning models (or at least partially) (Koren et al., 2017; Koren et al., 2018; Koren et al., 2018; Koren et al., 2018; Koren et al., 2018), in a data-driven manner that relies on observing the software's actual behaviors and builds a statistical model to predict the performance without heavy human intervention (Bengio et al., 2016). Formally, modeling the performance of software with \(n\) configuration options is a regression problem that builds: \[\mathcal{P}=f(\mathbf{\mathcal{S}}),\mathcal{P}\in\mathbb{R} \tag{1}\] whereby \(\mathbf{\mathcal{S}}\) denotes the training samples of configuration-performance pairs, such that \(\overline{\mathbf{x}}\in\mathbf{\mathcal{S}}\).
\(\overline{\mathbf{x}}\) is a configuration and \(\overline{\mathbf{x}}=(x_{1},x_{2},\cdots,x_{n})\), where each configuration option \(x_{i}\) is either binary or categorical/numerical. The corresponding performance is denoted as \(\mathcal{P}\). The goal of machine learning-based modeling is to learn a regression function \(f\) using all training data samples such that, for newly given configurations, the predicted performance is as close to the actual performance as possible.

### Sparsity in Software Performance Learning

It has been known that the configuration space for software systems is generally rugged and sparse with respect to the configuration options (Koren et al., 2017; Koren et al., 2018; Koren et al., 2018; Koren et al., 2018) -- _feature sparsity_, which refers to the fact that only a small number of configuration options are prominent to the performance. We discover that, even with the key options that are the most influential to the performance, the samples still do not exhibit a "smooth" distribution over the configuration landscape. Instead, they tend to be spread sparsely: those with similar characteristics can form arbitrarily different divisions, which tend to be rather distant from each other. This is a typical case of high _sample sparsity_ (Koren et al., 2018; Koren et al., 2018; Koren et al., 2018), and it is often ignored in existing work for software performance learning. In Figure 1, we show examples of the configuration samples measured from four real-world software systems. Clearly, we see that they all exhibit a consistent pattern--the samples tend to form different divisions with two properties: Footnote 1: Similar pattern has been registered on all systems studied in this work.

* **Property 1:** configurations in the same division share closer performance values with smoother changes, but those in different divisions exhibit drastically different performance and can change more sharply.
* **Property 2:** configurations in the same division can have a closer value on at least one key option than those from different divisions.

In this regard, the values of performance and key configuration options determine the characteristics of samples. In general, such a high sample sparsity is caused by two reasons: (1) the inherited consequence of high feature sparsity, and (2) the fact that not all configurations are valid because of the constraints (e.g., an option can be used only if another option has been turned on) (Sutton et al., 2016), thereby there are many "empty areas" in the configuration landscape. When using machine learning models to learn concepts from the above configuration data, the model needs to (1) handle the complex interactions between the configuration options with high feature sparsity and (2) capture the diverse characteristics of configuration samples over all divisions caused by the high sample sparsity, e.g., in Figure 1, where samples in different divisions have diverged performance ranges. For the former challenge, there have been some approaches proposed to target it, such as DeepPerf (Krizhevsky et al., 2014) and Perf-AL (Sutton et al., 2016). However, very little work has aimed to address the latter, which can be the main obstacle for a model to learn and generalize the data for predicting the performance of the newly-given configuration. This is because those highly sparse samples increase the risk for models to overfit the training data, for instance by memorizing and biasing values in certain respective divisions (Krizhevsky et al., 2014), especially considering that we can often have limited samples from the configuration landscape due to the expensive measurement of configurable systems. The above is the main motivation of this work, for which we ask: how can we improve the accuracy of predicting software performance under such a high sample sparsity?

## 3. Divide-and-Learn for Performance Prediction

Drawing on our observations of the configuration data, we propose DaL -- an approach that enables better prediction of the software performance via "divide-and-learn". To mitigate the sample sparsity issue, the key idea of DaL is that, since different divisions of configurations show drastically diverse characteristics, i.e., rather different performance values with distant values of key configuration options, we seek to independently learn a local model for each of those divisions that contain _locally smooth_ samples, so that the learning can be more focused on the particular characteristics exhibited by each division and better handle the feature sparsity. Yet, this requires us to formulate, on top of the original regression problem of predicting the performance value, a new classification problem without explicit labels. As such, we modify the original problem formulation (Equation 1) as below: \[\mathbf{\mathcal{D}}=g(\mathbf{\mathcal{S}}) \tag{2}\] Overall, we aim to achieve three goals:

* **Goal 1:** dividing the data samples into diverse yet more focused divisions \(\mathbf{\mathcal{D}}\) (building function \(g\));
* **Goal 2:** training a dedicated local model for each division \(D_{i}\) (building function \(f\));
* **Goal 3:** assigning a newly coming configuration into the right model for prediction (using functions \(g\) and \(f\)).

Figure 2 illustrates the overall architecture of DaL, in which there are three core phases, namely _Dividing_, _Training_, and _Predicting_. A pseudo code can also be found in Algorithm 1.

### Dividing

The very first phase in DaL is to appropriately divide the data into more focused divisions while doing so by considering both the configuration options and performance values. To that end, the key question we seek to address is: how to effectively cluster the performance data with similar sample characteristics (**Goal 1**)?
Indeed, for dividing the data samples, it makes sense to consider various unsupervised clustering algorithms, such as \(k\)-means (Krizhevsky et al., 2014), BIRCH [66] or DBSCAN [14]. However, we found that they are ill-suited for our problem, because:

* the distance metrics are highly system-dependent, for example, depending on the number of configuration options and whether they are binary/numeric options;
* it is difficult to combine the configuration options and performance value with appropriate discrimination;
* and clustering algorithms are often non-interpretable.

Figure 1. Projection of configurations in the landscape with respect to the performance and two most important options (the divisions are circled).

Figure 2. The architecture of DaL.

As a result, in DaL, we extend Classification and Regression Tree (CART) as the clustering algorithm (lines 3-11 in Algorithm 1) since (1) it is simple with interpretable/analyzable structure; (2) it ranks the important options as part of training (good for dealing with the feature sparsity issue); and (3) it does not suffer the issues above [5; 8; 21; 22; 43; 44; 49]. As illustrated in Figure 3, CART is originally a supervised and binary tree-structured model, which recursively splits some, if not all, configuration options and the corresponding data samples based on tuned thresholds. A split would result in two divisions, each of which can be further split. In this work, we at first train the CART on the available samples of configurations and performance values, during which we use the most common mean performance of all samples for each division \(D_{i}\) as the prediction [21; 22]: \[\overline{y}_{D_{i}}=\frac{1}{|D_{i}|}\sum_{y_{j}\in D_{i}}y_{j} \tag{4}\] in which \(y_{j}\) is a performance value.
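Equation (4) is simply the sample mean within a division. As a minimal sketch (the five performance values below are hypothetical, chosen only so that they average to the 112-second prediction discussed next):

```python
def division_prediction(perf_values):
    """Equation (4): a CART division predicts the mean performance of its samples."""
    return sum(perf_values) / len(perf_values)

# Five hypothetical runtime measurements (seconds) falling into one division.
print(division_prediction([100.0, 105.0, 110.0, 115.0, 130.0]))  # 112.0
```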
For example, Figure 3 shows a projected example, in which the configuration that satisfies "rtQuality=true" and "threads=3" would lead to an inferred runtime of 112 seconds, which is calculated over all the 5 samples involved using Equation 4. By choosing/ranking options that serve as the splits and tuning their thresholds, in DaL, we seek to minimize the following overall loss function during the CART training: \[\mathcal{L}=\frac{1}{|D_{l}|}\sum_{y_{j}\in D_{l}}\left(y_{j}-\overline{y}_{D_{l}}\right)^{2}+\frac{1}{|D_{r}|}\sum_{y_{j}\in D_{r}}\left(y_{j}-\overline{y}_{D_{r}}\right)^{2} \tag{5}\] where \(D_{l}\) and \(D_{r}\) denote the left and right division from a split, respectively. This ensures that the divisions would contain data samples with similar performance values (**Property 1**) while they are formed with respect to similar values of the key configuration options, as determined by the splits/thresholds at the finest granularity (**Property 2**), i.e., the more important options would appear on the higher level of the tree with excessive splitting. However, here we do not use CART to generalize prediction directly on new data once it is trained, as it has been shown that the splits and simple average of performance values in the division alone can still fail to handle the complex interactions between the options, leading to insufficient accuracy [23]. Further, with our loss function in Equation 5, CART is prone to overfitting2, especially for software quality data [34]. This exacerbates the issue of sample sparsity [27] under a small amount of data samples, which is not uncommon for configurable software systems [23; 52]. Footnote 2: Overfitting means a learned model fits well with the training data but works poorly on new data.
Instead, what we are interested in are the (branch and/or leaf) divisions made therein (with respect to the training data), which enable us to use further dedicated and more focused local models for better generalizing to the new data (lines 6-11 in Algorithm 1). As such, the final prediction is no longer a simple average, while we do not care about the CART overfitting itself as long as it fits the training data well. This is similar to the case of unsupervised clustering, for which the clustering is guided by implicit labels (via the loss function in Equation 5). Specifically, in DaL, we extract the data samples according to the divisions made by the \(d\)th depth of the CART, including all the leaf divisions with depth smaller than \(d\). An example can be seen in Figure 3, where \(d\) is a controllable parameter to be given. In this way, DaL divides the data into a range of \([d+1,2^{d}]\) divisions (\(d\geq 1\)), each of which will be captured by a local model learned thereafter.

Figure 3. Projection of CART for VP8 showing the possible divisions with different colors under alternative depth \(d\).

Note that when the number of data samples in a division is less than the minimum amount required by a model, we merge the two divisions of the same parent node. As a concrete example, from Figure 3, we see that there are two depths: when \(d=1\) there would be two divisions (one branch and one leaf) with 10 and 8 samples respectively; similarly, when \(d=2\) there would be three leaf divisions: two of which have 5 samples each and one is the division with 8 samples from \(d=1\), as it is a leaf. In this case, CART has detected that rtQuality is a more important (binary) option impacting the performance, and hence it should be considered at a higher level in the tree. Note that for numeric options, e.g., threads, the threshold for splitting (threads \(>5\)) is also tuned as part of the training process of CART.
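A rough sketch of this dividing phase, assuming scikit-learn's CART implementation is used (the paper's implementation is based on scikit-learn, but `divide` is our name and the merging of undersized divisions described above is omitted): a depth-bounded `DecisionTreeRegressor` is fitted on the training samples, and each sample is grouped by the leaf (division) it falls into.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def divide(X, y, d=1):
    """Fit a CART of depth d on the training samples and group the samples
    by the division (leaf) they fall into."""
    cart = DecisionTreeRegressor(max_depth=d, random_state=0).fit(X, y)
    leaf_ids = cart.apply(X)  # leaf index of each training sample
    divisions = {leaf: np.where(leaf_ids == leaf)[0] for leaf in np.unique(leaf_ids)}
    return cart, divisions
```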
### Training

Given the divisions produced by the _Dividing_ phase, we train a local model for the samples from each division identified as part of **Goal 2** (lines 1-14 in Algorithm 1). Theoretically, we can pair them with any model. However, as we will show in Section 5.2, the state-of-the-art regularized Deep Neural Network (rDNN) (Krizhevsky et al., 2014) (namely DeepPerf), published at ICSE'19, is the most effective one under DaL as it handles feature sparsity well for configurable software. Indeed, Ha and Zhang (Ha and Zhang, 2019) showed that rDNN is more effective than the others even with small data samples when predicting software performance (in our study, we also evaluate the same systems with small training sample sizes as used in their work). Therefore, in DaL we choose rDNN as the underlying local model by default. In this work, we adopt exactly the same structure and training procedure as those used by Ha and Zhang (Ha and Zhang, 2019), hence we kindly refer interested readers to their work for the training details (Ha and Zhang, 2019). Since the local models of the divisions are independent, we utilize parallel training as part of DaL.

### Predicting

When a new configuration arrives for prediction, DaL chooses the model of the division trained previously to infer its performance. Therefore, the question is: how to assign the new configuration to the right model (**Goal 3**)? A naive solution is to directly feed the configuration into the CART from the _dividing_ phase and check which division it associates with. Yet, since the performance of the new configuration is unforeseen from the CART's training data, this solution requires CART to generalize accurately, which, as mentioned, can easily lead to poor results because CART is overfitting-prone when directly working on new data (Zhu and Zhang, 2019).
Instead, by using the divided samples from the _Dividing_ phase (which serve as pseudo labeled data), we train a Random Forest, a widely used classifier that is resilient to overfitting (Beng et al., 2017; Chen et al., 2017; Chen et al., 2017), to generalize the decision boundary and predict which division the new configuration should be assigned to (lines 1-21 in Algorithm 1). Again, in this way, we are less concerned about the overfitting issue of CART as long as it matches the patterns of the training data well. This now becomes a typical classification problem, but there are only pseudo labels to be used in the training. Using the example from Figure 3 again, if \(d=1\) then the configurations in the 10-sample set would have a label _"division1"_; similarly, those in the 8-sample set would result in a label _"division2"_. However, one issue we experienced is that, even with \(d=1\), the sample sizes of the two divisions can be rather imbalanced, which severely harms the quality of the trained classifier. For example, when training BDB-C with 18 samples, the first split in CART can lead to two divisions with 14 and 4 samples, respectively. Therefore, before training the classifier we use the Synthetic Minority Oversampling Technique (SMOTE) (Beng et al., 2017) to pre-process the pseudo label data, so that the division(s) with much less data (the minority) can be sampled more repeatedly. Finally, the classifier predicts a division whose local model infers the performance of the new configuration.

### Trade-off with the Number of Divisions

Since more divisions mean that the sample space is separated into more loosely related regions for dealing with the sample sparsity, one may expect that the accuracy will be improved, or at least, stay similar, thereby we should use the maximum possible \(d\) from CART in the _dividing phase_. This, however, only holds in the "utopia case" where there is an infinite set of configuration data samples.
In essence, with the design of DaL, the depth \(d\) manages two conflicting goals that influence its accuracy: 1. greater ability to handle sample sparsity by separating the distant samples into divisions, each of which is learned by an isolated local model; 2. and a larger amount of data samples in each division for the local model to be able to generalize. Clearly, a greater \(d\) may benefit goal (1) but it will inevitably damage goal (2), since it is possible for CART to generate divisions with imbalanced sample sizes. As a result, we see \(d\) as a value that controls the trade-off between the two goals, and neither a too small nor a too large \(d\) would be ideal, as the former would lose the ability to deal with sample sparsity while the latter would leave too little data for a local model to learn, hence introducing noise into the overall prediction. Similar to the fact that we cannot theoretically justify how much data is sufficient for a model to learn a concept (Zhu and Zhang, 2019), it is also difficult to prove how many divisions are sufficient for handling the sample sparsity in performance modeling. However, in Section 5.3, we will empirically demonstrate that there is an (upward) quadratic correlation between the \(d\) value and the error incurred by DaL due to the conflict between the above two goals.

## 4. Experiment Setup

Here, we delineate the settings of our evaluation. In this work, DaL is implemented based on TensorFlow and scikit-learn. All experiments were carried out on a machine with an Intel Core i7 2GHz CPU and 16GB RAM.

### Research Questions

In this work, we comprehensively assess DaL by answering the following research questions (RQ): * **RQ1:** How accurate is DaL compared with the state-of-the-art approaches for software performance prediction? * **RQ2:** Can DaL benefit different models when they are used locally therein for predicting software performance? * **RQ3:** What is the sensitivity of DaL's accuracy to \(d\)?
* **RQ4:** What is the model building time for DaL? We ask **RQ1** to assess the effectiveness of DaL under different sample sizes against the state-of-the-art. Since the default rDNN in DaL is replaceable, we study **RQ2** to examine how the concept of "divide-and-learn" can benefit any given local model and whether using rDNN as the underlying local model is the best option. In **RQ3**, we examine how the depth of division (\(d\)) can impact the performance of DaL. Finally, we examine the overall overhead of DaL in **RQ4**.

### Subject Systems

We use the same datasets of all valid configurations from real-world systems as widely used in the literature (Zhou et al., 2017; Li et al., 2018; Zhang et al., 2019; Li et al., 2019; Li et al., 2019). To reduce noise, we remove those that contain missing measurements or invalid configurations. As shown in Table 1, we consider eight configurable software systems with diverse domains, scales, and performance concerns. Some of those contain only binary configuration options (e.g., x264) while the others involve mixed options (binary and numeric), e.g., HSMGP, which can be more difficult to model (Li et al., 2019). The configuration data of all the systems are collected by prior studies using the standard benchmarks with repeated measurement (Zhou et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019). For example, the configurations of Apache, a popular Web server, are benchmarked using the tools Autobench and Httperf, where workloads are generated and increased until reaching the point before the server crashes, and then the maximum load is marked as the performance value (Zhou et al., 2017). The process repeats a few times for each configuration to ensure reliability.
To ensure generalizability of the results, for each system, we follow the protocol used by existing work (Li et al., 2019; Li et al., 2019; Li et al., 2019) to obtain five sets of training sample sizes in the evaluation: * **Binary systems:** We randomly sample \(n\), \(2n\), \(3n\), \(4n\), and \(5n\) configurations and their measurements, where \(n\) is the number of configuration options (Li et al., 2019; Li et al., 2019). * **Mixed systems:** We leverage the sizes suggested by SPLConqueror (Li et al., 2019) (a state-of-the-art approach) depending on the amount of budget. The resulting sizes are illustrated in Table 2. All the remaining samples in the dataset are used for testing. ### Metric and Statistical Validation #### 4.3.1. Accuracy For all the experiments, mean relative error (MRE) is used as the evaluation metric for prediction accuracy, since it provides an intuitive indication of the error and has been widely used in the domain of software performance prediction (Zhou et al., 2017; Li et al., 2019; Li et al., 2019). Formally, the MRE is computed as: \[MRE=\frac{1}{k}\times\sum_{t=1}^{k}\frac{|A_{t}-P_{t}|}{A_{t}}\times 100\% \tag{6}\] whereby \(A_{t}\) and \(P_{t}\) denote the \(t\)th actual and predicted performance, respectively. To mitigate bias, all experiments are repeated for 30 runs via bootstrapping without replacement. Note that excluding replacement is a common strategy for the performance learning of configurations, as it is rare for a model to learn from the same configuration sample more than once (Li et al., 2019). #### 4.3.2. Statistical Test Since our evaluation commonly involves comparing more than two approaches, we apply the Scott-Knott test (Zhou et al., 2017) to evaluate the statistical significance of the difference in MRE over 30 runs, as recommended by Mittas and Angelis (Mittas and Angelis, 2018). In a nutshell, Scott-Knott sorts the list of treatments (the approaches that model the system) by their median values of the MRE.
Next, it splits the list into two sub-lists with the largest expected difference (Zhou et al., 2017). For example, suppose that we compare \(A\), \(B\), and \(C\); a possible split could be \(\{A,B\}\), \(\{C\}\), with the rank (\(r\)) of 1 and 2, respectively. This means that, in the statistical sense, \(A\) and \(B\) perform similarly, but they are significantly better than \(C\). Formally, the Scott-Knott test aims to find the best split by maximizing the difference \(\Delta\) in the expected mean before and after each split: \[\Delta=\frac{|l_{1}|}{|l|}(\overline{l_{1}}-\overline{l})^{2}+\frac{|l_{2}|}{|l|}(\overline{l_{2}}-\overline{l})^{2} \tag{7}\] whereby \(|l_{1}|\) and \(|l_{2}|\) are the sizes of the two sub-lists (\(l_{1}\) and \(l_{2}\)) from list \(l\) with a size \(|l|\). \(\overline{l_{1}}\), \(\overline{l_{2}}\), and \(\overline{l}\) denote their mean MRE. During the splitting, we apply a statistical hypothesis test \(H\) to check if \(l_{1}\) and \(l_{2}\) are significantly different. This is done by using bootstrapping and \(\hat{A}_{12}\) (Zhou et al., 2017). If that is the case, Scott-Knott recurses on the splits. In other words, we divide the approaches into different sub-lists if both bootstrap sampling and the effect size test suggest that a split is statistically significant (with a confidence level of 99%) and with a good effect size \(\hat{A}_{12}\geq 0.6\). The sub-lists are then ranked based on their mean MRE.
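The two measures above follow directly from Equations 6 and 7; a minimal sketch in plain Python (helper names are ours; the bootstrap hypothesis test and \(\hat{A}_{12}\) effect size check of the full Scott-Knott procedure are omitted):

```python
def mre(actual, predicted):
    """Mean relative error (Equation 6), as a percentage."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def split_delta(l, cut):
    """Delta of Equation 7 for splitting the sorted list l of MREs at position cut."""
    mean = lambda xs: sum(xs) / len(xs)
    l1, l2, m = l[:cut], l[cut:], sum(l) / len(l)
    return (len(l1) * (mean(l1) - m) ** 2 + len(l2) * (mean(l2) - m) ** 2) / len(l)

def best_split(l):
    """The candidate split (maximizing Delta) that Scott-Knott then validates."""
    return max(range(1, len(l)), key=lambda cut: split_delta(l, cut))
```

For example, on the sorted MREs `[1.0, 1.1, 9.0]`, the split `{1.0, 1.1} | {9.0}` maximizes \(\Delta\), mirroring the \(\{A,B\}\), \(\{C\}\) example above.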
\begin{table} \begin{tabular}{l c c l c c} \hline \hline **System** & \(|\mathcal{B}|/|\mathcal{N}|\) & **Performance** & **Description** & \(|\mathcal{C}|\) & **Used by** \\ \hline Apache & 9/0 & Maximum load & Web server & 192 & (Zhou et al., 2017; Li et al., 2019; Li et al., 2019) \\ BDB-C & 16/0 & Latency (ms) & Database (C) & 2560 & (Zhou et al., 2017; Li et al., 2019; Li et al., 2019) \\ BDB-J & 26/0 & Latency (ms) & Database (Java) & 180 & (Zhou et al., 2017; Li et al., 2019; Li et al., 2019) \\ x264 & 16/0 & Runtime (ms) & Video encoder & 1152 & (Zhou et al., 2017; Li et al., 2019; Li et al., 2019) \\ HSMGP & 11/3 & Runtime (ms) & Multi-grid solver & 3456 & (Zhou et al., 2017; Li et al., 2019) \\ HIPA\({}^{\rm cc}\) & 31/2 & Runtime (ms) & Compiler & 13485 & (Zhou et al., 2017; Li et al., 2019) \\ VP8 & 9/4 & Runtime (ms) & Video encoder & 2736 & (Zhou et al., 2017) \\ Lrzip & 9/3 & Runtime (ms) & Compression tool & 5184 & (Zhou et al., 2017) \\ \hline \hline \end{tabular} \end{table} Table 1. Details of the subject systems. (\(|\mathcal{B}|/|\mathcal{N}|\)) denotes the number of binary/numerical options, and \(|\mathcal{C}|\) denotes the number of valid configurations (full sample size).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **System** & **Size 1** & **Size 2** & **Size 3** & **Size 4** & **Size 5** \\ \hline Apache & \(n\) & \(2n\) & \(3n\) & \(4n\) & \(5n\) \\ BDB-C & \(n\) & \(2n\) & \(3n\) & \(4n\) & \(5n\) \\ BDB-J & \(n\) & \(2n\) & \(3n\) & \(4n\) & \(5n\) \\ x264 & \(n\) & \(2n\) & \(3n\) & \(4n\) & \(5n\) \\ HSMGP & 77 & 173 & 384 & 480 & 864 \\ HIPA\({}^{\rm cc}\) & 261 & 528 & 736 & 1281 & 2631 \\ VP8 & 121 & 273 & 356 & 467 & 830 \\ Lrzip & 127 & 295 & 386 & 485 & 907 \\ \hline \hline \end{tabular} \end{table} Table 2. The training sample sizes used. \(n\) denotes the number of configuration options in a binary system.

## 5. Evaluation

### Comparing with the State-of-the-art

#### 5.1.1.
Method To understand how DaL performs compared with the state-of-the-art, we assess its accuracy against both the standard approaches that rely on statistical learning, i.e., SPLConqueror (Wang et al., 2017) (linear regression and sampling methods) and DECART (Krishnan et al., 2017) (an improved CART), together with recent deep learning-based ones, i.e., DeepPerf (Krishnan et al., 2017) (a single global rDNN) and Perf-AL (Krishnan et al., 2017) (an adversarial learning method). All approaches can be used for any type of system except for DECART, which works on binary systems only. Following the setting used by Ha and Zhang (Ha and Zhang, 2017), SPLConqueror3 and DECART use their own sampling methods while DaL, DeepPerf, and Perf-AL rely on random sampling. Since there are 8 systems and 5 sample sizes each, we obtain 40 cases to compare in total. Footnote 3: Since SPLConqueror supports multiple sampling methods, we use the one (or combination for the mixed system) that leads to the best MRE. We use the implementations published by their authors with the same parameter settings. For DaL, we set \(d=1\) or \(d=2\) depending on the systems, which tends to be the most appropriate value based on the result under a small portion of training data (see Section 5.3). We use the systems, training sizes, and statistical tests as described in Section 4. All experiments are repeated for 30 runs. #### 5.1.2. Results The results are illustrated in Table 3, from which we see that DaL remarkably achieves the best accuracy on 31 out of 40 cases. In particular, DaL considerably improves the accuracy, e.g., by up to 1.94\(\times\) better than the second-best one on _Size 1_ of VP8. The above still holds when looking into the results of the statistical test: DaL is the only approach that is ranked first for 26 out of the 31 cases. For the 9 cases where DaL does not achieve the best median MRE, it is equally ranked as the first for two of them.
These conclude that DaL is, in 33 cases, similar to (7 cases) or significantly better (26 cases) than the best state-of-the-art for each specific case (which could be a different approach). For cases with different training sample sizes, we see that DaL generally performs worse than the others when the size is too limited, i.e., _Size 1_ and _Size 2_ for the binary systems. This is expected, as when there are too few samples, each local model would have a limited chance to observe the right pattern after the splitting, hence blurring its effectiveness in handling sample sparsity. However, in the other cases (especially for mixed systems that have more data even for _Size 1_), DaL needs far fewer samples to achieve the same accuracy as the best state-of-the-art. For example, on Lrzip, DaL only needs 386 samples (_Size 3_) to achieve an error less than 15% while DeepPerf requires 907 samples (_Size 5_) to do so. Another observation is that the improvements of DaL are much more obvious on mixed systems than on binary systems. This is because: (1) the binary systems have fewer training samples as they have a smaller configuration space; therefore, the data learned by each local model is more restricted. (2) The issue of sample sparsity is more severe on mixed systems, as their configuration landscape is more complex and comes with finer granularity. As a result, we anticipate that the benefit of DaL can be amplified with more complex systems and/or more training data. To summarize, we can answer **RQ1** as: _RQ1:_ DaL _performs similarly to or significantly better than the best state-of-the-art approach in 33 out of 40 cases, with up to 1.94\(\times\) improvement. It also needs fewer samples to achieve the same accuracy, and the benefits can be amplified with complex systems/more training samples._ ### DaL under Different Local Models #### 5.2.1.
Method Since the idea of "divide-and-learn" can be applicable to a wide range of underlying local models of the divisions identified, we seek to understand how well DaL performs with different local models against their global model counterparts (i.e., using them directly to learn the entire training dataset).

Table 3. The median MRE, Med (IQR), and Scott-Knott rank \(r\) of each approach on the eight systems under the different training sizes.

To that end, we run experiments on a set of global models available in scikit-learn and widely used in software engineering tasks to make predictions directly (Krizhevsky et al., 2014; Krizhevsky et al., 2014), such as CART, Random Forest (RF), Linear Regression (LR), Support Vector Regression (SVR), Kernel Ridge Regression (KRR), and \(k\)-Nearest Neighbours (\(k\)NN). We used the same settings as those for **RQ1** and all models' hyperparameters are tuned in training. For the simplicity of exposition, we report on the ranks \(r\) produced by the Scott-Knott test. #### 5.2.2. Result From Table 4, we can obtain the following key observations: firstly, when examining each pair of the counterparts, i.e., DaL\({}_{X}\) and \(X\), DaL can indeed improve the accuracy of the local model via the concept of "divide-and-learn".
In particular, for simple but commonly ineffective models like LR (Krizhevsky et al., 2014), DaL can improve them to a considerable extent. Yet, we see that DaL does not often lead to a significantly different result when working with CART compared with using CART directly. This is as expected, since using different CART models for the divisions identified by a CART makes little difference to applying a single CART that predicts directly. Interestingly, we also see that our model performs better than traditional ensemble learning: DaL\({}_{CART}\), a CART-based "divide-and-learn" model, performs generally better than RF, which uses CART as the local model and combines them via Bagging. Secondly, the default of DaL, which uses the rDNN as the local model, still performs significantly better than the others. This aligns with the findings from existing work (Krizhevsky et al., 2014) that the rDNN handles the feature sparsity better. Indeed, deep learning models are known to be data-hungry, but our results surprisingly show that they can also work well with a limited number of configuration samples. The key behind this is the use of regularization, which stresses additional penalties on the more important weights/options. This has helped to relieve the need for a large amount of data during training while better fitting with the sparse features in configuration data. A similar conclusion has also been drawn from previous studies (Krizhevsky et al., 2014). Therefore, for **RQ2**, we say: **RQ2**: Thanks to the concept of "divide-and-learn", DaL is able to significantly improve a range of global models when using them as the underlying local model. ### Sensitivity to the Depth \(d\) #### 5.3.1. Method To understand **RQ3**, we examine different \(d\) values. Since the number of divisions (and hence the possible depth) is sample size-dependent, for each system, we use 80% of the full dataset for training and the remaining for testing.
This has allowed us to achieve up to \(d=4\) with 16 divisions as the maximum possible bound. For different \(d\) values, we report on the median MRE together with the results of the Scott-Knott test for 30 runs. We also report the smallest sample size from the divisions, averaged over 30 runs. #### 5.3.2. Results From Figure 4, we see that the correlation between the error of DaL and the \(d\) value is close to quadratic: DaL reaches its best MRE with \(d=1\) or \(d=2\). At the same time, the size of training data for a local model decreases as the number of divisions increases. Since \(d\) controls the trade-off between the ability to handle sample sparsity and ensuring sufficient data samples to train all local models, \(d=1\) or \(d=2\) tends to be the "sweet point" that reaches a balance for the systems studied. After the point of \(d=1\) or \(d=2\), the MRE worsens, as the local models' training size often drops dramatically. This is a clear sign that, from that point, the side-effect of having too few samples to train a local model has started to surpass the benefit that could have been brought by dealing with sample sparsity using more local models. When \(d=0\), which means only one division and hence DaL is reduced to DeepPerf that ignores sample sparsity, the resulting MRE is the worst on 4 out of 8 systems; the same applies to the case when \(d=4\). This suggests that neither a too small \(d\) (e.g., \(d=0\) with only one division) nor a too large \(d\) (e.g., \(d=4\) with up to 16 divisions, i.e., too many divisions) is ideal, which matches our theoretical analysis in Section 3.4. Therefore, we conclude that: **RQ3:**_The error of DaL has an (upward) quadratic correlation to \(d\).
In this work, \(d=1\) or \(d=2\) (\(2\) to \(4\) divisions) reaches a good balance between handling sample sparsity and providing sufficient training data for the local models._

### Overhead of Model Building

#### 5.4.1. Method To study **RQ4**, we examine the overall time required and the breakdown of overhead for DaL in various phases. As baselines, we also illustrate the model building time required by the approaches compared in **RQ1**. #### 5.4.2. Result From Table 5, DaL incurs an overall overhead from 6 to 56 minutes. Yet, from the breakdown, we note that the majority of the overhead comes from the _training phase_ that trains the local models. This is expected, as DaL uses rDNN by default. Specifically, the overhead of DaL compared with DeepPerf (3 to 60 minutes) is encouraging, as it tends to be faster in the worst-case scenario while achieving up to 1.94\(\times\) better accuracy. This is because (1) each local model has less data to train on and (2) the parallel training indeed speeds up the process. In contrast to Perf-AL (a few seconds to one minute), DaL appears to be rather slow as the former does not use hyperparameter tuning but fixed parameter values (Zhu et al., 2017). Yet, as we have shown for **RQ1**, DaL achieves up to a few orders of magnitude of accuracy improvement. Although SPLConqueror and DECART have an overhead of less than a minute, again their accuracy is far inferior. Further, SPLConqueror requires a good selection of the sampling method(s) (which can largely incur additional overhead) while DECART does not work on mixed systems. Finally, we have shown in **RQ3** that DaL's MRE is quadratically sensitive to \(d\) (upward), hence its value should be neither too small nor too large, e.g., \(d=1\) or \(d=2\) in this work.

Figure 4. The median MRE, its IQR (area), and the average smallest training size of the divisions achieved by DaL under different depths (\(d\) values)/numbers of divisions over all systems and 30 runs.
\(r=1\) means rank 1 in the Scott-Knott test on MRE. The best-ranked \(d\) is marked as \(*\).

\begin{table} \begin{tabular}{l l l} \hline \hline **Approach** & **Overhead (min)** & **Restriction and Prerequisite** \\ \hline DeepPerf & 3 to 60 & None \\ DECART & 0.07 to 0.5 & does not work on mixed systems \\ Perf-AL & 0.08 to 1 & None \\ SPLConqueror & 4\(\times 10^{-4}\) to 5\(\times 10^{-3}\) & needs to select sampling method(s) \\ DaL & 6 to 56 & needs to set the depth \(d\) \\ - DaL (_dividing_) & 9\(\times 10^{-4}\) to 0.18 & None \\ - DaL (_training_) & 4 to 52 & None \\ - DaL (_predicting_) & 1.3 to 5 & None \\ \hline \hline \end{tabular} \end{table} Table 5. The overhead ranges across all systems and sizes.

In summary, we say that: **RQ4:** DaL has model building time competitive with DeepPerf and higher overhead than the other state-of-the-art approaches, but this can be acceptable considering its improvement in accuracy.

## 6. Discussion

### Why does DaL Work?

To provide a more detailed understanding of why DaL performs better than the state-of-the-art, in Figure 5, we showcase the most common run of the performance predicted by DaL and DeepPerf against the actual performance. Clearly, we note that the sample sparsity is rather obvious: there are two distant divisions. DeepPerf, as an approach that relies on a single and global rDNN, has been severely affected by such highly sparse samples: we see that the model tries to cover points in both divisions, but fails to do so as it tends to overfit the points in one or the other. This is why, in Figure 5b, its prediction on some configurations that should lead to low runtime tends to have much higher values (e.g., when rtQuality=1 and threads=1) while some of those that should have high runtime may be predicted with much lower values (e.g., when rtQuality=8 and threads=1).
DaL, in contrast, handles such sample sparsity well as it contains different local models that particularly cater to each division identified, hence leading to high accuracy (Figure 5a).

### Strengths and Limitations

The first strength of DaL is that the concept of "divide-and-learn", paired with the rDNN, can handle both sample sparsity and feature sparsity well. As shown in Section 5.1 for **RQ1**, this has led to better accuracy and better utilization of the sample data than the state-of-the-art approaches. The second strength is that, as shown in Section 5.2 for **RQ2**, DaL can improve different local models compared with when they are used alone as a global model. While we set rDNN as the default for the best accuracy, one can also easily replace it with others such as LR for faster training and better interpretability. This enables great flexibility with DaL to make trade-offs on different concerns of the practical scenarios. A limitation of DaL is that it takes a longer time to build the model than some state-of-the-art approaches. On a machine with a 2GHz CPU and 16GB RAM, DaL needs between 6 and 56 minutes for systems with up to 33 options and more than 2,000 samples.

### Why \(d\in\{1,2\}\) is Highly Effective?

We have shown that the setting of \(d\) in DaL should be neither too small nor too large; the key intention behind \(d\) is to reach a good balance between handling the sample sparsity and providing sufficient data for the local models to generalize. This is especially true when the CART might produce divisions with imbalanced sample sizes, e.g., we observed cases where there is a division with around 500 samples while another has merely fewer than 10. Our experimental results show that such "sweet points" tend to be \(d=1\) or \(d=2\) for the cases studied in this work. However, the notion of "too small" and "too large" should be interpreted cautiously depending on the systems and data size.
That is, although in this study, setting \(d=1\) or \(d=2\) appears to be appropriate, they might become "too small" settings when the data size increases considerably and/or the system naturally exhibits well-balanced divisions of configuration samples in the landscape. Yet, the pattern of quadratic correlation between \(d\) and the error of DaL should remain unchanged.

### Using DaL in Practice

Like many other data-driven approaches, using DaL is straightforward and free of assumptions about the software systems, data, and environments. We would recommend setting \(d=1\) or \(d=2\) by default, especially when the data sample size is similar to those we studied in this work. Of course, it is always possible to fine-tune the \(d\) value by training DaL with alternative settings under the configuration samples available. Given the quadratic correlation between \(d\) and the error, it is possible to design a simple heuristic for this, e.g., we compare the accuracy of DaL trained with \(d=i\) and \(d=i+1\) starting from \(d=1\), and finally select the maximum \(d\) value \(k\) such that DaL with \(k+1\) is less accurate than DaL with \(k\).

### Threats to Validity

**Internal Threats.** Internal threats to validity are related to the parameters used. In this work, we use the same settings as used in state-of-the-art studies (Shi et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). We have also shown the sensitivity of DaL to \(d\) and revealed that there exists a generally best setting. We repeat the experiments for 30 runs and use the Scott-Knott test for multiple comparisons.

**Construct Threats.** Threats to construct validity may lie in the metric used. In this study, MRE is chosen for two reasons: (1) it is a relative metric and hence is insensitive to the scale of the performance; (2) MRE has been recommended for performance prediction by many latest studies (Shi et al., 2017; Wang et al., 2019; Wang et al., 2019).
**External Threats.** Threats to external validity could arise from the subject systems and training samples used. To mitigate these, we evaluate eight commonly used subject systems selected from the latest studies. We have also examined different training sample sizes as determined by SPLConqueror(Wang et al., 2019)--a typical method. Yet, we agree that using more subject systems and data sizes may be fruitful, especially for examining the sensitivity of \(d\), which may lead to a different conclusion when there is a much larger set of training configuration samples than we consider in this study. Figure 5. Example run of the actual and predicted performance by DaL and DeepPerf for VP8. ## 7. Related Work We now discuss the related work in light of DaL. **Analytical model.** Predicting software performance can be done by analyzing the code structure and architecture of the systems (Dalal, 2016; Dal, 2016). For example, Marco and Inverardi (Marco and Inverardi, 2017) apply a queuing network to model the latency of requests processed by the software. Velez _et al._(Velez et al., 2016) use local measurements and dynamic taint analysis to build a model that can predict performance for part of the code. However, analytical models require full understanding of and access to the software's internal states, which may not always be possible/feasible. DaL is not limited to those scenarios as it is a data-driven approach. **Statistical learning-based model.** Data-driven learning has relied on various statistical models, such as linear regressions (Zhou et al., 2017; Li et al., 2017; Li et al., 2018; Li et al., 2019), tree-like models (Zhou et al., 2017; Li et al., 2019; Li et al., 2019), and Fourier-learning models (Zhou et al., 2017; Li et al., 2019), _etc._ Among others, SPLConqueror(Velez et al., 2016) utilizes linear regression combined with different sampling methods and a step-wise feature selection to capture the interactions between configuration options.
DECART(Dal, 2016) is an improved CART with an efficient sampling method (Li et al., 2019). However, recent work reveals that those approaches do not work well with small datasets (Li et al., 2019), which are rather common for configurable software systems due to their expensive measurements. This is a consequence of not fully handling the sparsity in configuration data. Further, they come with various restrictions, e.g., DECART does not work on mixed systems while SPLConqueror needs an extensive selection of the right sampling method(s). In contrast, we showed that DaL produces significantly more accurate results while not being subject to those restrictions. **Ensemble model.** Models can be combined in a shared manner to predict software performance. For example, Chen and Bahsoon (Chen and Bahsoon, 2016) propose an ensemble approach, paired with feature selection for mitigating feature sparsity, to model software performance. Other classic ensemble learning models such as Bagging (Chen and Bahsoon, 2016) and Boosting (Chen and Bahsoon, 2016) (e.g., RF) can equally be adopted. Indeed, at a glance, our DaL does seem similar to ensemble models as they all maintain a pool of local models. However, the key difference is that the classic ensemble models will inevitably share information between the local models at one or more of the following levels: * At the training level, e.g., the local models in Boosting learn the same samples but with a different focus; the Bucket of Models (i.e., what Chen and Bahsoon (Chen and Bahsoon, 2016) did) builds local models on the same data and uses the best upon prediction. * At the model prediction level, e.g., Bagging aggregates the results of local models upon prediction. DaL, in contrast, has no information sharing throughout the learning, as the samples are split and so are the predictions of the local models.
This has enabled it to better isolate the samples and cope with their inherent sparsity, e.g., recall from **RQ2**, the overall accuracy of DaL\({}_{CART}\) is better than RF (they both use CART as the local models but learn with and without sharing information). **Deep learning-based model.** A variety of studies apply neural networks with multiple layers and/or ensemble learning to predict software performance (Dal, 2016; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). DeepPerf(Li et al., 2019) is a state-of-the-art DNN model with \(L_{1}\) regularization to mitigate feature sparsity for any configurable system, and it can be more accurate than many other existing approaches. The most recently proposed Perf-AL(Li et al., 2019) relies on adversarial learning, which consists of a generative network to predict the performance and a discriminator network to distinguish the predictions from the actual labels. Nevertheless, existing deep learning approaches capture only the feature sparsity while ignoring the sample sparsity, causing severe risks of overfitting even with regularization in place. Compared with those, we have shown that, by capturing sample sparsity, DaL is able to improve the accuracy considerably with better efficiency and acceptable overhead. **Hybrid model.** The analytical models can be combined with data-driven ones to form a hybrid model (Dal, 2016; Li et al., 2019; Li et al., 2019). Among others, Didona _et al._(Dal, 2016) use linear regression and \(k\)NN to learn certain components of a queuing network. Conversely, Weber _et al._(Weber et al., 2016) propose to learn the performance of systems based on parsed source code, down to the function level. We see DaL as being complementary to those hybrid models due to its flexibility in selecting the local model: when needed, the local models can be replaced with hybrid ones, making DaL itself a hybrid variant.
In case the internal structure of the system is unknown, DaL can also work in its default form as a purely data-driven approach. ## 8. Conclusion This paper proposes DaL, an approach that effectively handles the sparsity issues in configurable software performance prediction. By formulating a classification problem with pseudo labels on top of the original regression problem, DaL extracts the branches/leaves from a CART, which divides the samples of configurations into distant divisions, and trains a dedicated local rDNN for each division thereafter. Prediction of a new configuration is then made by the rDNN of the division inferred by a Random Forest classifier. As such, the division of samples and the trained local models handle the sample sparsity while the rDNN deals with the feature sparsity. We evaluate DaL on eight real-world systems that are of diverse domains and scales, together with five sets of training data. The results show that DaL is: * **effective** as it is competitive with the best state-of-the-art approach on 33 out of 40 cases, 26 of which are significantly better with up to 1.94\(\times\) MRE improvement; * **efficient** since it often requires fewer samples to reach the same/better accuracy compared with the state-of-the-art; * **flexible** given that it considerably improves various global models when they are used as the local model therein; * **robust** because, given the quadratic correlation, a middle \(d\) value (between 0 and the bound set by CART) is robust and leads to the best accuracy across the cases, e.g., \(d=1\) or \(d=2\) under the sample sizes in this work. Mitigating the issues caused by sparsity is only one step towards better performance prediction, hence the possible future work based on DaL is vast, including multi-task prediction of performance under different environments and merging diverse local models (e.g., a mix of rDNN and LR) as part of the "divide-and-learn" concept.
Consolidating DaL with an adaptive \(d\) is also on our agenda. ## 9. Data Availability Data, code, and supplementary figures of this work can be found at our repository: [https://github.com/ideas-labo/DaL](https://github.com/ideas-labo/DaL).
2302.05877
Physical charges versus conformal invariance in unimodular gravity
Unimodularity can be implemented in different ways. In this paper we consider only the formulation of Unimodular Gravity in which the unimodular metric is obtained out of an unrestricted one as $\gamma_{\mu\nu}=|g|^{-\frac{1}{n}} g_{\mu\nu}$. This procedure induces an extra Weyl symmetry. Some physical implications of this symmetry on the conserved currents are discussed. Finally the results are illustrated for the Painlevé-Gullstrand extension of Schwarzschild spacetime.
Enrique Alvarez, Jesus Anero, Irene Sanchez-Ruiz
2023-02-12T09:31:12Z
http://arxiv.org/abs/2302.05877v5
# Physical charges versus conformal invariance in Unimodular Gravity. ###### Abstract There is a formulation of unimodular gravity (UG) in which the unimodular metric is obtained out of an unrestricted one as \(\gamma_{\mu\nu}=|g|^{-\frac{1}{n}}g_{\mu\nu}\). This unrestricted metric is only defined up to a Weyl rescaling. Physical charges like ADM energy or horizon entropy, however, are not Weyl invariant in general. In this paper a Weyl invariant definition of some physical charges is given, and the results are illustrated for the Painleve-Gullstrand extension of Schwarzschild spacetime. ###### Contents * 1 Introduction * 2 Conserved charges in UG. * 2.1 Hilbert's UG Lagrangian * 2.2 Einstein's Lagrangian * 3 Schwarzschild spacetime in UG. * 4 Conclusions ## 1 Introduction Unimodular Gravity (UG) is a restriction of General Relativity (GR) in which only unimodular metrics are considered, even in the functional integral when this is applicable. A recent review can be found in [2]. The first question is why unimodular gravity? The answer is that in this theory a constant vacuum density does not weigh, so that it does not induce a cosmological constant (cf. the first reference in [4]). Although that fact is not a full solution of the cosmological constant problem, it does solve half of it, namely why the vacuum energy usually believed to be generated in several phase transitions in the history of the universe does not induce a huge cosmological constant. It remains to explain the observed (small and positive) value of the cosmological constant. UG has nothing new to say about this. There seems not to be any free lunch in the universe, and the price to pay in this case is that the restriction to unimodular metrics implies that the theory is not invariant under the full diffeomorphism group (Diff), but only under volume preserving diffeomorphisms (VPD).
This subgroup of the total diffeomorphism group, which is the invariance group of General Relativity (GR), consists of those Diffs (connected with the identity) that have unit Jacobian. \[x\to x^{\prime};\quad\det\frac{\partial x^{\prime}}{\partial x}=1 \tag{1}\] The tangent space at the identity of the _identity component_ of VPD is generated by _transverse vectors_1 \(\xi_{T}^{\mu}\in TDiff_{0}\subset TDiff\). Footnote 1: Diffeomorphism groups are rather tricky. For example [13] there are diffeomorphisms arbitrarily close to the identity which cannot be obtained through the exponential map of the algebra. We shall in fact only consider in this work those diffeomorphisms that can be generated via the exponential map. We will always refer to this as the _algebra_ of the group and do not worry about topological considerations. \[\partial_{\mu}\xi_{T}^{\mu}=0 \tag{2}\] It is easy to check that if \(\xi_{1}\) and \(\xi_{2}\) are transverse, so is \([\xi_{1},\xi_{2}]\), so that transverse vectors really generate a subalgebra in any manifold \((M,g)\). In general [10] VPD can be defined as those diffeomorphisms that leave invariant some measure. We are in fact interested in those that preserve the Riemannian measure \[d(vol)\equiv\sqrt{|g|}\,d^{n}x \tag{3}\] Henceforth we will denote \[g\equiv|\det\,g_{\alpha\beta}| \tag{4}\] Actually, the corresponding TDiff generators are transverse in the covariant sense \[\nabla_{\lambda}\xi^{\lambda}=0 \tag{5}\] This constraint is equivalent to (2) for unimodular metrics (owing to the fact that \(\Gamma_{\lambda\sigma}^{\lambda}=\frac{1}{2}\partial_{\sigma}\log\,g\)). The fact that the gauge parameters are not independent is the source of some technical complications. One way out is to write \[\xi^{\mu}=\nabla_{\lambda}\Omega^{\lambda\mu}, \tag{6}\] where the tensor \[\Omega_{\mu\nu}=-\Omega_{\nu\mu} \tag{7}\] is antisymmetric.
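As a consistency check (a short derivation added here, not spelled out in the text), the parametrization (6) automatically produces covariantly transverse vectors: for antisymmetric \(\Omega^{\lambda\mu}\) only the antisymmetric part of the second derivatives survives, so by the Ricci identity

```latex
\nabla_{\mu}\xi^{\mu}
 =\nabla_{\mu}\nabla_{\lambda}\Omega^{\lambda\mu}
 =\tfrac{1}{2}\,[\nabla_{\mu},\nabla_{\lambda}]\,\Omega^{\lambda\mu}
 =\tfrac{1}{2}\left(-R_{\sigma\mu}\Omega^{\sigma\mu}
   +R_{\sigma\lambda}\Omega^{\lambda\sigma}\right)=0
```

since both terms contract the symmetric Ricci tensor with the antisymmetric \(\Omega\), so that (5) holds identically.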
Let us define the equations of motion (EM) as \[H_{\mu\nu}\equiv\frac{\delta S}{\delta g^{\mu\nu}} \tag{8}\] VPD invariance [3] implies that \[\delta S=\int d(vol)H_{\mu\nu}\left(\nabla^{\mu}\xi^{\nu}+\nabla^{\nu}\xi^{\mu}\right)=\int d(vol)H_{\mu\nu}\left(\nabla^{\mu}\nabla_{\alpha}\Omega^{\alpha\nu}+\nabla^{\nu}\nabla_{\alpha}\Omega^{\alpha\mu}\right) \tag{9}\] Assuming that \(\Omega_{\alpha\beta}\) vanishes at the boundary, \(\left.\Omega_{\alpha\beta}\right|_{\partial M}=0\), we can integrate by parts [5]2 Footnote 2: When there is ambiguity, we follow the rule of underlining the indices which undergo antisymmetrization. \[\delta S=\int d(vol)\Big{[}\nabla_{\alpha}\nabla^{\lambda}H_{\lambda\beta}-\nabla_{\beta}\nabla^{\lambda}H_{\lambda\alpha}\Big{]}\Omega^{\alpha\beta} \tag{10}\] therefore \[\nabla_{[\underline{\alpha}}\nabla^{\lambda}H_{\lambda\underline{\beta}]}=0\quad\Longrightarrow\quad\exists S\,s.t.\,\nabla^{\lambda}H_{\lambda\beta}=\nabla_{\beta}S \tag{11}\] as long as the de Rham cohomology of the manifold is trivial, \(H^{3}_{dR}(M)=0\); otherwise harmonic forms have to be included on the right-hand side of the preceding equation. Of course, in the Diff invariant case, \(S=\) constant and Bianchi identities are obtained. It is interesting to notice that the determinant of the metric, \(g\),3 is actually inert under VPD because under a general diffeomorphism in \(Diff_{0}\) Footnote 3: We shall be somewhat sloppy and write \(\nabla_{\mu}g\equiv\partial_{\mu}g\) in spite of the fact that \(g\) is a scalar density of weight \(w=2\). Were we to define a covariant derivative like Synge and Schild do [15], then \[\nabla_{\lambda}T=g^{\frac{w}{2}}\nabla_{\lambda}\left(g^{-\frac{w}{2}}T\right)\therefore\quad\nabla_{\lambda}g=0 \tag{12}\] We insist that we are _not_ using this definition. \[\delta g=g\nabla_{\lambda}\xi^{\lambda} \tag{13}\] which vanishes for \(\xi\in TDiff_{0}\).
When \(TDiff_{0}\) is defined by the non-covariant condition \[\partial_{\mu}\xi^{\mu}=0 \tag{14}\] then \(g\) is a true scalar \[\delta g=\xi^{\lambda}\partial_{\lambda}g \tag{15}\] The reason why we brought the determinant to the fore is that instead of working directly with the unimodular metric \(\gamma_{\alpha\beta}\), we choose to work with the auxiliary one, \(g_{\alpha\beta}\), such that \[\gamma_{\alpha\beta}=|g|^{-\frac{1}{n}}\,g_{\alpha\beta} \tag{16}\] Recall that this induces a Weyl invariance \[g_{\alpha\beta}\longrightarrow\Omega^{2}(x)g_{\alpha\beta} \tag{17}\] In this paper we shall use the tags _conformal invariance_ and _Weyl invariance_ interchangeably. The total symmetry group of the theory is now \[\text{WTDiff}=\text{Weyl}\ltimes\text{TDiff}, \tag{18}\] Hilbert's lagrangian in UG variables reads \[S_{\text{\tiny UG}}[g]=-\frac{1}{2\kappa^{2}}\int d^{n}xg^{\frac{1}{n}}\Big{[}R-\frac{(n-1)(5n-2)}{4n^{2}}g^{\mu\nu}\frac{\nabla_{\mu}g\nabla_{\nu}g}{g^{2}}+\frac{(n-1)}{n}\frac{\Box g}{g}\Big{]} \tag{19}\] The corresponding EM reads \[R_{\mu\nu}-\frac{1}{n}R\,g_{\mu\nu}+\Theta_{\mu\nu}=T_{\mu\nu}-\frac{1}{n}Tg_{\mu\nu} \tag{20}\] where \[\Theta_{\mu\nu}=\frac{(2-n)(2n-1)}{4n^{2}}\left(\frac{\nabla_{\mu}g\nabla_{\nu}g}{g^{2}}-\frac{1}{n}\frac{(\nabla g)^{2}}{g^{2}}g_{\mu\nu}\right)+\frac{n-2}{2n}\left(\frac{\nabla_{\mu}\nabla_{\nu}g}{g}-\frac{1}{n}\frac{\nabla^{2}g}{g}\,g_{\mu\nu}\right). \tag{21}\] The way to solve those equations in vacuum is to start with a fixed Weyl gauge; that is, first solve them for some unimodular metric \(\gamma_{\alpha\beta}\) \[\hat{R}_{\mu\nu}-\frac{1}{4}\gamma_{\mu\nu}\hat{R}=0 \tag{22}\] and then Weyl transform to the full Weyl orbit \[\gamma_{\alpha\beta}\longrightarrow g_{\alpha\beta}\equiv g(x)^{\frac{1}{4}}\gamma_{\alpha\beta} \tag{23}\] with an arbitrary function \(g(x)\).
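That the Weyl factor can indeed be arbitrary follows from a one-line check (added here for completeness): under \(g_{\alpha\beta}\to\Omega^{2}g_{\alpha\beta}\) the determinant scales as \(g\to\Omega^{2n}g\), so

```latex
\gamma_{\alpha\beta}\;\longrightarrow\;
\left(\Omega^{2n}g\right)^{-\frac{1}{n}}\Omega^{2}g_{\alpha\beta}
=\Omega^{-2}\,g^{-\frac{1}{n}}\,\Omega^{2}g_{\alpha\beta}
=\gamma_{\alpha\beta}
```

i.e., the unimodular metric (16), and with it the action written in terms of it, is insensitive to the choice of Weyl representative in the orbit.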
The thing is that in the Weyl orbit of spacetimes that are gauge equivalent to a physical one, which by assumption is the unimodular one \(\gamma_{\alpha\beta}\), most physical characteristics are not constant (because they are not Weyl gauge invariant). Figure 1: Weyl gauge orbits and gauge fixing for metrics \(|g|=1\). TDiff orbits are not parallel to Weyl orbits. Those include the ADM energy and the associated entropy of spacetimes with a horizon, such as the Painleve-Gullstrand extension of Schwarzschild's. In fact, not even asymptotic flatness or the presence of a horizon is Weyl invariant. In this paper we provide a Weyl invariant definition of conserved physical charges, which reduce to the standard ones in general relativity (GR) in Weyl's unimodular gauge. The plan of the paper is as follows. After reviewing how to get conserved charges in diffeomorphism invariant theories, we proceed to explore in detail two lagrangians for UG, Hilbert's and Einstein's. We eventually find Weyl invariant charges, which reduce to GR ones in the Weyl gauge \(g=1\). Related work on Noether currents in general Weyl invariant theories has recently appeared [1], although the emphasis there is slightly different from ours. ## 2 Conserved charges in UG. The Weyl invariance of UG is, in some sense, _tautological_ (this is what Duff [8] calls _pseudo-Weyl invariance_), in the sense that there is a field redefinition where the symmetry disappears. This translates into the fact that under conformal transformations \[\delta L_{UG}=0 \tag{24}\] (without any total derivative). The invariance of the action under a diffeomorphism generated by a vector \(\xi\) \[\delta S=0=\int\sqrt{|g|}d^{n}x\,\pounds(\xi)L\equiv\int d(vol)\,\pounds(\xi)L \tag{25}\] implies that the variation of the Lagrangian reads \[\delta\,{\cal L}\equiv\delta\left(\sqrt{|g|}L\right)=\partial_{\mu}({\cal L}\,\xi^{\mu}).
\tag{26}\] On the other hand under a general variation of the metric (without changing the domain of integration) \[\delta S=\int d(vol)\biggl{\{}\frac{\delta S}{\delta g_{\alpha\beta}}\delta g_{\alpha\beta}+\nabla_{\lambda}N^{\lambda}\biggr{\}} \tag{27}\] on shell, that is, on the submanifold that obeys the EM \[\delta\,{\cal L}=\partial_{\mu}(\sqrt{g}N^{\mu}) \tag{28}\] (remember that we included the \(\sqrt{|g|}\) in \({\cal L}\)). The fact that these two variations have to be consistent independently of the integration domain means that the Noether current \[\partial_{\mu}j^{\mu}_{N}=\partial_{\mu}({\cal L}\xi^{\mu}-\sqrt{g}N^{\mu})=0 \tag{29}\] has to be conserved. Under a linear Weyl transformation \(\Omega\equiv 1+\omega\) \[\delta g_{\alpha\beta}=2\omega g_{\alpha\beta} \tag{30}\] The current associated to Weyl invariance is then simply \[j^{\mu}_{W}=N^{\mu}\left[\delta g_{\alpha\beta}=2\omega g_{\alpha\beta}\right] \tag{31}\] Under a diffeomorphism connected with the identity generated by a vector \(\xi\in Diff_{0}\), \[h_{\alpha\beta}=\nabla_{\alpha}\xi_{\beta}+\nabla_{\beta}\xi_{\alpha} \tag{32}\] the current splits into \[j^{\mu}_{N}=\sqrt{g}\left(T^{\mu}_{\rho}\xi^{\rho}+U^{\mu\nu}_{\rho}\nabla_{\nu}\xi^{\rho}+V^{\mu(\lambda\tau)}_{\rho}\nabla_{\lambda}\nabla_{\tau}\xi^{\rho}\right) \tag{33}\] However, in VPD we have to write \(\xi^{\lambda}=\nabla_{\tau}\Omega^{\lambda\tau}\) with \(\Omega^{(\lambda\tau)}=0\). Assuming \(\left.\Omega_{\alpha\beta}\right|_{\partial M}=0\), we can integrate by parts and the ensuing _conservation law_ is \[\nabla_{[\underline{\alpha}}\nabla_{\mu}T^{\mu}_{\underline{\rho}]}+\nabla_{[\underline{\alpha}}\nabla_{\nu}\nabla_{\mu}U^{\mu\nu}_{\underline{\rho}]}+\nabla_{[\underline{\alpha}}\nabla_{\mu}\nabla_{\nu}\nabla_{\lambda}V^{\lambda(\mu\nu)}_{\underline{\rho}]}=0 \tag{34}\] Let us be more specific.
### Hilbert's UG Lagrangian The Unimodular Hilbert action is Footnote 4: We do not want here to take issue with the priority of Hilbert over Einstein in coming up with this Lagrangian. In fact this is usually (also by ourselves) called the Einstein-Hilbert Lagrangian; we have chosen the names in order to be able to tell apart two different candidates for the GR Lagrangian. \[S_{\mbox{\tiny UG}}[\gamma]\equiv-\frac{1}{2\kappa^{2}}\int d^{n}xR[\gamma] \tag{35}\] where the metric is unimodular \(\gamma=-\mbox{det}\gamma_{\mu\nu}=1\). As has already been remarked, it is possible to formulate the theory in such a way that it has an added Weyl invariance by writing \[\gamma_{\mu\nu}=g^{-\frac{1}{n}}g_{\mu\nu} \tag{36}\] The action for this new unrestricted metric reads \[S_{\mbox{\tiny UG}}[g]=-\frac{1}{2\kappa^{2}}\int d^{n}xg^{\frac{1}{n}}\Big{[}R-\frac{(n-1)(5n-2)}{4n^{2}}g^{\mu\nu}\frac{\nabla_{\mu}g\nabla_{\nu}g}{g^{2}}+\frac{(n-1)}{n}\frac{\Box g}{g}\Big{]} \tag{37}\] Under a general variation \(\delta g_{\mu\nu}\equiv h_{\mu\nu}\) (keeping total derivatives) \[\delta S_{H}=g^{1/n}\Bigg{\{}\frac{1}{n}Rg_{\mu\nu}-R_{\mu\nu}+\frac{(n-2)(2n-1)}{4n^{2}}\left(\frac{\nabla_{\mu}g\nabla_{\nu}g}{g^{2}}-\frac{1}{n}\frac{\nabla_{\alpha}g\nabla^{\alpha}g}{g^{2}}g_{\mu\nu}\right)-\frac{n-2}{2n}\left(\frac{\nabla_{\mu}\nabla_{\nu}g}{g}-\frac{1}{n}\frac{\Box g}{g}g_{\mu\nu}\right)\Bigg{\}}\,h^{\mu\nu}+\] \[+\sqrt{g}\Bigg{\{}\nabla_{\mu}\Big{[}g^{\frac{2-n}{2n}}\nabla_{\nu}h^{\mu\nu}\Big{]}-\frac{1}{2}\nabla_{\nu}\Big{[}h^{\mu\nu}g^{\frac{2-n}{2n}}\frac{\nabla_{\mu}g}{g}\Big{]}-\frac{1}{n}\left(\nabla_{\lambda}\Big{[}g^{\frac{2-n}{2n}}\nabla^{\lambda}h\Big{]}-\frac{1}{2}\nabla_{\lambda}\Big{[}hg^{\frac{2-n}{2n}}\frac{\nabla^{\lambda}g}{g}\Big{]}\right)\Bigg{\}} \tag{38}\] The corresponding EM is Weyl invariant and, up to total derivatives, traceless.
This means that \[N^{\mu} =g^{\frac{2-n}{2n}}\bigg{[}\nabla_{\nu}h^{\mu\nu}-\frac{1}{2}h^{\mu\nu}\frac{\nabla_{\nu}g}{g}-\frac{1}{n}\left(\nabla^{\mu}h-\frac{1}{2}h\frac{\nabla^{\mu}g}{g}\right)\bigg{]} \tag{39}\] and the total conserved current reads \[j_{N}^{\mu}=\sqrt{g}\left(T_{\rho}^{\mu}\nabla_{\alpha}\Omega^{\alpha\rho}+U_{\rho}^{\mu\nu}\nabla_{\nu}\nabla_{\alpha}\Omega^{\alpha\rho}+V_{\rho}^{\mu(\lambda\tau)}\nabla_{\lambda}\nabla_{\tau}\nabla_{\alpha}\Omega^{\alpha\rho}\right) \tag{40}\] where \[T_{\rho}^{\mu} =\delta_{\rho}^{\mu}g^{\frac{2-n}{2n}}\bigg{[}R-\frac{(n-1)(5n-2)}{4n^{2}}g^{\alpha\beta}\frac{\nabla_{\alpha}g\nabla_{\beta}g}{g^{2}}+\frac{(n-1)}{n}\frac{\Box g}{g}\bigg{]}\] \[U_{\rho}^{\mu\nu} =\frac{1}{2}g^{\frac{2-3n}{2n}}\bigg{[}g^{\mu\nu}\nabla_{\rho}g+\delta_{\rho}^{\mu}\nabla^{\nu}g-\frac{1}{n}\delta_{\rho}^{\nu}\nabla^{\mu}g\bigg{]}\] \[V_{\rho}^{\mu(\lambda\tau)} =g^{\frac{2-n}{2n}}\bigg{[}-g^{\lambda\tau}\delta_{\rho}^{\mu}-\frac{n-2}{2n}\left(\delta_{\rho}^{\lambda}g^{\mu\tau}+\delta_{\rho}^{\tau}g^{\mu\lambda}\right)\bigg{]} \tag{41}\] In the unimodular Weyl gauge they reduce to \[T_{\rho}^{\mu} =\hat{R}[\gamma]\delta_{\rho}^{\mu}\] \[U_{\rho}^{\mu\nu} =0\] \[V_{\rho}^{\mu(\lambda\tau)} =-\gamma^{\lambda\tau}\delta_{\rho}^{\mu}-\frac{n-2}{2n}\left(\delta_{\rho}^{\lambda}\gamma^{\mu\tau}+\delta_{\rho}^{\tau}\gamma^{\mu\lambda}\right) \tag{42}\] Assuming that \(\left.\Omega_{\alpha\beta}\right|_{\partial M}=0\), the UG _conservation law_ reads \[\nabla_{[\underline{\alpha}}\nabla_{\mu}T_{\underline{\rho}]}^{\mu}+\nabla_{[\underline{\alpha}}\nabla_{\nu}\nabla_{\mu}U_{\underline{\rho}]}^{\mu\nu}+\nabla_{[\underline{\alpha}}\nabla_{\mu}\nabla_{\nu}\nabla_{\lambda}V_{\underline{\rho}]}^{\lambda(\mu\nu)}=0 \tag{43}\] Under conformal/Weyl transformations \[j_{W}^{\mu} =g^{\frac{2-n}{2n}}\bigg{[}2\nabla_{\nu}(g^{\mu\nu}\omega)-\omega g^{\mu\nu}\frac{\nabla_{\mu}g}{g}-2\nabla^{\mu}\omega+\omega\frac{\nabla^{\mu}
g}{g}\bigg{]}=0 \tag{44}\] The Weyl current vanishes identically. This is of course due to the tautological character of conformal symmetry in our problem. ### Einstein's Lagrangian The Einstein UG action [14] reads \[S_{\mbox{\tiny E}}\equiv-\frac{1}{2\kappa^{2}}\int d^{n}x\gamma^{\mu\nu}\left(\hat{\Gamma}_{\lambda\rho}^{\rho}\hat{\Gamma}_{\mu\nu}^{\lambda}-\hat{\Gamma}_{\lambda\nu}^{\rho}\hat{\Gamma}_{\mu\rho}^{\lambda}\right) \tag{45}\] where the metric is unimodular \(\gamma=-\mbox{det}\,\gamma_{\mu\nu}=1\) and \(\hat{\Gamma}\) are the Christoffel symbols of said metric. Hilbert's and Einstein's Lagrangians differ by a total derivative [5], and thus yield the same equations of motion. The importance of Einstein's lagrangian stems from the fact that it only involves first derivatives of the metric. This gives a new understanding of why Einstein's equations are second order. \[{\cal L}_{H}={\cal L}_{E}+\partial_{\mu}V^{\mu} \tag{46}\] where \[V^{\mu}=\sqrt{\gamma}\Big{[}\gamma^{\alpha\mu}\hat{\Gamma}^{\lambda}_{\lambda\alpha}-\gamma^{\alpha\beta}\hat{\Gamma}^{\mu}_{\alpha\beta}\Big{]} \tag{47}\] The fact that the metric is unimodular implies \[\gamma^{\alpha\beta}\hat{\Gamma}^{\mu}_{\alpha\beta}=-\frac{1}{\sqrt{\gamma}}\partial_{\rho}\left(\sqrt{\gamma}\gamma^{\rho\mu}\right)=-\partial_{\rho}\left(\gamma^{\rho\mu}\right)\] \[\hat{\Gamma}^{\lambda}_{\lambda\mu}=\frac{1}{\sqrt{\gamma}}\partial_{\mu}\sqrt{\gamma}=0 \tag{48}\] Incidentally, the canonical energy-momentum tensor associated to \({\cal L}_{E}\) is precisely _Einstein's energy-momentum pseudotensor_.
\[t^{E}_{\mu\nu}\equiv\sum_{i}\frac{\partial L}{\partial(\partial^{\mu}\phi^{i} )}\partial_{\nu}\phi_{i}-Lg_{\mu\nu}=\frac{\partial{\cal L}}{\partial(\partial _{\mu}g_{\alpha\beta})}\partial^{\nu}g_{\alpha\beta}-g^{\mu\nu}{\cal L} \tag{49}\] and is conserved in the sense that \[\partial_{\mu}t^{\mu\nu}=0 \tag{50}\] To be specific, \[t^{\mu\nu}_{E} \equiv\frac{\partial{\cal L}}{\partial(\partial_{\mu}g_{\alpha \beta})}\partial^{\nu}g_{\alpha\beta}-g^{\mu\nu}{\cal L}= \tag{51}\] \[=g^{1/n}\Big{[}\Gamma^{\lambda}_{\alpha\beta}\Gamma^{\tau}_{ \tau\lambda}g^{\alpha\mu}g^{\beta\nu}-\Gamma^{\lambda}_{\lambda\alpha}\Gamma^ {\tau}_{\tau\beta}g^{\alpha\mu}g^{\beta\nu}+\frac{1}{2}\Gamma^{\lambda}_{ \lambda\beta}\Gamma^{\mu}_{\rho\sigma}g^{\rho\sigma}g^{\beta\nu}+\frac{1}{2} \Gamma^{\lambda}_{\lambda\beta}\Gamma^{\nu}_{\rho\sigma}g^{\rho\sigma}g^{ \beta\mu}-\] \[-\Gamma^{\rho}_{\beta\lambda}\Gamma^{\mu}_{\rho\tau}g^{\lambda \tau}g^{\beta\nu}-\Gamma^{\rho}_{\beta\lambda}\Gamma^{\nu}_{\rho\tau}g^{ \lambda\tau}g^{\beta\mu}+\frac{1}{2}\Gamma^{\rho}_{\rho\lambda}\Gamma^{\mu}_{ \beta\tau}g^{\lambda\tau}g^{\beta\nu}+\frac{1}{2}\Gamma^{\rho}_{\rho\lambda} \Gamma^{\nu}_{\beta\tau}g^{\lambda\tau}g^{\beta\mu}-\] \[-\frac{(n-2)}{2n}\Big{(}\Gamma^{\lambda}_{\alpha\beta}g^{\alpha \mu}g^{\beta\nu}\frac{\nabla_{\lambda}g}{g}+\frac{1}{2}\Gamma^{\mu}_{\beta \lambda}g^{\beta\nu}\frac{\nabla^{\lambda}g}{g}+\frac{1}{2}\Gamma^{\nu}_{ \beta\lambda}g^{\beta\mu}\frac{\nabla^{\lambda}g}{g}-\] \[-\Gamma^{\lambda}_{\beta\lambda}g^{\beta\nu}\frac{\nabla^{\mu}g}{ g}-\Gamma^{\lambda}_{\beta\lambda}g^{\beta\mu}\frac{\nabla^{\nu}g}{g}\Big{)}\Big{]}-g^{\mu\nu}{ \cal L}\] In the Weyl unimodular gauge \(g=1\) it reduces to \[t^{\mu\nu}_{E}=-\hat{\Gamma}^{\rho}_{\beta\lambda}\hat{\Gamma}^{\mu}_{\rho \tau}\gamma^{\lambda\tau}\gamma^{\beta\nu}-\hat{\Gamma}^{\rho}_{\beta\lambda} \hat{\Gamma}^{\nu}_{\rho\tau}\gamma^{\lambda\tau}\gamma^{\beta\mu}+\gamma^{ 
\mu\nu}\gamma^{\alpha\beta}\hat{\Gamma}^{\rho}_{\lambda\alpha}\hat{\Gamma}^{ \lambda}_{\beta\rho} \tag{52}\] Although we will not use it, let us mention the well-known [9] fact that \[(t_{E})^{\mu}_{\nu}=\partial_{\alpha}H^{\mu\alpha}_{\nu} \tag{53}\] Under a Weyl rescaling \[S_{{}_{\rm E}} =-\frac{1}{2\kappa^{2}}\int d^{n}xg^{1/n}\Big{[}g^{\mu\nu}\left( \Gamma^{\rho}_{\lambda\rho}\Gamma^{\lambda}_{\mu\nu}-\Gamma^{\rho}_{\lambda \nu}\Gamma^{\lambda}_{\mu\rho}\right)-\frac{(n-2)}{2n}\left(g^{\mu\nu}\Gamma^ {\lambda}_{\mu\nu}\frac{\nabla_{\lambda}g}{g}-\Gamma^{\lambda}_{\lambda\tau} \frac{\nabla^{\tau}g}{g}\right)- \tag{54}\] \[-\frac{(n-1)(n-2)}{4n^{2}}\frac{\nabla_{\lambda}g\nabla^{\lambda}g }{g^{2}}\Big{]}\] Under a general variation of the metric, \(\delta g_{\alpha\beta}\equiv h_{\alpha\beta}\), \[\delta S_{E}=g^{1/n}\Biggl{\{}\frac{1}{n}g_{\mu\nu}g^{\alpha\beta} \left(\Gamma^{\rho}_{\lambda\rho}\Gamma^{\lambda}_{\alpha\beta}-\Gamma^{\rho}_{ \lambda\alpha}\Gamma^{\lambda}_{\beta\rho}\right)-\left(\Gamma^{\rho}_{\lambda \rho}\Gamma^{\lambda}_{\mu\nu}-\Gamma^{\rho}_{\lambda\nu}\Gamma^{\lambda}_{ \mu\rho}\right)+\nabla_{\lambda}\Gamma^{\lambda}_{\mu\nu}-\frac{1}{n}g_{\mu\nu} g^{\alpha\beta}\nabla_{\lambda}\Gamma^{\lambda}_{\alpha\beta}-\] \[-\nabla_{\nu}\Gamma^{\lambda}_{\lambda\mu}+\frac{1}{n}g_{\mu\nu} \nabla^{\tau}\Gamma^{\lambda}_{\lambda\tau}+\frac{(n-2)}{2n}\biggl{(}\frac{ \nabla_{\mu}\nabla_{\nu}g}{g}-\frac{1}{n}g_{\mu\nu}\frac{\Box g}{g}\biggr{)}-\] \[-\frac{(n-2)(2n-1)}{4n^{2}}\biggl{(}\frac{\nabla_{\mu}g\nabla_{ \nu}g}{g^{2}}-\frac{1}{n}g_{\mu\nu}\frac{\nabla_{\lambda}g\nabla^{\lambda}g}{ g^{2}}\biggr{)}\Biggr{\}}h^{\mu\nu}+\] \[+\sqrt{|g|}\Biggl{\{}-\nabla_{\lambda}\biggl{[}g^{1/n-1/2}\biggl{(} \Gamma^{\lambda}_{\mu\nu}h^{\mu\nu}-\frac{1}{n}g^{\alpha\beta}\Gamma^{\lambda}_ {\alpha\beta}h\biggr{)}\biggr{]}+\nabla_{\nu}\biggl{[}g^{1/n-1/2}\biggl{(} \Gamma^{\lambda}_{\lambda\mu}h^{\mu\nu}-\frac{1}{n}g^{\mu\nu}\Gamma^{\lambda}_ 
{\lambda\mu}h\biggr{)}\biggr{]}-\] \[-\frac{(n-2)}{2n}\nabla_{\nu}\biggl{[}g^{1/n-3/2}\biggl{(}h^{\mu \nu}\nabla_{\mu}g-\frac{1}{n}hg^{\mu\nu}\nabla_{\mu}g\biggr{)}\biggr{]}\Biggr{\}} \tag{55}\] The corresponding EM is Weyl invariant and, up to total derivatives, traceless. The total derivative term is generated by \[N^{\mu} =-g^{1/n-1/2}\biggl{(}\Gamma^{\mu}_{\alpha\beta}h^{\alpha\beta}- \frac{1}{n}g^{\alpha\beta}\Gamma^{\mu}_{\alpha\beta}h\biggr{)}+g^{1/n-1/2} \biggl{(}\Gamma^{\lambda}_{\lambda\nu}h^{\mu\nu}-\frac{1}{n}g^{\mu\nu}\Gamma^{ \lambda}_{\lambda\nu}h\biggr{)}- \tag{56}\] \[-\frac{n-2}{2n}g^{1/n-3/2}\biggl{(}h^{\mu\nu}\nabla_{\nu}g-\frac{ 1}{n}hg^{\mu\nu}\nabla_{\nu}g\biggr{)}\] The corresponding Noether current is given by \[j^{\mu}_{N}=\sqrt{g}\left(T^{\mu}_{\rho}\nabla_{\alpha}\Omega^{\alpha\rho}+U^{ \mu\nu}_{\rho}\nabla_{\nu}\nabla_{\alpha}\Omega^{\alpha\rho}+V^{\mu(\lambda \tau)}_{\rho}\nabla_{\lambda}\nabla_{\tau}\nabla_{\alpha}\Omega^{\alpha\rho}\right) \tag{57}\] with \[T^{\mu}_{\rho} =\delta^{\mu}_{\rho}g^{\frac{2-n}{2n}}\biggl{[}g^{\alpha\beta} \left(\Gamma^{\tau}_{\lambda\tau}\Gamma^{\lambda}_{\alpha\beta}-\Gamma^{\tau }_{\lambda\beta}\Gamma^{\lambda}_{\alpha\tau}\right)-\frac{(n-2)}{2n}\left(g^{ \alpha\beta}\Gamma^{\lambda}_{\alpha\beta}\frac{\nabla_{\lambda}g}{g}-\Gamma^ {\lambda}_{\lambda\tau}\frac{\nabla^{\tau}g}{g}\right)-\] \[-\frac{(n-1)(n-2)}{4n^{2}}\frac{\nabla_{\lambda}g\nabla^{\lambda }g}{g^{2}}\biggr{]}\] \[U^{\mu\nu}_{\rho} =g^{\frac{2-n}{2n}}\biggl{[}2\,\Gamma^{\mu}_{\alpha\rho}g^{\alpha \nu}-\frac{2}{n}g^{\alpha\beta}\Gamma^{\mu}_{\alpha\beta}\delta^{\nu}_{\rho}- \Gamma^{\lambda}_{\lambda\rho}g^{\mu\nu}-\Gamma^{\lambda}_{\lambda\alpha}g^{ \alpha\nu}\delta^{\mu}_{\rho}+\] \[+\frac{2}{n}g^{\mu\tau}\Gamma^{\lambda}_{\lambda\tau}\delta^{\nu}_ {\rho}-\frac{n-2}{2n}\left(g^{\mu\nu}\frac{\nabla_{\rho}g}{g}+\delta^{\mu}_{ \rho}\frac{\nabla^{\nu}g}{g}-\frac{2}{n}\delta^{\nu}_{\rho}\frac{\nabla^{\mu} g}{g}\right)\biggr{]}\] 
\[V^{\mu(\lambda\tau)}_{\rho} =0 \tag{58}\] In the unimodular Weyl gauge they reduce to \[T^{\mu}_{\rho} =-\delta^{\mu}_{\rho}\gamma^{\alpha\beta}\hat{\Gamma}^{\tau}_{\lambda\beta}\hat{\Gamma}^{\lambda}_{\alpha\tau}\] \[U^{\mu\nu}_{\rho} =2\hat{\Gamma}^{\mu}_{\alpha\rho}\gamma^{\alpha\nu}-\frac{2}{n}\gamma^{\alpha\beta}\hat{\Gamma}^{\mu}_{\alpha\beta}\delta^{\nu}_{\rho}\] \[V^{\mu(\lambda\tau)}_{\rho} =0 \tag{59}\] Assuming that \(\left.\Omega_{\alpha\beta}\right|_{\partial M}=0\), the modified conservation law now reads \[\nabla_{[\underline{\alpha}}\nabla_{\mu}T^{\mu}_{\underline{\rho}]}+\nabla_{[\underline{\alpha}}\nabla_{\nu}\nabla_{\mu}U^{\mu\nu}_{\underline{\rho}]}+\nabla_{[\underline{\alpha}}\nabla_{\mu}\nabla_{\nu}\nabla_{\lambda}V^{\lambda(\mu\nu)}_{\underline{\rho}]}=0 \tag{60}\] ## 3 Schwarzschild spacetime in UG. Schwarzschild's metric in the unimodular gauge reads \[ds^{2}=\Big{(}1-\frac{r_{s}}{(3x+b)^{1/3}}\Big{)}dt^{2}-\frac{(3x+b)^{-4/3}}{1-\frac{r_{s}}{(3x+b)^{1/3}}}dx^{2}-(3x+b)^{2/3}\Big{[}\frac{d\psi^{2}}{1-\psi^{2}}+(1-\psi^{2})d\phi^{2}\Big{]} \tag{61}\] with \(r_{s}=2GM\) and the range of coordinates is \[-\frac{b}{3}\leq x\leq\infty\] \[-1\leq\psi\leq 1\] \[0\leq\phi\leq 2\pi \tag{62}\] In fact, this gauge in GR has been used by Schwarzschild himself (and also by Einstein). The subtle difference between UG and GR in the unimodular gauge has been discussed in [2]. The change \[r=(3x+b)^{1/3}\] \[\psi=-\cos\theta \tag{63}\] allows us to recover the metric in the usual form \[ds^{2}=\Big{(}1-\frac{r_{s}}{r}\Big{)}\,dt^{2}-\frac{1}{\big{(}1-\frac{r_{s}}{r}\big{)}}dr^{2}-r^{2}d\Omega^{2} \tag{64}\] with \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\).
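One can verify directly that the metric (61) is unimodular (a quick check included here for completeness): the product of its diagonal components gives

```latex
-\det\gamma_{\mu\nu}
=\Big(1-\tfrac{r_{s}}{(3x+b)^{1/3}}\Big)
 \cdot\frac{(3x+b)^{-4/3}}{1-\tfrac{r_{s}}{(3x+b)^{1/3}}}
 \cdot\frac{(3x+b)^{2/3}}{1-\psi^{2}}
 \cdot(3x+b)^{2/3}\,(1-\psi^{2})
=1
```

since the powers of \((3x+b)\) cancel, \(-\tfrac{4}{3}+\tfrac{2}{3}+\tfrac{2}{3}=0\); equivalently, \(dr=(3x+b)^{-2/3}dx\) and \(d\psi=\sin\theta\,d\theta\) trade the factor \(r^{4}\sin^{2}\theta\) of the determinant in the standard form (64) for unity.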
The unrestricted metric is given in general by a Weyl rescaling \[ds^{2}=\Omega^{2}(t,x,\psi,\phi)\Big{\{}\Big{(}1-\frac{r_{s}}{(3x+b)^{1/3}}\Big{)}dt^{2}-\frac{(3x+b)^{-4/3}}{1-\frac{r_{s}}{(3x+b)^{1/3}}}dx^{2}-(3x+b)^{2/3}\Big{[}\frac{d\psi^{2}}{1-\psi^{2}}+(1-\psi^{2})d\phi^{2}\Big{]}\Big{\}} \tag{65}\] and \[g=\Omega^{8} \tag{66}\] On the other hand, the Schwarzschild metric in Painlevé-Gullstrand coordinates is \[ds^{2}=\Big{(}1-\frac{r_{s}}{r}\Big{)}\,dt^{2}\mp 2\sqrt{\frac{r_{s}}{r}}\,dtdr-dr^{2}-r^{2}\,d\Omega^{2} \tag{67}\] In [11] it was shown that with the upper sign this coordinate system covers regions I and II in Kruskal's diagram, whereas with the lower sign it covers regions \(I^{\prime}\) and \(II^{\prime}\). This is worked out in detail in [12]. This metric can easily be written in the unimodular gauge by means of a new coordinate \(x\), with the function \(r(x)\) chosen such that \[r^{4}[x](r^{\prime}[x])^{2}=1 \tag{68}\] i.e. \[r=(3x+b)^{1/3} \tag{69}\] The Painlevé-Gullstrand metric in the unimodular gauge then reads \[ds^{2}=\left(1-\frac{r_{s}}{(3x+b)^{1/3}}\right)\,dt^{2}-\frac{2\sqrt{r_{s}}}{(3x+b)^{5/6}}dtdx-\frac{1}{(3x+b)^{4/3}}dx^{2}-(3x+b)^{2/3}\Big{[}\frac{d\psi^{2}}{1-\psi^{2}}+(1-\psi^{2})d\phi^{2}\Big{]} \tag{70}\] The horizon \(r=r_{s}\) has an associated Bekenstein-Hawking entropy [6] given by \[S=\frac{A}{4G}=\pi\frac{r_{s}^{2}}{G} \tag{71}\] which according to Wald [16] can be interpreted as the value of the Noether charge associated with diffeomorphism invariance. In the Weyl orbit there are static spacetimes without a Killing horizon (of course, there are also non-static spacetimes without any Killing vector). Given the fact that in the vicinity of the horizon \[g_{00}\sim\frac{x-x_{s}}{r_{s}^{3}}+\ldots \tag{72}\] (with \(x_{s}\equiv\frac{r_{s}^{3}-b}{3}\)), the rescaling needed to eliminate the horizon behaves as \[g(x)\sim\frac{1}{(x-x_{s})^{4}} \tag{73}\] with corresponding singularities in the derivatives. This is the price to pay to get rid of the horizon.
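For completeness (this integration step is ours and is not spelled out in the text), the solution (69) of the condition (68) follows by a one-line integration:

```latex
r^{4}\Big(\frac{dr}{dx}\Big)^{2}=1
\;\Longrightarrow\;
r^{2}\,\frac{dr}{dx}=\pm 1
\;\Longrightarrow\;
\frac{r^{3}}{3}=\pm x+\mathrm{const}
\;\Longrightarrow\;
r=(3x+b)^{1/3}
```

choosing the upper sign and writing the integration constant as \(b/3\). One then has \(dr=(3x+b)^{-2/3}dx\), which reproduces the \(dx^{2}\) and \(dt\,dx\) coefficients of (70).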
The Einstein energy pseudotensor reads in this case \[\hat{t}^{\mu\nu}=\begin{pmatrix}\hat{t}^{tt}&0&0&0\\ 0&\hat{t}^{rr}&\hat{t}^{r\psi}&0\\ 0&\hat{t}^{\psi r}&\hat{t}^{\psi\psi}&0\\ 0&0&0&\hat{t}^{\phi\phi}\end{pmatrix} \tag{74}\] in detail \[\hat{t}^{tt} =\frac{1}{2}\Bigg{\{}-(1+3x)^{-4/3}-\frac{1}{(2+3x)^{2}\big{(}(2+3x)^{ 1/3}-r_{s}\big{)}^{2}}\Bigg{\}}r_{s}^{2}+\gamma^{tt}f[x,\psi,r_{s}]\] \[\hat{t}^{r} =-\frac{r_{s}^{2}}{2(1+3x)^{4/3}}-\frac{\big{(}8+12x-3(2+3x)^{2/3} r_{s}\big{)}^{2}}{2(2+3x)^{10/3}\big{(}(2+3x)^{1/3}-r_{s}\big{)}^{2}}-\frac{2 \big{(}3+3x-(3+3x)^{2/3}r_{s}\big{)}}{3(1+x)(-1+\psi^{2})}-\] \[-\frac{2\big{(}(4+3x)^{1/3}-r_{s}\big{)}\psi(-1+\psi^{2})}{(4+3x) ^{1/3}}\times\] \[\times\big{(}-4-3x+(4+3x)^{2/3}r_{s}+\frac{3+3x-3^{2/3}(1+x)^{2/3 }r_{s}}{(-1+\psi^{2})^{2}}+\frac{1}{(3+3x)(-1+\psi^{2})}-\frac{1-\psi^{2}}{4+ 3x}\big{)}+\] \[+\gamma^{rr}f[x,\psi,r_{s}]\] \[\hat{t}^{\psi\psi} =-\frac{2}{(2+3x)^{2}}+2\psi^{2}-\frac{2\psi^{2}}{(-1+\psi^{2})^ {2}}-\frac{2\big{(}3+3x-(3+3x)^{2/3}r_{s}\big{)}}{3(1+x)(-1+\psi^{2})}+\gamma^ {\psi\psi}f[x,\psi,r_{s}]\] \[\hat{t}^{\phi\phi} =2-\frac{2}{(2+3x)^{2}}-\frac{2\psi^{2}}{(-1+\psi^{2})^{2}}+ \frac{2r_{s}(-1+\psi^{2})}{(4+3x)^{1/3}}+\gamma^{\phi\phi}f[x,\psi,r_{s}]\] \[\hat{t}^{r\psi} =\hat{T}^{\psi r}=\psi\Bigg{\{}-4-3x+(4+3x)^{2/3}r_{s}+\frac{3+3x -3^{2/3}(1+x)^{2/3}r_{s}}{(-1+\psi^{2})^{2}}+\] \[+\frac{1}{(3+3x)(-1+\psi^{2})}-\frac{1-\psi^{2}}{4+3x}\Bigg{\}}\] where \[f[x,\psi,r_{s}]= \frac{1}{2}\Bigg{\{}\frac{2}{(2+3x)^{2}}+(3+3x)^{-2}+(4+3x)^{-2}+ \frac{3r_{s}^{2}}{4(1+3x)^{4/3}}+\frac{r_{s}^{2}}{4(2+3x)^{4/3}}+ \tag{76}\] \[+\frac{r_{s}^{2}}{4(1+3x)^{2}\big{(}(1+3x)^{1/3}-r_{s}\big{)}^{2} }+\frac{r_{s}^{2}}{4(2+3x)^{2}\big{(}(2+3x)^{1/3}-r_{s}\big{)}^{2}}+\] \[+\frac{\big{(}8+12x-3(2+3x)^{2/3}r_{s}\big{)}^{2}}{2(2+3x)^{10/3} \big{(}(2+3x)^{1/3}-r_{s}\big{)}^{2}}-4\psi^{2}+\frac{4\psi^{2}}{(-1+\psi^{2} )^{2}}+\] \[+\frac{3+3x-3^{2/3}(1+x)^{2/3}r_{s}}{(1+x)(-1+\psi^{2})}+\frac{2+ 
3x-(2+3x)^{2/3}r_{s}}{(2+3x)(-1+\psi^{2})}+\] \[+\frac{\big{(}(2+3x)^{1/3}-r_{s}\big{)}(-1+\psi^{2})}{(2+3x)^{1/3}}+\frac{3\big{(}(4+3x)^{1/3}-r_{s}\big{)}(-1+\psi^{2})}{(4+3x)^{1/3}}\Bigg{\}}\] By construction, the value of the physical energy/mass \[m=\int t_{0}^{0}\,dx\wedge d\psi\wedge d\phi \tag{77}\] is the same in the whole Weyl orbit generated by \(\Omega(x)\) (that is, it is independent of \(g\)). ## 4 Conclusions In the Weyl invariant formulation of UG, the unimodular metric is represented as \[\gamma_{\alpha\beta}\equiv g^{-\frac{1}{4}}\,g_{\alpha\beta} \tag{78}\] This introduces an abelian Weyl symmetry under \[g_{\alpha\beta}\rightarrow\Omega^{2}(x)\,g_{\alpha\beta} \tag{79}\] The manifold of solutions is then a set of Weyl gauge orbits, with one representative in each orbit given by the unimodular metric \(\gamma_{\alpha\beta}\), and the other metrics in the orbit related to it by Weyl rescalings \[g_{\alpha\beta}=g(x)^{\frac{1}{4}}\gamma_{\alpha\beta} \tag{80}\] (with \(g(x)\) an arbitrary function). This Weyl invariance is a bit peculiar; we call it _tautological_, in the sense that it disappears under a field redefinition. This translates into the fact that the associated Weyl current vanishes identically, \[j_{W}^{\mu}=0 \tag{81}\] The issue treated in this paper is that physical characteristics of different spacetimes (like total mass in the asymptotically flat case, or even entropy or the presence of horizons) are not Weyl invariant. This means that in some sense naive TDiff conserved charges are not constant on Weyl orbits. It seems desirable to have a definition of TDiff conserved charges that is Weyl invariant. What we provide in this paper is precisely such a Weyl gauge invariant formulation of the TDiff charges, although we claim that the physical value is the one corresponding to the unimodular gauge fixing. This is more or less obvious owing to the way that Weyl invariance has been introduced in the theory.
## Acknowledgments This work has received funding from the Spanish Research Agency (Agencia Estatal de Investigación) through the grant IFT Centro de Excelencia Severo Ochoa MCIN CEX2020-001007-S. This project has also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN.
2306.09562
Reproducibility in NLP: What Have We Learned from the Checklist?
Scientific progress in NLP rests on the reproducibility of researchers' claims. The *CL conferences created the NLP Reproducibility Checklist in 2020 to be completed by authors at submission to remind them of key information to include. We provide the first analysis of the Checklist by examining 10,405 anonymous responses to it. First, we find evidence of an increase in reporting of information on efficiency, validation performance, summary statistics, and hyperparameters after the Checklist's introduction. Further, we show acceptance rate grows for submissions with more Yes responses. We find that the 44% of submissions that gather new data are 5% less likely to be accepted than those that did not; the average reviewer-rated reproducibility of these submissions is also 2% lower relative to the rest. We find that only 46% of submissions claim to open-source their code, though submissions that do have 8% higher reproducibility score relative to those that do not, the most for any item. We discuss what can be inferred about the state of reproducibility in NLP, and provide a set of recommendations for future conferences, including: a) allowing submitting code and appendices one week after the deadline, and b) measuring dataset reproducibility by a checklist of data collection practices.
Ian Magnusson, Noah A. Smith, Jesse Dodge
2023-06-16T00:39:25Z
http://arxiv.org/abs/2306.09562v1
# Reproducibility in NLP: What Have We Learned from the Checklist? ###### Abstract Scientific progress in NLP rests on the reproducibility of researchers' claims. The *CL conferences created the NLP Reproducibility Checklist in 2020 to be completed by authors at submission to remind them of key information to include. We provide the first analysis of the Checklist by examining 10,405 anonymous responses to it. First, we find evidence of an increase in reporting of information on efficiency, validation performance, summary statistics, and hyperparameters after the Checklist's introduction. Further, we show acceptance rate grows for submissions with more Yes responses. We find that the \(44\%\) of submissions that gather new data are \(5\%\)_less_ likely to be accepted than those that did not; the average reviewer-rated reproducibility of these submissions is also \(2\%\) lower relative to the rest. We find that only \(46\%\) of submissions claim to open-source their code, though submissions that do have \(8\%\) higher reproducibility score relative to those that do not, the most for any item. We discuss what can be inferred about the state of reproducibility in NLP, and provide a set of recommendations for future conferences, including: a) allowing submitting code and appendices one week after the deadline, and b) measuring dataset reproducibility by a checklist of data collection practices. ## 1 Introduction Reproducibility is a foundational component of scientific progress. NLP systems are complex, and even when their behavior is carefully measured, incentives to publish quickly and limitations in the publishing process can lead to underreporting of information necessary for reproducible science. 
The ramifications of this extend beyond the research community; the audience of NLP papers published years ago was largely other NLP researchers, but today the world is watching developments in the field, looking for advances that will lead to broadly-adopted applications. As the impact of NLP grows, so too do the consequences of reproducibility in our field. Of course, NLP is not the first field to evaluate reproducibility; some have even described a "reproducibility crisis" in science (Aarts et al., 2015; Baker, 2016). One tool designed to improve reproducibility is a checklist filled out at paper submission time. Such a checklist can descriptively remind authors of relevant information to report, while preserving the freedom for authors to do so however they see fit. For example, the journal _Nature_ requires authors fill out a Reporting Checklist for Life Sciences Articles (Nature, 2018). In 2019 NeurIPS started to require that submissions fill out the ML Reproducibility Checklist (Pineau et al., 2021), partly inspired by the _Nature_ checklist, and in 2021 AAAI required their own checklist.1 CVPR, ICCV, ECCV, and IJCAI provide a checklist but do not collect responses. Footnote 1: aaai.org/Conferences/AAAI-21/aaai21call/ In this work, we provide the first analysis of the NLP Reproducibility Checklist (Dodge et al., 2019). We have gathered 10,405 anonymized responses from EMNLP 2020 and 2021, NAACL 2021, and ACL 2021. For the latter two we are also able to obtain reviewer scores, reproducibility judgements, and feedback on the Checklist. Figure 1: Submissions to EMNLP 2021 binned by count of Yes responses to the NLP Reproducibility Checklist items. The Accept rate is given for each bin. Papers with more Yes responses are more likely to be accepted, except those that mark Yes to all checklist items, which we hypothesize contain responses which do not accurately represent the associated paper.
Our findings include: (1) Most checklist items are frequently reported, and submissions reporting them are more often accepted and perceived as reproducible. (2) Submissions that collect new data are accepted less and viewed as less reproducible, and these gaps are not explained by non-reporting of any current Checklist items. (3) Only about half of submissions report open-sourcing code, and many that do not also lack reporting on efficiency measures and even evaluation metrics. (4) A majority of reviewers describe the checklist as useful, and by contrasting responses to observed rates prior to the Checklist we evidence a possible increase in reporting. We conclude with a discussion of what can be inferred from these findings about the state of reproducibility in NLP and offer recommendations to address the gaps we have measured. ## 2 The NLP Reproducibility Checklist The NLP Reproducibility Checklist was originally introduced by Dodge et al. (2019). Each item on the checklist is phrased as a statement, like "The number of parameters in each model," and authors can mark Yes if they include that information in their paper, No if they do not include it in their paper, or N/A if that information does not make sense for their submission (e.g., they do not use any models to report parameter counts for). The checklist items were a part of the submission form, and it was required that authors fill it out to submit their paper.2 Thus, the checklist responses act as a (self-reported) overview of the contents of papers submitted to NLP conferences. Importantly, authors were not required to include any information in their papers; they were only required to indicate whether or not they did include information. Answers were made available to reviewers, who were expressly asked to assess the reproducibility of the work. The filled checklists were not released with the published papers. Footnote 2: NAACL 2021 permitted authors to leave items Blank.
## 3 Data and Methodology In Table 1 we list the Checklist items and the abbreviations we will use for them throughout this paper. There are three categories of items: (1) for all reported experimental results, (2) for results involving multiple experiments, like hyperparameter search, and (3) for all datasets used. 16 of 19 items appeared in all four conferences. We compare specific checklist items between the ML Reproducibility Checklist and the NLP Reproducibility Checklist conference variations in Appendix A.1. Full phrasing for each conference is listed in Table 6 (Appendix). \begin{table} \begin{tabular}{l l} \hline \hline Abbreviation & Full Checklist Item \\ \hline ModelDescription & A clear description of the mathematical setting, algorithm, and/or model \\ LinkToCode & A link to a downloadable source code, with specification of all dependencies, including external libraries \\ Infra & A description of computing infrastructure used \\ Runtime & Average runtime for each approach \\ Parameters & The number of parameters in each model \\ ValidationPerf & Corresponding validation performance for each reported test result \\ Metrics & Explanation of evaluation metrics used, with links to code \\ \hline NumTrainEvalRuns & The exact number of training and evaluation runs \\ HyperBound & Bounds for each hyperparameter \\ HyperBestConfig & Hyperparameter configurations for best-performing models \\ HyperSearch & Number of hyperparameter search trials \\ HyperMethod & The method of choosing hyperparameter values (e.g., uniform sampling, manual tuning, etc.) and the criterion used to select among them (e.g., accuracy) \\ ExpectedPerf & Summary statistics of the results (e.g., mean, variance, error bars, etc.) \\ \hline DataStats & Relevant statistics such as number of examples \\ DataSplit & Details of train/validation/test splits \\ DataProcessing & Explanation of any data that were excluded, and all pre-processing steps \\ DataDownload & A link to a downloadable version of the data \\ NewDataDescription & For new data collected, a complete description of the data collection process, such as instructions to annotators and methods for quality control \\ DataLanguages & For natural language data, the name of the language(s) \\ \hline \hline \end{tabular} \end{table} Table 1: Checklist abbreviations and standardized phrasing. Phrasing per conference in Table 6 (Appendix). \begin{table} \begin{tabular}{l r r r r} \hline \hline Conference & Sub & Wdrn & Main & Findings \\ \hline EMNLP 2020 & 3,666 & 660 & \(24.9\%\) & \(14.8\%\) \\ EMNLP 2021 & 4,815 & 1,555 & \(25.8\%\) & \(12.9\%\) \\ NAACL 2021 & 1,797 & 565 & \(38.7\%\) & N/A \\ ACL 2021 & 3,377 & 470 & \(24.4\%\) & \(15.7\%\) \\ \hline Overall & 13,655 & 3,250 & \multicolumn{2}{c}{\(39.4\%\)} \\ \hline \hline \end{tabular} \end{table} Table 2: **Submissions, Withdrawn/Desk-Rejects, and Main conference and Findings acceptance rates in our data.** Our data includes, for a given submission, the checklist responses (Yes, No, N/A for each item), Main + Findings acceptance status (Accept \(\in\) {accepted, rejected}), and the Track.
No data includes any deanonymizing information, such as authors or paper titles. For NAACL 2021 and ACL 2021, we have the following metadata for each review: overall recommendation score ("Should this paper be accepted to <conference name>?") averaged to \(\textsc{AvgRec}\in[1,5]\), perceived reproducibility score ("How do you rate the paper's reproducibility? Will members of the ACL community be able to reproduce or verify the results in this paper?") averaged to \(\textsc{AvgReprod}\in[1,5]\) or N/A if any reviewer responds N/A, and reproducibility checklist feedback ("Are the authors' answers to the Reproducibility Checklist useful for evaluating the submission?") aggregated by majority vote to \(\textsc{ChecklistFeedback}\in\) {Not useful, Somewhat useful, Very useful}.3 Footnote 3: Full reviewer instructions available at 2021.naacl.org/downloads/NAACL2021-Review-Form.pdf and 2021.aclweb.org/downloads/Review_Form.pdf As shown in Table 2, there were a total of 13,655 submissions across the four conferences. We remove all withdrawn and desk-rejected submissions from analysis,4 comprising \(3,250\) submissions (\(23.8\%\) of the data), leaving a total of \(10,405\) submissions for analysis. Footnote 4: Withdrawn and desk-rejected submissions lack reviews and include blank test submissions and place holders. We recognize that the checklist responses are self-reported information, and thus in some cases might not be accurate representations of the associated submission (e.g., authors may mark Yes to an item on the checklist when in fact their paper does not include that information). We discuss this in Appendix A.2. 
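The review-level aggregation described above (averaging scores, propagating N/A, and taking a majority vote on checklist feedback) can be sketched in a few lines. The record keys below are illustrative, not the actual internal schema of the review data:

```python
from statistics import mean
from collections import Counter

def aggregate_reviews(reviews):
    """Aggregate per-review metadata into submission-level values.

    `reviews` is a list of dicts with hypothetical keys:
      'rec'      -- overall recommendation in [1, 5]
      'reprod'   -- perceived reproducibility in [1, 5], or None for N/A
      'feedback' -- checklist-feedback label
    """
    avg_rec = mean(r["rec"] for r in reviews)
    # AvgReprod is N/A if any single reviewer answered N/A
    if any(r["reprod"] is None for r in reviews):
        avg_reprod = None
    else:
        avg_reprod = mean(r["reprod"] for r in reviews)
    # ChecklistFeedback is aggregated by majority vote
    feedback = Counter(r["feedback"] for r in reviews).most_common(1)[0][0]
    return avg_rec, avg_reprod, feedback

reviews = [
    {"rec": 4, "reprod": 4, "feedback": "Somewhat useful"},
    {"rec": 3, "reprod": 3, "feedback": "Somewhat useful"},
    {"rec": 5, "reprod": 5, "feedback": "Very useful"},
]
print(aggregate_reviews(reviews))
```

This mirrors the aggregation rules stated in the text (mean for AvgRec and AvgReprod, N/A if any reviewer marks N/A, majority vote for ChecklistFeedback), applied here to invented example reviews.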
To the best of our knowledge, the creators of the checklist indicated that the data would not be made public; while we currently do not plan to fully open source the data, the data can be made available upon request (and we welcome feedback on this policy).5 Footnote 5: "The filled-out checklist will not be released with the published version of an accepted paper, it is meant as a tool for authors and reviewers" (Dodge and Smith, 2020). In all analyses, error bars represent 95% confidence intervals. These are computed with the Clopper-Pearson interval for binary values and the bootstrap for continuous values, both using scipy version 1.9.1 (Virtanen et al., 2020). All comparisons of differences in results are absolute differences unless explicitly stated as relative. ## 4 What Can We Learn About How Reproducibility Already Works? We begin by measuring current practice, according to the (self-reported) Checklist data. Across all items and conferences, \(62.7\%\) of responses were Yes. Figure 4 shows that most items are reported in most submissions. Moreover, we can measure reviewers' perception of reproducibility as well as differences in rates of reporting for items among papers that do and do not get accepted by *CL review. **More Yes responses to checklist items associate with higher acceptance.** In Figures 1 and 2 we show positive associations between answering more items as Yes and Accept rate. Each point in these figures represents the Accept rate among all the submissions with the same number of Yes responses among items. We regress the Accept rate on a single variable counting checklist items answered Yes for a submission. When pooling responses across all shared questions on all conferences, \(r^{2}=0.53\).6 Notably, submissions with Yes responses to all items are consistently below the trend.
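The interval construction behind the error bars can be sketched in pure Python. This is a generic percentile bootstrap, not the paper's exact implementation (which used scipy 1.9.1, with `scipy.stats.bootstrap` for continuous values and a Clopper-Pearson interval for binary ones):

```python
import random

def bootstrap_ci(values, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=2000, level=0.95, seed=0):
    """Percentile-bootstrap confidence interval for `stat` of `values`.

    A minimal pure-Python sketch; a fixed seed keeps it deterministic.
    """
    rng = random.Random(seed)
    n = len(values)
    stats = sorted(
        stat([values[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo_idx = int((1 - level) / 2 * n_resamples)
    hi_idx = n_resamples - 1 - lo_idx
    return stats[lo_idx], stats[hi_idx]

# Illustrative binary accept indicators for 100 hypothetical submissions
accepts = [0] * 50 + [1] * 50
lo, hi = bootstrap_ci(accepts)
print(f"95% CI for accept rate: ({lo:.2f}, {hi:.2f})")
```

For binary outcomes like Accept, an exact Clopper-Pearson interval (e.g., via `scipy.stats.beta.ppf`) gives guaranteed coverage, which is why the paper uses it instead of the bootstrap for those quantities.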
We hypothesize in Appendix A.2 that these submissions include responses which do not accurately represent the associated paper; recall that authors were required to fill out the checklist in order to submit, so marking the same response to all items is, in some sense, as close as they can get to not filling it out. The lower acceptance rate suggests that reviewers are not scoring papers based on the responses to the checklist itself, instead evaluating the contents of the paper, as intended. Footnote 6: The \(r^{2}\) value for these regressions ranges across the conferences from \(0.22\) to \(0.46\), and the trend is consistently positive. **Reviewer assessed reproducibility associates with acceptance rate.** In Figure 3 we compare Accept rate across quantiles of AvgReprod and AvgRec. Though Accept rate grows much more slowly for AvgReprod than AvgRec, reviewers assessment of reproducibility is still evidently associated with acceptance. **In all but three checklist items, Yes responses are associated with higher Accept rates.** Figure 5.A presents Accept rates conditioned on a given response. Yes responses receive a \(0.9\%\) higher rate than the overall average, while No and N/A receive \(0.7\%\) and \(1.1\%\) lower rates respectively. Figure 5.B presents the three exceptions where answering Yes to that checklist item receives a lower than average rate. These are discussed in detail in Section 5 and Appendix A.3. **In all but one checklist item, Yes responses are associated with higher AvgReprod scores.** Figure 6.A shows the mean AvgReprod score conditioned on a given response in NAACL 2021 and ACL 2021. Reassuringly, Yes responses receive \(0.04\) higher scores than average, while No or N/A score \(0.04\) and \(0.04\) lower than average, respectively. Figure 6.B shows that LinkToCode has the highest score, \(0.18\) above average. We also highlight NewDataDescription as it is the only item with a lower than average score when answered Yes. 
This exception is discussed further in Section 5, where we hypothesize this reflects a lower than average perceived reproducibility of submissions presenting new data. ## 5 The Data Collection Gap Natural language processing has long been a field driven by data. A body of work has proposed best practices for documenting the characteristics and creation of datasets (Bender and Friedman, 2018; Gebru et al., 2018; Hutchinson et al., 2020; Dodge et al., 2021; Rogers et al., 2021; Pushkarna et al., 2022). Among other concerns, such documentation is critical for the difficult task of dataset reproduction (Recht et al., 2019, _inter alia_). From the checklist item NewDataDescription, which asks that collection is described if new data is presented, we find that \(38.2\%\) and \(6.3\%\) of submissions mark Yes and No, respectively. This implies that \(44.5\%\) of submissions to our NLP conferences collect new data; if almost half of submissions collect new data, we argue that data collection and dissemination practices deserve further attention. This also highlights clear room for improvement in the community: \(14.1\%\) of submissions that collect new data do not describe how it was collected, totalling \(650\) papers. **Submissions with new data have lower than average Accept rate and AvgReprod scores.** Alarmingly, submissions that collect new data (i.e., submissions that mark Yes or No to NewDataDescription) have a \(5.1\%\) lower acceptance rate than those that do not (i.e., mark N/A or Blank). A low acceptance rate for answering No to NewDataDescription would, by itself, be encouraging, perhaps indicating that reviewers expect data collection to be well documented. However, Figure 5B shows that, even when answering Yes to describing the data collection process, the Accept rate is \(1.6\%\) lower than average. Figure 3: Accept rates across quantiles for perceived reproducibility (AvgReprod) and overall recommendation (AvgRec) for NAACL and ACL 2021. Perceived reproducibility trends positively with acceptance. Figure 2: Accept rate among submissions binned by count of Yes responses. Yes response count and Accept rate trend consistently positive. All-Yes responses are notably below trend, as discussed in Appendix A.2. Meanwhile, there is a similar gap in AvgReprod. Scores for submissions that collect data are lower by \(2.4\%\) relative to those that do not. Again, this is not limited to submissions that fail to describe the data collection process. Figure 6B reveals that the mean score over submissions that do describe their data collection is \(0.04\) below the mean of all submissions. When considering only accepted papers, however, the gap in AvgReprod disappears. The review process ends up with accepted dataset papers with similar AvgReprod to non-dataset papers, but along the way many more dataset than non-dataset submissions are rejected and those have lower AvgReprod. We hypothesize that these phenomena arise both because dataset papers may indeed be more challenging to (re)produce and also because of the persistent (and problematic) tendency to value modeling over data collection (Rogers, 2021). **High compliance among the dataset checklist items does not reveal the source of the Accept rate and AvgReprod gap.** DataStats, DataSplit, DataProcessing, and DataLanguages receive the highest rates of reporting other than ModelDescription and Metrics. This only grows when looking just at submissions presenting new datasets, reaching \(97.3\%\), \(91.7\%\), \(91.4\%\), and \(86.6\%\) respectively, and NewDataDescription is also reported in \(86.0\%\) of these submissions. Figure 4: Yes response rate per item. Most items are reported for most submissions. Note that NAACL 2021 respondents were able to leave questions Blank. Other answers shown in Figure 12 (Appendix). Figure 5: Accept rates over all conferences for submissions with a given response. (A) shows rate conditioned on response regardless of item. (B) shows the only items where Yes Accept rates are below average. Total count of each response is shown on the bar. Items with higher No acceptance than Yes could indicate the community has not fully embraced these practices as norms. High acceptance rate for NewDataDescription N/A indicates that papers that do not collect data are more likely to be published. Figure 6: Reviewer perceived reproducibility score (AvgReprod\(\in[1,5]\)) for submissions with a given response from NAACL and ACL 2021, excluding ones with N/A AvgReprod. (A) shows score conditioned on response regardless of item. (B) shows the items with highest (LinkToCode) and lowest (NewDataDescription) Yes score. Total count of each response is shown on the bar. NewDataDescription is the only item with a below average Yes score. Unlike the other dataset items, DataDownload is less frequently reported, but its occurrence and associated Accept rates and AvgReprod scores are similar whether considering submissions presenting new data or not. This suggests that additional checklist items for data collection should be introduced to measure where this gap in perceived reproducibility is coming from. **28.9% of submissions with new data do not provide a downloadable version of the data.** More generally, a clear area for improvement is that \(25.3\%\) of submissions overall answer No to DataDownload; providing a link to download a dataset is still important for previously released datasets as it might be ambiguous which version of a dataset was used. But for newly collected data, answering No to DataDownload implies the data is not publicly available at all. Moreover, when DataDownload is No, the rate of submissions reporting the collection process in NewDataDescription drops \(14.2\%\). Figure 7 further reveals the interaction between DataDownload and NewDataDescription.
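The kind of item-pair interaction reported in Figure 7 can be computed from raw responses with a simple cross-tabulation. The records below are invented for illustration and are not the paper's actual data:

```python
from collections import Counter

def crosstab(records, row_key, col_key):
    """Share of submissions per response pattern for two checklist items,
    skipping submissions where either item is N/A (as in Figure 7)."""
    counts = Counter(
        (r[row_key], r[col_key])
        for r in records
        if r[row_key] != "N/A" and r[col_key] != "N/A"
    )
    total = sum(counts.values())
    return {pattern: n / total for pattern, n in counts.items()}

# Hypothetical submissions (not the real responses)
records = [
    {"DataDownload": "Yes", "NewDataDescription": "Yes"},
    {"DataDownload": "Yes", "NewDataDescription": "Yes"},
    {"DataDownload": "No",  "NewDataDescription": "No"},
    {"DataDownload": "No",  "NewDataDescription": "N/A"},
]
print(crosstab(records, "DataDownload", "NewDataDescription"))
```

The same pattern proportions, joined with per-pattern accept rates, would reproduce a Figure 7-style summary for any pair of checklist items.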
Figure 7: Proportion (row labels) and Accept rates (horizontal purple bars) for all response patterns on dataset availability and creation (excluding instances where either item is N/A or Blank). Nearly 1 in 11 of these neither share the data nor describe its collection, yet \(28.9\%\) of those are accepted. When a new dataset submission provides neither a description of the data collection process nor access to the data itself, this leaves very little for reviewers to assess, at least with regard to the data contributions of the paper. Yet \(28.9\%\) of these papers are accepted. ## 6 Code (Un)availability We see that on average \(45.9\%\) of submissions report linking to code (\(47.5\%\) for accepted papers). We see in Figure 6 that whether submissions answer LinkToCode as Yes or No has the largest difference in AvgReprod scores, with a gap of \(0.30\). Yet Accept rates for submissions with or without LinkToCode are nearly the same. **We find similar rates of links to code as at ML conferences.** Pineau et al. (2021) reveal a \(38.8\%\) self-reported rate of code availability at submission time for NeurIPS 2019. They find this number drops to \(27.7\%\) when checked by at least one reviewer. Extrapolating from this false reporting rate, the true code availability rate among accepted papers in our data might be \(32.8\%\). Meanwhile, a study on ICML 2019 by Chaudhuri and Salakhutdinov (2019) finds \(36\%\) of submitted and \(43\%\) of accepted papers have code at submission time, though it is unclear if these are self-reported. **Previous efforts to measure camera-ready code availability have found widely different rates than our reported LinkToCode at submission time.** Unfortunately, our data does not cover code availability at camera-ready, except insofar as some authors may interpret this checklist item to permit promises to later release code. \(24.3\%\) of papers at NAACL 2022 opted in to submitting a code link to the Reproducibility Track and received an Open Source Code badge.7 We recognize this was optional for authors, and thus it is likely the case that the true number of camera-ready papers that included a link to code was higher. The studies mentioned before found that \(74.4\%\) and \(64\%\) of camera-ready papers had links to code at NeurIPS 2019 and ICML 2019. Narrowing the range of these measurements should be a worthwhile effort, as these studies found code being available _during_ review was useful in 1,315 reviews in NeurIPS 2019, and \(18.3\%\) of ICML 2019 reviewers surveyed were able to look at code and found it useful. Footnote 7: naacl2022-reproducibility-track.github.io/ **Items on compute efficiency are completely reported in only 29.8% of submissions without code.** Figure 8 shows patterns for these efficiency items that occurred more than 100 times. While Accept rates are somewhat lower when items are not reported, \(21.2\%\) of these without-code submissions report none of the efficiency measures. There may be unavoidable impediments to making code available, such as intellectual property. But in this case even greater emphasis should be placed on reporting efficiency measures, as estimating these without code is quite difficult. Similarly, \(19.6\%\) of submissions with no code report No to explaining Metrics, which may render evaluations irrecoverably ambiguous if there are varying implementations of a metric. Figure 8: Proportion (row labels) and Accept rates (horizontal purple bars) for efficiency response patterns with \(>100\) submissions when LinkToCode is No and no responses are N/A or Blank. More than 1 in 5 of these do not report any efficiency items, which are difficult to infer without source code. ## 7 How Effective is the Checklist? Dodge and Smith (2020) describe the Checklist as intended to improve "reporting of the setup and results of the experiments that authors have conducted."
Though self-reported data do not directly answer this question, we find potential evidence of such an improvement. Diachronic analysis also shows that reporting rates may have stagnated after initial improvement. We also examine reviewer and author views on the Checklist. **Compared against manually checked data from before the Checklist introduction, our data shows increases in 8 of 10 items.** Figure 9 shows rates of a subset of items that were manually checked by Dodge et al. (2019) in 50 papers sampled from EMNLP 2018. The self-reported rates available in our data are not ideal comparisons as they likely overestimate. However, the EMNLP 2018 sample may also overestimate, as only "experimental results" are included for which we would expect fewer N/As, given the Checklist's focus on empirical work. **There is little variation in response proportions between conferences.** Excluding two types of outliers likely caused by changes in the Checklist (see Appendix A.3), the maximum difference between conferences for an item is \(6.6\%\) and the maximum difference averaged over all items is \(2.2\%\). This does demonstrate that measured response patterns are robust across conferences. However it also indicates that reproducibility reporting has stagnated over this one year period. **When asked, a majority of reviewers found the Checklist to be somewhat or very useful.** In NAACL 2021 and ACL 2021 reviewers gave feedback on the Checklist. \(59.9\%\) found the checklist "Somewhat Useful," \(17.0\%\) found it "Very Useful," and \(23.2\%\) found it "Not Useful." While this is higher than the \(34\%\) of reviewers who answered "yes" that the similar NeurIPS 2019 Checklist was "useful for evaluating the submission" (Pineau et al., 2021), it is worth noting that respondents to the question for NeurIPS could answer that they did not read the checklist results. 
Figure 9: Reporting rates before and after the implementation of the Checklist. Dodge et al. (2019) manually check a subset of items in 50 randomly sampled EMNLP 2018 papers. We compare to accepted papers from all conferences in our data. While our self-reported data likely overestimate rates, it appears all but 2 items are now reported more often.

**Author comments from submissions where the majority of reviewers found the Checklist "Not Useful" show possible gaps in checklist coverage.** Some comment on not training models or using hyperparameters from previous work. Many such submissions are represented among the \(22.0\%\) that answer N/A to all hyperparameter questions. Others comment on referring readers to citations for details of standard models, data, or metrics. Re-elaboration is pedagogically important but, comments argue, especially onerous for survey papers. Finally, a comment notes that the Checklist is less relevant to psycholinguistics and cognitive modeling, and indeed the N/A rate of "Linguistic Theories, Cognitive Modeling and Psycholinguistics" Track submissions is \(33.7\%\), an increase of \(14.6\%\) above the N/A rate over all Tracks. ## 8 Discussion Our findings from the NLP Reproducibility Checklist can both help inform new interventions and guide improvements to future checklists that will measure the outcomes of those interventions. These findings suggest that, after an initial increase, rates of reporting have stagnated in the period examined and will need new approaches to improve further. **The conference system should better support papers that collect new data.** As discussed in Section 5, papers that collect new data have a \(5.1\%\) lower acceptance rate than those that do not. Whether or not this gap is a cause or effect of the lack of prestige given to data work that Rogers (2021) describes, increasing awareness and resources for this work can help more high-quality data reach publication.
Checklists should also increase coverage of this topic. In our data a single item, NewDataDescription, covers all reporting regarding data collection. We find that papers with new data are perceived as less reproducible whether they answer No or Yes to describing how they collected data. A combination of several factors likely leads reviewers to score the reproducibility of papers with new data lower by \(2.4\%\) relative to papers without. To discover which are lacking, best practices in data reproducibility documentation (Gebru et al., 2018; Dodge et al., 2021) should be tracked individually with checklists. **Incentivize authors to release code.** We find that releasing code is the single most influential checklist item on perceived reproducibility. This aligns with work across diverse fields that argues open source code is key for transparent and reproducible science (Eglen et al., 2017; Celi et al., 2019; Shamir et al., 2013). These works also suggest that beyond reproducibility, open source code enables more impactful research by allowing other researchers to build on introduced methods and better understand findings through reading code. However, we find that less than half of papers in our study report releasing code at submission. We encourage conferences to incentivize code release at submission and especially camera-ready, and authors should be made aware of the significant benefit that code submission can have for the review process.8 Initiatives like the NAACL 2022 Reproducibility Track are a step in the right direction, as they publicly recognize open source code and verify code availability rather than only relying on self-reporting. However, in our data we see no evidence that code availability is increasing over time, so more direct incentives from publication venues are needed.
Footnote 8: Even when code cannot be made publicly available due to intellectual property concerns, private code submission should be facilitated and extra emphasis should be placed on reporting items such as efficiency measures that are hard to reconstruct without public code. **Make checklist responses public.** Self-reported data is notoriously unreliable, but making the checklist responses public will add accountability.9 In addition, the checklist responses can reference specific sections and act as an index of the paper, so a reader knows where to look for what information. This will be implemented at ACL 2023, and we recommend other conferences follow. Footnote 9: This should still be combined with studies that manually audit reporting in papers. **Conferences should allow submission of checklists, unlimited appendices, and code a week after the main deadline.** Doing so can help establish a norm of code submission as _part_ of the review process. Likewise, additional time could improve completeness and accuracy of the checklist. Many pieces of information important for reproducibility are appropriate to include in the appendix of a paper without counting towards the page limit (e.g., a full list of hyperparameter values). This need not increase the burden on reviewers, as they can consult checklists rather than the appendix to assess reporting. **Looking Forward** Checklists collected during submission can measure practices in NLP at a comprehensive scale. To our knowledge, our work and Pineau et al.'s (2021) are the only analyses of submitted reproducibility checklists at AI conferences. These are examples of metascience in AI, or applying scientific rigor to the process of AI research; we expect that as NLP matures, we will see more examples of work analyzing and improving the scientific process. 
There is also other work that manually audits papers (Fokkens et al., 2013; Gundersen and Kjensmo, 2018; McDermott et al., 2019; Haibe-Kains et al., 2020; Marie et al., 2021), which can complement self-reported checklists and other conference submission metadata (Chen et al., 2022) with validated samples. As standard practices in our field evolve, we will have to update all parts of the conference process, from checklists to reviews to paper presentations. As a positive example, ACL Rolling Review implemented the Responsible NLP Checklist,10 which includes ethics as well as reproducibility items. While we do not have data with which to evaluate the Responsible NLP checklist, our findings show the need for just such efforts to expand the coverage of checklists to better serve the community. Footnote 10: aclrollingreview.org/responsibleNLPresearch/ ### Limitations Our analyses rely on data from checklists filled in by authors and ratings provided by reviewers. Checklists are self-reported and thus not necessarily accurate. We discuss where these bad-faith responses might influence our results in Appendix A.2. Another data limitation is that phrasing changes between conferences for some items, and 3 items do not appear in all conferences (see Appendix A.1). NAACL 2021 also introduces Blank as a possible answer when respondents do not choose any answer. There is also possible ambiguity between the No and N/A answers, as it is apparent from the checklist open-text comments that some authors used No when the item was not applicable to their work. Our data also only covers four conferences across 2020 and 2021, and as such it is difficult to assess any temporal trends. Reviewer data is also subject to inaccuracy; for instance, reviewer perceived-reproducibility scores are only subjective estimations of the likelihood of actual reproducibility. Rushed reviewers could easily miss where some important information is reported in a paper.
Moreover, we only have reviewer data for 2 of 4 conferences. Our finding that papers that collect data have a gap in acceptance and perceived reproducibility relies on an indirect inference about which papers collect data. Checklists did not ask this explicitly, but rather NewDataDescription should be answered N/A for all papers that do not collect data. Our findings about code and data availability are limited by the ambiguity of when they must be made available to qualify for answering Yes. It is evident from the open-text checklist comments that some authors answer Yes, No, or even N/A when they have not yet made code or data available but plan to do so on acceptance. Any self-reported inaccuracies in our data would particularly affect our findings about the impact of the Checklist introduction on reporting rates. By definition, we are not able to compare to self-reported rates from before Checklist introduction, so we instead rely on Dodge et al.'s (2019) manually checked rates. 9 of the items in our data are not covered in the previous work, but the items that are covered share similar phrasing. Finally, pooling results over conferences can obscure conference-specific dynamics, such as differences in which items have lower-than-average Yes Accept rates, discussed in Appendix A.3. We check that trends that we highlight in our analyses are consistent across conferences, and we also present unaggregated figures in the appendix. Likewise, we find that Accept rates are nearly identical across conferences (see Appendix A.4), enabling us to contrast against an overall acceptance rate. ### Ethics Statement Scientific reproducibility is key to the benefits science can bring to society. Simply put, findings that cannot be reproduced cannot be relied upon, which can lead to wasted societal resources or even to harmfully incorrect understandings that misguide interventions.
Our work focuses on the use of checklists to improve reporting of reproducibility information in scientific publications. While overly prescriptive and general rules about reproducibility could stifle less represented research communities whose practices may be less well understood by conference organizers, checklists attempt to mitigate this risk by only reminding authors of possibly salient information while still permitting authors to determine which items are or are not applicable. At the same time, checklists which are filled out and collected for data analysis have the additional ethical risks associated with work that attempts to make social practices legible. That is, a checklist may neglect to cover practices used in a research community and thereby efface their role in the overall scientific endeavor, or conversely some practice may receive unfair scrutiny in excess of that given to other, more prestigious practices. In the long term, checklists are perhaps most important as documents for guiding new generations of researchers writing their first papers, and thus even without being enforced they may still be taken as normative statements about best practices in the field. To guide efforts to improve reproducibility in the field of NLP, we have analyzed responses to the NLP Reproducibility Checklist collected by four conferences. The Checklist data is covered by the default terms as it has no stated license, and we use it with direct permission from the conference organizers who collected it. The authors of the first version of the checklist state that it is intended for "improved reporting of the setup and results of the experiments that authors have conducted" and that it will be used to "quantitatively analyze our checklist responses" (Dodge and Smith, 2020). We have endeavored to maintain the privacy of respondents by keeping the data anonymized and presenting results at a sufficient level of aggregation to prevent deanonymization.
Nevertheless, all work that seeks to describe the opinions of groups of humans carries an ethical burden to do so accurately and consistently with the wishes of those represented. To that end, we take care to point out limitations in what can be inferred from the data, and as originally intended by the data creators we do not make the data publicly available. ## Acknowledgements We thank the organizers and Program Chairs that provided the data for our analysis: Trevor Cohn, Yulan He, Yang Liu (EMNLP 2020), Fei Xia, Wenjie Li, Roberto Navigli (ACL 2021), Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur (NAACL 2021), Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih (EMNLP 2021). We thank Julian Michael, Kyle Lo, Lucy Lu Wang, and Ari Holtzman for fruitful conversations about metascience and feedback on paper drafts.
2301.02490
Fuzzers for stateful systems: Survey and Research Directions
Fuzzing is a security testing methodology effective in finding bugs. In a nutshell, a fuzzer sends multiple slightly malformed messages to the software under test, hoping for crashes or weird system behaviour. The methodology is relatively simple, although applications that keep internal states are challenging to fuzz. The research community has responded to this challenge by developing fuzzers tailored to stateful systems, but a clear understanding of the variety of strategies is still missing. In this paper, we present the first taxonomy of fuzzers for stateful systems and provide a systematic comparison and classification of these fuzzers.
Cristian Daniele, Seyed Behnam Andarzian, Erik Poll
2023-01-06T13:02:15Z
http://arxiv.org/abs/2301.02490v1
# Fuzzers for stateful systems: Survey and Research Directions ###### Abstract. Fuzzing is a security testing methodology effective in finding bugs. In a nutshell, a fuzzer sends multiple slightly malformed messages to the software under test, hoping for crashes or weird system behaviour. The methodology is relatively simple, although applications that keep internal states are challenging to fuzz. The research community has responded to this challenge by developing fuzzers tailored to stateful systems, but a clear understanding of the variety of strategies is still missing. In this paper, we present the first taxonomy of fuzzers for stateful systems and provide a systematic comparison and classification of these fuzzers. stateful fuzzing, state model, active learning
## 2. Concepts and terminology By a stateful system we mean a system that takes a sequence of messages as input, producing outputs along the way, and where each input may result in an internal state change. To avoid confusion, we reserve the term _message_ or _input message_ for the individual input that the System Under Test (SUT) consumes at each step and the term _trace_ for a _sequence_ of such messages that make up the entire input. We use the term _response_ for the output that the SUT produces along the way. In the case of a synchronous protocol, there is usually one response after each input message. In this case, the state machine describing the input-output behaviour will be a Mealy machine. The input language of a stateful system consists of two levels1: 1) the language of the individual messages, which we will refer to as the _message format_, and 2) the language of traces, built on top of that. A description or specification of such an input language will usually come in two parts, one for each of the levels: for example, a context-free grammar for the message format and a finite state machine describing sequences of these messages. We will call the latter the _state model_ or, if it is described as a state machine, the _protocol state machine_. Footnote 1: For text-based protocols, as opposed to binary protocols, there may even be a third level, namely the character set or character encoding used, but none of the fuzzers studied use that. The state model usually involves a notion of _message type_ where messages of one type trigger a different transition than messages of another type. The set of message types is then the input alphabet of any protocol state machine. This alphabet will typically abstract away from payloads inside messages.
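To make this two-level picture concrete, a protocol state machine over message types can be sketched as a small Mealy machine. The states, message types, and responses below are invented for illustration and do not come from any protocol or fuzzer discussed in this survey:

```python
# A toy protocol state machine as a Mealy machine (hypothetical states and
# message types, for illustration only). Each (state, input message type)
# pair maps to (next state, response type).
TRANSITIONS = {
    ("INIT", "HELLO"): ("READY", "OK"),
    ("READY", "LOGIN"): ("AUTHED", "OK"),
    ("AUTHED", "DATA"): ("AUTHED", "OK"),
    ("AUTHED", "QUIT"): ("CLOSED", "BYE"),
}

def run_trace(trace):
    """Feed a trace (a sequence of message types) to the state machine and
    collect one response per input message; undefined inputs are rejected."""
    state, responses = "INIT", []
    for msg_type in trace:
        state, response = TRANSITIONS.get((state, msg_type), (state, "ERROR"))
        responses.append(response)
    return state, responses

# The out-of-order DATA (sent before LOGIN) is rejected with ERROR;
# the rest of the trace is accepted.
run_trace(["HELLO", "DATA", "LOGIN", "DATA", "QUIT"])
```

Note how the alphabet here consists of message types only: the payloads (usernames, data bytes) are abstracted away, exactly as described above.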
For some protocols, messages simply include an instruction byte in a header that determines the message type. For output messages, it is common to distinguish between error responses and non-error responses, and possibly a finer-grained distinction of error responses based on the error code. Fuzzers that try to infer the protocol state machine (using active or passive learning, as will be discussed later) may require the user to specify the abstraction function that maps concrete messages to message types or even to provide an implementation of this function. There can be two abstraction functions, one for input messages and one for responses. In some protocols the format of input messages and responses is very similar. For fuzzing it is then a good strategy to also include responses as inputs as this may trigger unexpected behaviour: client and server are likely to share a part of the codebase and one may then accidentally process messages only intended for the other.2 Footnote 2: CVE-2018-10933 is an interesting example of a bug of this kind in libssh: if the message that the server sends to a client to confirm that the client has successfully authenticated is sent to the server, a malicious client could skip the whole authentication phase. We use the term _protocol state_ to refer to the abstract state of an SUT that determines how future input messages are handled. The SUT will have a concrete program state, which is related to this protocol state, but which usually carries much more detail. The term 'state' can quickly become overloaded: not only does the SUT have state, even if it implements a stateless protocol, but the fuzzer itself also has a state. We use the term _stateful fuzzing_ to refer to the fuzzing of stateful systems, but we avoid the term 'stateful fuzzer' as even a fuzzer for stateless systems will have internal state.
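As a sketch of what such a pair of abstraction functions might look like, consider a made-up protocol whose input messages carry an instruction byte in the first position and whose responses start with a numeric status code (both conventions are invented for illustration, not taken from any particular protocol or fuzzer):

```python
def abstract_input(message: bytes) -> str:
    """Map a concrete input message to its message type, assuming (for this
    toy binary protocol) that the first byte is the instruction byte."""
    instruction_names = {0x01: "HELLO", 0x02: "LOGIN", 0x03: "DATA", 0x04: "QUIT"}
    return instruction_names.get(message[0], "UNKNOWN")

def abstract_response(response: str) -> str:
    """Map a concrete text response to an abstract type, distinguishing
    error responses (by their numeric code) from non-error responses."""
    code = int(response.split(maxsplit=1)[0])
    return f"ERROR_{code}" if code >= 400 else "OK"
```

For example, `abstract_input(b"\x02user:pass")` yields `"LOGIN"` (the credentials payload is discarded), and `abstract_response("530 Not logged in")` yields `"ERROR_530"`, giving a learner the finer-grained error distinction mentioned above.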
There are basically two ways for an SUT to record the protocol state: 1) it can keep track of the protocol state using program variables that record state information or 2) the state can be more implicitly recorded by the program point where the execution is at (and possibly the call stack). Of course, these two ways can be combined. For fuzzers that use a grey- or white-box approach (discussed in more detail later), the difference can be important: white-box approaches that observe the values of program variables will work better for 1) than for 2), whereas grey-box approaches that observe execution paths will work better for 2) than for 1). _Stateless vs stateful systems._ There is not always a clear border between stateless systems and stateful systems. For example, if a system has a memory leak then it is (unintentionally) stateful, even though its behaviour will appear to be stateless for a very long time, namely until it runs out of memory and crashes. More generally, we can think of any stateful system that takes a sequence of messages as input as a stateless system which takes that whole trace as single input. Conversely, we can view a stateless system that takes a single complex value as input as a stateful system that processes a sequence of smaller inputs, say bits or bytes. For instance, a stateless program that processes JPEGs, which will always process the same JPEG in the same way, can be viewed as a _stateful_ program that takes a sequence of bytes as input, and which will process the same byte in different ways depending on where in the JPEG image it occurs. But a fundamental difference between a stateful system and a stateless system viewed as a stateful one in this way is that the former will typically provide some observable output after processing each input. Another difference is that knowing that inputs are made up of smaller messages can help in making useful mutations, by swapping the order of messages or repeating messages.
Some stateless systems can process sequences of inputs like stateful systems do, but then the idea is that previous inputs do not have any effect on how subsequent inputs are handled. Some fuzzers use this possibility to avoid the overhead of having to restart the SUT between inputs. This is called _persistent fuzzing._ This does then involve a sequence of inputs, but it is the polar opposite of stateful fuzzing: the goal is not to explore the statefulness of the SUT, but rather it presupposes that there is no statefulness. ## 3. Background This section discusses existing classifications of fuzzers used in the literature, as this provides a starting point for our classification of stateful fuzzing, and makes some initial observations about how and why fuzzing stateful systems is different. There are some good survey papers about fuzzing, but none pay much attention to issues specific to stateful fuzzing. The survey by Zhu et al. (Zhu et al., 2017) classifies close to 40 fuzzers. Only two of these, namely AFLNet (Zhu et al., 2017) and de Ruiter et al. (de Ruiter et al., 2017) specifically target stateful systems. The more extensive survey by Manes et al. (Manes et al., 2017) categorises over 60 fuzzers. Thirteen of these are fuzzers for 'network protocols' so presumably these are fuzzers geared to fuzzing stateful systems; all of these 13 fuzzers are black-box fuzzers. One section in this survey (Section 5.1.2) discusses the statefulness of the SUT: here it discusses the inference of state machine models. The only attempt at classifying fuzzers for stateful systems that we are aware of is given in the paper by Yu et al.
about SGPFuzzer (Yu et al., 2017): Table 1 in this paper lists twelve other fuzzers for stateful systems (or "protocol fuzzers" in the terminology used in the paper), namely AutoFuzz (Zhu et al., 2017), AspFuzz, SECFuzz (Aspespe et al., 2017), Sulley 3, BooFuzz (Zhu et al., 2017), Peach 4, SNOOZE (Bou et al., 2017), PULSAR (PulSAR, 2017), TLS-fuzzer, DTLS-fuzzer, ICS-fuzzer, and NLP-fuzzer. The authors identify four challenges based on the shortcomings of these 12 fuzzers and then design SGPFuzzer to address these. Unfortunately, the definitions of the features used for the comparison are left implicit and the comparison fails to point out some very fundamental differences between tools, for instance that the man-in-the-middle nature of AutoFuzz and SecFuzz comes with an important inherent limitation, namely that the order of messages cannot be fuzzed (as we discuss in Section 4.6). ### Existing classifications of fuzzers The standard classification of fuzzers in the literature (e.g. [11; 31; 50]) distinguishes _black-box_, _grey-box_ and _white-box fuzzers_, where the black-box fuzzers are sub-divided into _grammar-based_ fuzzers and _mutation-based fuzzers_. Even though this classification is fairly standard, the terminology varies between papers, and there are combinations of approaches that do not neatly fit into one of these categories. We discuss this classification in more detail below. _Black-box fuzzers._ As the name suggests, black-box fuzzers only observe the SUT from the outside. To stand any chance of producing interesting inputs, black-box fuzzers require some knowledge of the input format. One approach here, taken by _generation-based_ aka _grammar-based_ fuzzers, is that the fuzzer is given knowledge of the input format in the form of a grammar or model. A downside of such fuzzers is the effort required to produce such a model or grammar.
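As a minimal illustration of the generation-based approach, the sketch below expands a hand-written toy grammar into concrete messages. The grammar and its symbols are invented for illustration; real grammar-based fuzzers such as Peach or Sulley support far richer input models:

```python
import random

# A toy message grammar: each nonterminal expands to one of several
# alternatives; plain strings are terminals, tuples are sequences of symbols.
GRAMMAR = {
    "<msg>": [("<cmd>", " ", "<arg>", "\r\n")],
    "<cmd>": ["HELLO", "LOGIN", "DATA", "QUIT"],
    "<arg>": ["<word>", ("<word>", ",", "<word>")],
    "<word>": ["foo", "bar", "A" * 1024],  # oversized token to provoke bugs
}

def generate(symbol="<msg>", rng=random):
    """Recursively expand a grammar symbol into a concrete message string."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit as-is
    choice = rng.choice(GRAMMAR[symbol])
    parts = choice if isinstance(choice, tuple) else (choice,)
    return "".join(generate(part, rng) for part in parts)
```

The oversized `<word>` alternative illustrates how a grammar can be deliberately 'loose' so that generation alone, without a separate mutation step, still produces some malformed inputs.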
Another approach, taken by _mutation-based fuzzers_, is that the fuzzer is supplied with a set of sample inputs which are then mutated in the hope of discovering bugs. Most grammar-based fuzzers allow users to supply a grammar for an arbitrary input format (or protocol) they want to fuzz, but there are also grammar-based fuzzers which have a specific input format hard-coded in them, such as the KiF fuzzer [1] for the SIP protocol. There are also commercial fuzzers for fuzzing specific protocols, for example, Codenomicon's Defensics fuzzer 5 (since acquired by Synopsys), which grew out of the PROTOS [25] project at the University of Oulu in Finland started in 1999. Footnote 5: www.codenomicon.com/defensics/ Some fuzzers combine the generation-based and mutation-based approach. A grammar-based fuzzer should not only produce grammatically correct inputs, but also malformed ones; this either has to involve some form of mutation or the grammar has to be too 'loose' to begin with. Conversely, a mutation-based fuzzer can be given some knowledge about the input format, for instance by providing a list of keywords or byte values with a special meaning, which the fuzzer can insert during mutation in the hope of generating interesting inputs. _White-box fuzzers._ White-box fuzzers require access to the (source or binary) program code and analyse that code in order to provide interesting inputs. With access to the code, it is possible to see which branches there are and then construct inputs that trigger particular branches. Typically white-box fuzzers use symbolic or concolic execution to construct interesting test cases. Microsoft's SAGE fuzzer [19] is the best-known example of this class. _Grey-box fuzzers._ Grey-box fuzzers occupy the middle ground and can observe some aspects of the SUT as it executes and use this feedback to steer the fuzzer. This is also called _evolutionary fuzzing_ as the inputs will gradually evolve into more interesting mutations.
Grey-box fuzzers can be considered as a special kind of mutational fuzzer because the evolution always involves mutation. Grey-box fuzzers are sometimes called smart mutational fuzzers; the black-box mutational fuzzers that lack a feedback mechanism to guide the evolution are then called dumb mutational fuzzers. Grey-box fuzzers often require some instrumentation of the code or running the code in some emulator. The approach has been popularised by the fuzzer AFL, which observes the code execution path - or, more precisely, the branches taken in the execution - to see if some input mutation results in new execution paths. Grey-box fuzzers that observe the execution path in this way are also called _coverage-guided greybox fuzzers (CGF)_. This approach has proved to be very successful, providing much better coverage than 'dumb' mutational fuzzers but without the work of having to provide a grammar. All fuzzers that involve mutation - dumb mutational fuzzers, evolutionary fuzzers, but also grammar-based fuzzers that use mutation - can be parameterised by _mutation primitives_, for instance random bit flips, repeating sub-sequences of the input, or inserting specific bytes, characters, or keywords. _Alternative classifications._ Instead of classifying fuzzers into white-, grey- and black-box, an orthogonal classification is to consider the kind of applications targeted and the kind of input this involves [(31)]: e.g. some fuzzers are geared towards fuzzing file formats, others to network traffic, and others still to web applications or OS kernels. Fuzzers for web applications are often called 'scanners'. There is a relation between this classification and statefulness: applications that take a file as input are usually not stateful, whereas applications that implement a network protocol usually are.
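A few such mutation primitives, at both the byte level and the message (trace) level, might be sketched as follows. These are generic illustrations, not the implementation of any particular fuzzer; a trace is represented as a list of byte-string messages:

```python
import random

def flip_bit(msg: bytes, rng=random) -> bytes:
    """Byte-level primitive: flip one random bit in one message."""
    i = rng.randrange(len(msg))
    return msg[:i] + bytes([msg[i] ^ (1 << rng.randrange(8))]) + msg[i + 1:]

def mutate_trace(trace, rng=random):
    """Trace-level primitives: swap, repeat, or drop messages in a trace,
    or fall back to mutating the bytes of a single message."""
    trace = list(trace)  # copy so the seed trace is left intact
    op = rng.choice(["swap", "repeat", "drop", "bytes"])
    i = rng.randrange(len(trace))
    if op == "swap" and len(trace) > 1:
        j = rng.randrange(len(trace))
        trace[i], trace[j] = trace[j], trace[i]
    elif op == "repeat":
        trace.insert(i, trace[i])
    elif op == "drop" and len(trace) > 1:
        del trace[i]
    else:
        trace[i] = flip_bit(trace[i], rng)
    return trace
```

Reordering, repeating, and dropping messages are exactly the trace-level mutations that only make sense once the fuzzer knows the input is a sequence of messages rather than one opaque blob.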
### White-, grey-, and black-box fuzzing for stateful systems

For stateful systems some basic observations about the classification into white-, grey-, and black-box can be made:

* The terms 'grey-box fuzzing' and 'evolutionary fuzzing' are often used as synonyms, but for stateful systems they are not: for a stateful system the evolution of inputs can also be steered by the outputs of the SUT, which is then evolutionary but black-box. This is a key difference between a stateful and a stateless system: the response that the SUT produces is an observation that the fuzzer can make without any instrumentation of the code.
* For grammar-based fuzzers it does not really matter whether the SUT is stateful or not: the grammar describing the system can describe both the message format and the protocol state machine.
* For dumb mutational black-box fuzzers it also does not matter that much whether the SUT is stateful or not. Of course, it helps if the fuzzer is aware of the fact that inputs are traces of messages, so that it can try swapping the order of messages, removing messages or repeating messages as interesting mutations. The same goes for any grammar-based fuzzer, which should also try re-ordering, repeating or dropping messages as interesting corruptions of the grammar.
* The techniques used in grey-box and white-box fuzzing to observe program execution may shed some light on the state that the SUT is in. But as discussed in Section 2, there are different ways in which the SUT can record protocol state: the state can be recorded in some program variables, in the program point that the SUT is in (and possibly the call stack), or in a combination of these. The way in which the SUT does this can make a difference in how well some grey- or white-box technique can observe the protocol state. The statefulness of the SUT may complicate observation for a grey- or white-box fuzzer.
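The trace-level mutations mentioned above - swapping, removing and repeating messages - can be sketched minimally as follows; message contents are left untouched, only the sequence is mutated:

```python
import random

def mutate_trace(trace, rng):
    """Trace-level mutations for a stateful SUT: swap, drop, or repeat
    whole messages in a sequence, leaving message contents untouched."""
    trace = list(trace)
    op = rng.choice(("swap", "drop", "repeat"))
    i = rng.randrange(len(trace))
    if op == "swap" and len(trace) > 1:
        j = rng.randrange(len(trace))
        trace[i], trace[j] = trace[j], trace[i]   # re-order two messages
    elif op == "drop" and len(trace) > 1:
        del trace[i]                              # omit a message
    else:
        trace.insert(i, trace[i])                 # repeat a message
    return trace
```

A stateful fuzzer would combine such trace-level mutations with byte-level mutations of the individual messages.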
For white-box fuzzers that rely on symbolic or concolic execution, the statefulness of the SUT is obviously a serious complication: symbolic execution of a program handling a single input can already be tricky, and the execution of a sequence of symbolic inputs will be an order of magnitude harder. For example, if the SUT implements some loop to handle incoming messages, then that loop would have to be unwound.

### Bug detection for stateful systems

In addition to a mechanism to generate inputs, a fuzzer also requires some mechanism to observe the SUT to detect whether an input triggered a bug. Typically fuzzers look for memory corruption bugs that crash the SUT, using sanitisers such as ASan (AddressSanitizer) [40], MSan (MemorySanitizer) [43], or older, less efficient sanitisers such as Valgrind. When fuzzing programs written in memory-safe languages, e.g. when fuzzing Java programs with Kelinci [26], instead of looking for memory corruption bugs we can look for uncaught runtime exceptions; even if these bugs cannot be exploited in the way memory corruption can, they can still lead to Denial of Service problems.

A type of bug that is specific to stateful systems is a deviation from the correct state behaviour: if a system is expected to behave in a specific way, for instance by only responding to certain inputs after authentication, then a deviation from this behaviour may be a security issue. For security protocols such as TLS, any deviation from the expected state behaviour is highly suspicious: security protocols are very fragile, and even small deviations may break the security guarantees the protocol aims to provide. Unlike the detection of memory corruption bugs, this cannot be totally automated: it either requires a specification of expected behaviour (or, conversely, of unwanted behaviour), for instance as a state machine or in temporal logic, or it requires some post-hoc manual analysis of the state behaviour inferred by the fuzzer.

## 4. Fuzzers for stateful systems
We have identified seven categories of fuzzers for stateful fuzzing. Table 1 summarises the main characteristics of each category. Some categories can be regarded as a combination or sub-category of other categories, as illustrated in Fig. 1.

| Section | Category | Input required | Generates state model | Human interaction |
|---|---|---|---|---|
| 4.1 | Grammar-based | Grammar | No | No |
| 4.2 | Grammar-learner | Sample traces | Yes | Yes / No |
| 4.3 | Evolutionary | Sample traces | No | Yes / No |
| 4.4 | Evolutionary grammar-based | Grammar | No | No |
| 4.5 | Evolutionary grammar-learner | Sample traces | Yes | No |
| 4.6 | Man-in-the-Middle fuzzers | Live traffic | No | Yes / No |
| 4.7 | Machine learning fuzzers | Many sample traces | No | No |

Table 1. The seven categories of fuzzers with their main characteristics. Human interaction refers to manual code or grammar annotation.

Figure 1. The five categories of fuzzers that involve a grammar, an evolutionary feedback mechanism, or both.

Before we discuss each category in more detail in the sections below, we first discuss common ingredients involved in some of them:

* Some fuzzers require sample traces as input, either a few traces to act as seeds for further mutation, or many traces from which a grammar can be inferred or a machine-learning model can be trained.
* Many fuzzers involve some form of grammar. This can be a grammar describing just the message format, just the protocol state machine, or both. Some fuzzers require such grammars as input, but others can produce grammars, inferred during the fuzzing, as output.
* Many fuzzers use some form of learning to infer information about the message format, the protocol state machine, or both.
Evolution can be regarded as a form of learning, because it produces and uses new knowledge about the input format, even though this knowledge is (usually) not expressed in the form of a regular expression, state machine, or context-free grammar. Evolution is a form of _active learning_, because it involves interaction with the SUT, where the next input we try can depend on the outcome of previous tests. Some fuzzers use forms of _passive learning_ instead of (or in addition to) such active learning. By this we mean approaches where information about the input format is inferred _after_ a set of traces has been collected, so without interactively trying new experiments.

There is a long line of research into algorithms for inferring formal language descriptions, either actively or passively, which includes research into _regular inference_ and _grammatical inference_, focusing specifically on the inference of regular expressions and context-free grammars, respectively. Research in this field is presented at the bi-annual International Conference on Grammatical Inference (ICGI) and there are entire textbooks on the subject (e.g. [14]). For active learning of a protocol state machine, an algorithm that can be used is L* [2] or one of its improvements, e.g. the TTT algorithm used in LearnLib [24]. For the passive learning of protocol state machines, some fuzzers use ad-hoc solutions. For instance, the fuzzer by Hsu et al. [22] uses an algorithm called partial finite state automaton reduction. An important limitation of some learning algorithms, notably L* and its improvements, is that they cannot deal with non-deterministic behaviour of the SUT, as it will cause the algorithm to diverge.

A very different form of (passive) learning used by some fuzzers is _machine learning_. This does not produce knowledge in a nice concrete format like a regular expression, finite state machine, or context-free grammar. Also, it typically requires more samples.
Still, possible advantages are that there are many existing machine learning approaches that can be used and that these may cope more easily with non-deterministic behaviour of the SUT.

Below we first give a general description of the seven categories of fuzzers. In the subsequent sections, we discuss each category in more detail:

1. _Grammar-based fuzzers_ Any grammar-based fuzzer can be used to fuzz stateful systems without any special adaptations. The grammar that is supplied to the fuzzer will have to describe the two levels of the input language, with some rules of the grammar describing the message format and some rules describing the protocol state machine. Apart from that, no change in the fuzzer itself is needed, except that, of course, swapping, dropping and repeating messages are useful - if not essential - mutation strategies for the fuzzer to include. But for a stateless SUT where the format of the inputs is quite complex, it can also be useful to include swapping, dropping and repeating sub-components of inputs as mutation strategies.
2. _Grammar-learner fuzzers_ Whereas grammar-based fuzzers require the user to provide a grammar, these fuzzers are able to extract a grammar from a set of sample traces. They can be considered as the sequential composition of two tools: a _grammar extractor_ that infers the grammar from a set of sample traces (using so-called passive grammatical inference) and a grammar-based fuzzer that then does the actual fuzzing using this inferred grammar. As for grammar-based fuzzers, the statefulness of the SUT does not make any fundamental difference for grammar-learner fuzzers: it only means that the grammar will have two levels. So grammar-learner fuzzers can be applied to stateless as well as stateful SUTs.
3.
_Evolutionary fuzzers_ These fuzzers basically take the same approach as stateless evolutionary fuzzers such as AFL: they take some sample traces as initial input and mutate these, using a feedback system to steer the mutation process. Of course, evolutionary fuzzers for stateful systems should be aware that an input trace is a sequence of messages and should include swapping, omitting or repeating these messages as mutation strategies. A difference between stateful and stateless systems when it comes to evolutionary approaches to fuzzing is that the responses that a stateful SUT provides after individual messages can be used in the feedback to guide the evolution, as mentioned before in Section 3.1.
4. _Evolutionary grammar-based fuzzers_ These fuzzers use both a grammar provided by the user to generate (correct, protocol-compliant) traces and an evolution mechanism to mutate these traces. We can think of them as evolutionary fuzzers that use a grammar, instead of a set of sample input traces, to provide the initial traces that will be mutated. We can also think of them as grammar-based fuzzers that include a feedback mechanism to steer the evolution of mutations. So in Fig. 1 they are the intersection of the evolutionary fuzzers and the grammar-based fuzzers.
5. _Evolutionary grammar-learner fuzzers_ This is the most complex category of fuzzers. These tools all use some form of grammar to describe the protocol state machine; one also uses a grammar to describe the message format. They involve two feedback mechanisms to steer two forms of evolution: (i) one for the mutation of individual messages, in the style of conventional evolutionary fuzzers like AFL, and (ii) another for the mutation of message sequences, which then infers a protocol state machine. The second form of evolution is based on the response that the SUT provides as feedback, so it is black-box.

The final two categories of fuzzers are very different from the five above:
1.
_Man-in-the-Middle fuzzers_: These fuzzers sit in the middle between the SUT and a program interacting with it and modify messages going to the SUT, as illustrated in Fig. 7. Responses coming back from the SUT are left untouched. These fuzzers can take a dumb mutational approach to modifying the messages, but they may also leverage a protocol specification (automatically inferred or given as input) to modify messages.
2. _Machine learning fuzzers_ These fuzzers use a Machine Learning (ML) model trained on a large set of input traces. The model outputs slightly different, hopefully malicious, mutated traces. Machine learning methods used by these fuzzers include Seq2seq and Seq-GAN. These fuzzers are similar to the grammar-learner fuzzers in that they require a set of sample traces as input, which is then used to infer a model of the input format, which in turn is the basis for the fuzzing. The key difference is that for the grammar-learner fuzzers this model is a grammar, whereas for the machine learning fuzzers it is an ML model.

Figure 2. Evolutionary fuzzers
Figure 3. Grammar-based fuzzers
Figure 4. Grammar learner fuzzers
Figure 5. Evolutionary grammar-based fuzzers
Figure 6. Evolutionary grammar learner fuzzers

### Grammar-based fuzzers

Table 2 lists the grammar-based fuzzers. As shown in Fig. 3, these fuzzers use a grammar provided by the user. In the case of a stateful SUT, this grammar should describe the syntax of the messages and the protocol state machine. For some fuzzers, e.g. Peach 7 and SNOOZE [7], this grammar is supplied in some XML format.

Footnote 7: Here we mean the community edition, available at [https://gitlab.com/gitlab-org/security-products](https://gitlab.com/gitlab-org/security-products), which lacks some features of Peach Fuzzer Professional.

The obvious downside of these fuzzers is that they require an accurate grammar. Producing one can be a lot of work and it can be challenging and error-prone.
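To make the two-level grammar such fuzzers need concrete, here is a minimal sketch for a hypothetical toy protocol (all names here are illustrative assumptions, not any particular fuzzer's input format): the upper level is a protocol state machine describing the message order, the lower level renders each abstract message type into a concrete message.

```python
import random

# Upper level: protocol state machine, state -> [(message type, next state)]
STATE_MACHINE = {
    "INIT":   [("LOGIN", "AUTHED")],
    "AUTHED": [("GET", "AUTHED"), ("QUIT", "DONE")],
}
# Lower level: per-message-type format rule rendering a concrete message
MESSAGE_FORMAT = {
    "LOGIN": lambda rng: f"LOGIN user{rng.randrange(10)}",
    "GET":   lambda rng: f"GET /file{rng.randrange(10)}",
    "QUIT":  lambda rng: "QUIT",
}

def generate_trace(rng, max_len=6):
    """Random walk over the state machine, rendering each abstract
    message type into a concrete, protocol-compliant message."""
    state, trace = "INIT", []
    while state != "DONE" and len(trace) < max_len:
        msg_type, state = rng.choice(STATE_MACHINE[state])
        trace.append(MESSAGE_FORMAT[msg_type](rng))
    return trace
```

A fuzzer would then derive malformed variants from such compliant traces, either by corrupting individual messages or by violating the state-machine rules.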
Not all errors will matter - or matter equally: if the grammar is a bit too 'loose' this is not much of a problem, but if the grammar omits interesting parts of the language it may be, as this would mean that the fuzzer will not explore that part of the language. Ideally the documentation of the SUT, or the specification of the protocol it implements, simply provides a formal grammar that can be used. However, this will often not be the case: documentation or specifications may be unclear or incomplete. That the SUT is stateful does not make a difference here. Still, in earlier research [37] we found that documentation is more likely to include a clear (but informal) specification of the message format than of the protocol state machine. The protocol state machine is often specified - usually very poorly - in prose scattered throughout specification documents. Some grammar-based fuzzers, e.g. SNOOZE [7] and Sulley, come with grammars for some standard protocols, so that for these the hard work of producing a grammar has already been done for the user, but for other protocols the user still has to do it themselves.

| Fuzzer | Based on | Mutates |
|---|---|---|
| Peach | - | Message |
| SNOOZE [7] | - | Message |
| PROTOS [25] | - | Message |
| Sulley | - | Message |
| BooFuzz [35] | Sulley | Message |
| Fuzzowski 6 | BooFuzz [35] | Message |
| AspFuzz [27] | - | Message, Trace |

Table 2. Grammar-based fuzzers

Figure 7. Man-in-the-middle fuzzers
Figure 8. Machine learning fuzzers

### Grammar learner fuzzers

Table 3 presents the grammar learner fuzzers. These fuzzers operate in two phases: first, they infer a grammar from a set of collected traces; then they do the actual fuzzing using that inferred grammar, just like a grammar-based fuzzer would. So each of these fuzzers is effectively the composition of two tools:
1.
a _grammar learner_: a special component whose goal is to build a grammar as similar as possible to the real one 2. an _actual fuzzer_: in principle, any of the grammar-based fuzzers discussed in the previous section.

All these fuzzers require a comprehensive and complete set of traces to give good fuzzing performance, as e.g. the makers of PULSAR explicitly point out [18]. For the first phase, the fuzzers in Table 3 not only use different inference techniques, but also try to infer different aspects of the input format:

* PULSAR [18] infers both the message format and a protocol state machine, passively, from observed traffic. The learning techniques it uses are the ones developed earlier for the PRISMA fuzzer [28]. These can also infer rules for dependencies between messages, such as increasing sequence numbers. As the authors note, the approach relies on the completeness of the set of observed network traces and will be unable to model protocol paths not included in this traffic.
* GLADE [8] uses a new active learning algorithm for inferring context-free grammars, which can infer both the message format and the protocol state machine. Strictly speaking, GLADE is not a fuzzer, but just a tool for inferring a context-free grammar. This inference uses active learning, so it does involve some fuzzing of the SUT. But GLADE has been extended to be used as a front-end for a grammar-based fuzzer. In Table 3 we refer to this extension as GLADE+ to avoid confusion. The algorithm used by GLADE+ is shown to have better precision and recall than the active learning algorithms L* [2] and RPNI [34] for the case studies tried out by the makers [8].
The results of GLADE+ are also compared with AFL for some of these case studies. However, the case studies are not typical stateful protocols but include interpreters for Python, Ruby and JavaScript. As AFL is best at fuzzing binary formats, it is maybe not that surprising that GLADE+ beats AFL here.
* The fuzzer by Hsu et al. [22] infers the protocol state machine - i.e. using passive learning - with a new algorithm they introduce. Once this state machine is inferred, the SUT can be fuzzed. Here a collection of mutation primitives is used, including mutations of individual messages and mutations that reorder messages in the input trace. The first phase of this fuzzer, i.e. inferring a protocol state machine given a known message format, is very similar to what tools like LearnLib [38] do. But it uses passive learning, whereas LearnLib uses active learning with a variant of L*. Hsu et al. report that they also tried active learning for this initial phase, using a variant of L*, as they also did in earlier work (Hsu et al., 2019), but abandoned that approach because of 1) the difficulty of constructing the concrete messages that active learning requires and 2) it being inefficient and not learning an accurate model.

| Fuzzer | Learns | Based on | Input needed |
|---|---|---|---|
| PULSAR [18] | State model, Message fields | Passive learning (using PRISMA [28]) | Traces |
| GLADE+ [8] | Message fields | Active learning (using GLADE [8]) | Traces |
| Hsu et al. [22] | State model | Passive learning (using partial finite state automaton reduction [22]) | Message field specification, Traces |

Table 3. Grammar learner fuzzers

### Evolutionary fuzzers

Table 4 presents the evolutionary fuzzers. As shown in Fig. 2, these use feedback to guide the mutation of inputs.
This feedback can use different types of observation, namely the five options listed below, or a combination of them:

* F1. _Response._ Some fuzzers use the response of the SUT. This is the only type of observation that can be done black-box.
* F2. _Coverage._ Some fuzzers observe branch coverage in the style of AFL, i.e. using a bitmap to observe branches taken during execution. This is a grey-box approach that requires either re-compilation to instrument the code or running the code in some emulator, just like AFL does.
* F3. _Branches._ Some fuzzers observe branch coverage not by observing all branches like AFL does, but by observing specific branches that are manually marked as interesting by the user. This is a white-box approach and requires manual annotation of code by the user.
* F4. _Variables._ Some fuzzers observe the values of specific program variables. This is a white-box approach and requires manual annotation of code by the user. The idea is that the observed program variables record information about the protocol state.

| Fuzzer | Feedback system | Based on | Input needed |
|---|---|---|---|
| nyx-net [39] | Coverage | AFL | Target binary, Protocol specification, Seed inputs (optional) |
| FitM fuzzer | Coverage | AFL | Client binary, Server binary, Seed inputs |
| SNPSfuzzer | Coverage | AFL | Target binary, Seed inputs |
| Chen et al. | Coverage, Branches | AFL, Manual code annotation | Target binary, Seed inputs |
| SGFuzz | Coverage, Variables | AFL, Automatic code annotation | Target binary, Seed inputs |
| IJON [4] | Coverage, Variables | AFL, Manual code annotation | Target source code |

Table 4. Evolutionary fuzzers
* F5. _Memory._ Instead of observing specific individual program variables, one fuzzer observes memory segments: it takes snapshots of memory areas to see if inputs affect them. The idea is that changes in the memory signal a change in the protocol state. The only fuzzer using this, StateAFL, is not an evolutionary fuzzer but one of the more complicated evolutionary grammar-learner fuzzers, so it is discussed in Section 4.5.

All the evolutionary fuzzers in Table 4 are based on AFL, so all of them at least observe branch coverage in the style of AFL (i.e. F2, _Coverage_) to steer the evolution, but some tools use an additional feedback mechanism on top of this.

Regarding F3: the fuzzer by Chen et al. allows the user to mark specific branches in the code. The idea is that taking these marked branches is an indication of the SUT moving to a different protocol state. Given that the AFL instrumentation already observes branch coverage, it is somewhat surprising that additional observation of selected branches improves the performance of the fuzzer. The fuzzer does not just observe whether these branches are taken during execution; when this happens, it effectively starts a new AFL session for this specific state (i.e. using a new bitmap for recording branches and creating a new queue of messages to mutate). So whereas AFL and all the other AFL-like evolutionary fuzzers in Table 4 maintain a single bitmap to record which branches have been taken, the fuzzer by Chen et al. has one such bitmap for each of the marked branches. This allows it to learn different strategies for generating test cases for different protocol states. Intuitively this makes sense: messages in different stages of a protocol may have different formats, so learning different mutation strategies, each tailored to a specific protocol state, can improve the fuzzing.

Regarding F4: IJON [4] observes specific program variables during the fuzzing. The user has to mark these in the source code.
The idea is that the user marks variables that record information about the protocol state. SGFuzz is an improvement of this: instead of the user having to annotate code to specify which program variables record interesting state information, the fuzzer automatically infers which program variables have an enumeration type, and it assumes that all these program variables record state information.

As discussed in Section 2, there are different ways in which the SUT can record its protocol state. If the protocol state is recorded in program variables, approach F4 of IJON and SGFuzz can be expected to work well. If the program point is used to record the protocol state, approach F3 as used by Chen et al. might work better.

All evolutionary fuzzers require initial seeds as input traces to start fuzzing. The choice of these initial seeds can influence the performance. Some fuzzers provide some automation to create initial seeds: for instance, Nyx-net [39] provides functionality (in the form of a Python library) to generate seed messages from PCAP network dumps. The creators of IJON [4] note that in some cases IJON's feedback mechanism works so well that manually picking good seeds is no longer necessary to obtain good coverage; in some experiments, they could simply use a single uninformative seed containing only the character 'a' [4].

### Evolutionary grammar-based fuzzers

Table 5 presents the evolutionary grammar-based fuzzers. These fuzzers combine the _grammar-based_ and _evolutionary_ approaches, as shown in Fig. 5: they require a grammar as the starting point to generate messages, but they also include some feedback to observe the effects of inputs in an effort to fuzz more intelligently. RESTler (Becker et al., 2017) is an open-source fuzzer by Microsoft for fuzzing REST APIs.
It uses a grammar in the form of an OpenAPI 8 specification (as can be produced by Swagger tools) to generate messages, but it then also observes responses, for instance to avoid combinations of messages that always lead to the same error. The fact that RESTful APIs typically come with a grammar in the form of an OpenAPI spec is a big win: it means we can use a grammar-based approach but avoid the downside of having to produce a grammar.

Footnote 8: [https://www.openapis.org](https://www.openapis.org)

SPFuzz (Shen et al., 2017) and EPF (F

### Evolutionary grammar learner fuzzers

For all tools except StateAFL, the feedback used here is the response from the SUT (i.e. F1) or some information extracted from that response; for example, for AFLNet it is the response code in the response, for EPF it is just information about whether the connection was dropped. StateAFL observes whether the content of long-lived memory areas has changed (i.e. F5).

LearnLib and the fuzzer by Doupe et al. are the odd ones out in Table 6 in that they are very limited in the kind of fuzzing they do. They do not mutate individual messages but only try combinations of a fixed set of input messages to infer the state machine. Here LearnLib uses the TTT algorithm [24], an improvement of L*. The fuzzer by Doupe et al. uses an ad-hoc algorithm developed for the tool: it is a fuzzer for web applications, so the response of the SUT is a web page, and the tool analyses these web pages for similarity in an attempt to crawl the entire website.

### Man-in-the-middle fuzzers

Table 7 presents the man-in-the-middle fuzzers. As shown in Fig. 7, these fuzzers sit between the SUT and another application that interacts with the SUT, to intercept the communication and modify the communication going to the SUT. If the SUT is a server, then this other application will be a client. A fundamental limitation of these fuzzers is that they are only able to modify the order of the messages in a limited way.
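The man-in-the-middle set-up can be sketched minimally as follows; `sut_respond` is a hypothetical callable standing in for the connection to the real server. Note that, in line with the limitation just mentioned, the order of the messages is never changed:

```python
import random

def mitm_fuzz_session(client_msgs, sut_respond, rng, fuzz_prob=0.3):
    """Man-in-the-middle fuzzing of one session: messages from the client
    are (sometimes) mutated before being forwarded to the SUT; responses
    from the SUT are passed back untouched, and the message order is
    always preserved."""
    responses = []
    for msg in client_msgs:
        if rng.random() < fuzz_prob:
            buf = bytearray(msg)
            buf[rng.randrange(len(buf))] ^= 1 << rng.randrange(8)  # mutate one bit
            msg = bytes(buf)
        responses.append(sut_respond(msg))  # response forwarded unmodified
    return responses
```

Because the client drives the session, only message sequences the client itself produces are ever explored, which is exactly why exploration of the protocol state machine stays limited.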
In fact, AutoFuzz [20] and Black-Box Live Protocol Fuzzing [44] do not modify the order of messages at all, so the exploration of the protocol state machine will be very limited. SECFuzz [45] does fuzz the order of the messages, but only a little bit, namely by inserting well-formed messages at random positions in the input trace (i.e. in the sequence of messages sent by the other application). Even though the overall set-up is the same, the fuzzers use different techniques:

| Fuzzer | Learns | Feedback (i) | Feedback (ii) | Inputs needed | Based on |
|---|---|---|---|---|---|
| AFLNet [36] | State model | Coverage | Response | Target binary, Sample traces | AFL |
| FFUZZ [10] | State model | Coverage | Response | Target binary, Sample traces | AFLNet |
| StateAFL [33] | State model | Coverage | Memory | Target binary | AFLNet |
| | | Coverage | Response | Target binary | AFL |
| | Message fields | Coverage | Response | PCAP file | AFL |
| LearnLib [38] | State model | N/A | Response | Set of messages | TTT [24] |
| Doupe et al. [16] | State model | N/A | Response | No input required | Web crawling |

Table 6. Evolutionary grammar learner fuzzers

| Fuzzer | Limitations | Uses | Input needed |
|---|---|---|---|
| AutoFuzz [20] | Cannot fuzz the message order | Passive learning [9][22] | Live traffic |
| Black-Box Live Protocol Fuzzing [44] | Cannot fuzz the message order; user needs to specify the fields to fuzz | N/A | Live traffic |
| SECFuzz [45] | Limited fuzzing of message order | N/A | Live traffic |

Table 7. Man-in-the-middle fuzzers
* Like the grammar learner fuzzers, AutoFuzz operates in two phases. Prior to the actual fuzzing, it starts with a passive learning phase to infer an approximation of the protocol state machine. For this AutoFuzz uses the same algorithm as Hsu et al. [22], i.e. partial finite state automaton reduction. So, as for Hsu et al., the user has to supply implementations of abstraction functions that map concrete messages to some abstract alphabet. During the fuzzing, AutoFuzz then uses its knowledge of the protocol state machine to guide the input selection.
* Black-Box Live Protocol Fuzzing uses a function to generate the message field specification from a PCAP file, but the user is required to choose the fields of the messages to fuzz.
* SECFuzz is able to deal with the cryptography of the protocol. To do that, the client has to share with the fuzzer (through a log file) all the information necessary for decryption.

### Machine learning fuzzers

Table 8 presents the machine learning fuzzers. As shown in Fig. 8, like the grammar learner and the evolutionary grammar learner fuzzers, these fuzzers require a set of traces, used as a dataset to train a machine learning model. Once trained, the machine learning model is able to output traces that differ slightly from legitimate, correct traces. Like the man-in-the-middle fuzzers, the machine learning fuzzers observe protocol executions that follow the happy flow. This causes a dataset that is unbalanced in favour of the correct traces, and it makes the model unable to output traces with messages in the wrong order. Although these fuzzers all use a machine learning model - trained on real protocol executions - to produce the traces forwarded to the SUT, they employ different strategies. GANFuzz (Gan, 2017) uses a generative adversarial network (GAN) and an RNN (recurrent neural network), while the fuzzer by Fan et al. (2019) and SeqFuzzer (Serban et al., 2019) use seq2seq. We refer to the review by Wang et al.
(2019) for a more exhaustive explanation of fuzzing using machine learning.

| Fuzzer | Based on | Input needed |
|---|---|---|
| GANFuzz (Gan, 2017) | seq-gan model | Traces |
| Machine Learning for Black-box Fuzzing of Network Protocols (Gan, 2017) | seq2seq model | Traces |
| SeqFuzzer (Serban et al., 2019) | seq2seq model | Traces |

Table 8. Machine learning fuzzers

## 5. Generic Fuzzer Improvements

Irrespective of the category of fuzzer, there are some generic improvements that several fuzzers include.

#### Pre-processing of raw network traffic

Many fuzzers take raw network traffic in the form of a PCAP file as input and provide some automated pre-processing of that input. Each tool implements this in its own way, but it includes some common ingredients, such as chopping up the traces to extract the individual messages, to then cluster similar messages or recognize specific fields in the messages.

#### Using snapshots

One factor that makes fuzzing of stateful systems slow is that a fuzzer often needs to repeat a sequence of inputs to get the SUT into a particular state, to then start the actual fuzzing in that program state. To avoid this overhead, some fuzzers (Serban et al., 2019; Wang et al., 2019; Wang et al., 2019) use snapshots (aka checkpoints) to capture the program state of the SUT at a particular point in time, to then be able to quickly re-start fuzzing from that point on. (The same idea is behind the use of forking by AFL, where even for stateless SUTs it has been shown to improve performance.) This can speed up fuzzing, as the initial trace to reach some specific state does not have to be repeated, but taking and re-starting snapshots also introduces overhead, so in the end it may not be faster. Depending on the execution platform, there are different snapshotting techniques that can be used.
For instance, FitM and SNPSFuzzer use CRIU's userspace snapshots and nyx-net uses hypervisor-based snapshots. For SNPSFuzzer different snapshotting technologies have been compared (Shi et al., 2017): CRIU 9, DMTCP (Dai et al., 2018), BLCR 10, and OpenVZ 11. Footnote 9: [https://github.com/checkpoint-restore/criu](https://github.com/checkpoint-restore/criu) Footnote 10: [https://github.com/angelos-se/blcr](https://github.com/angelos-se/blcr) Footnote 11: [https://openvz.livejournal.com](https://openvz.livejournal.com) #### Mutation primitives and heuristics Any fuzzer that uses some form of mutation (of individual messages or of traces) can use a variety of strategies and primitives to do this. For individual messages this may include random bit-flipping, deleting some parts of a message or inserting some data. For traces (as opposed to individual messages) interesting mutation primitives are of course removal, insertion, or repetition of messages. The fuzzers we discussed come with a variety of primitives for all this. Some offer possibilities for the user to provide their own custom mutators. We have not gone into the details of this, as the focus was on understanding the overall approach. Some fuzzers, notably SNOOZE, PROTOS, SPFuzz, SGPFuzzer, and the fuzzer by Hsu et al. (2019), provide more advanced heuristics and tricks for mutations than some of the others. For example, SNOOZE can provide mutations to try out SQL or command injection or use specific numbers to test boundary conditions. SPFuzz distinguishes different types of data inside messages (e.g. headers vs payloads) to then use different mutation strategies for specific types of data. In practice it may of course make a big difference for a particular case study which mutation primitives or heuristics are used. ## 6. Conclusions It took us quite some effort to disentangle the ways that various fuzzers for stateful systems work and arrive at the classification we presented.
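As a concrete illustration of the mutation primitives discussed above, here is a minimal Python sketch of message-level (bit flip, byte deletion/insertion) and trace-level (remove, repeat, swap) mutators. The function names and the exact set of primitives are our own; they are not taken from any specific fuzzer.

```python
import random

def mutate_message(msg: bytes, rng: random.Random) -> bytes:
    """Apply one random message-level mutation: bit flip, deletion, or insertion."""
    data = bytearray(msg)
    choice = rng.choice(["flip", "delete", "insert"])
    if choice == "flip" and data:
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)       # flip a single bit
    elif choice == "delete" and len(data) > 1:
        del data[rng.randrange(len(data))]     # drop one byte
    else:
        i = rng.randrange(len(data) + 1)
        data.insert(i, rng.randrange(256))     # insert a random byte
    return bytes(data)

def mutate_trace(trace: list, rng: random.Random) -> list:
    """Apply one random trace-level mutation: remove, repeat, or swap a message."""
    trace = list(trace)
    choice = rng.choice(["remove", "repeat", "swap"])
    if choice == "remove" and len(trace) > 1:
        trace.pop(rng.randrange(len(trace)))
    elif choice == "repeat":
        i = rng.randrange(len(trace))
        trace.insert(i, trace[i])              # repeat a message in place
    elif len(trace) > 1:
        i = rng.randrange(len(trace) - 1)
        trace[i], trace[i + 1] = trace[i + 1], trace[i]
    return trace
```

A real fuzzer would combine many such primitives with scheduling heuristics; this sketch only shows the shape of the two mutation levels.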
It seems like every fuzzer picks another set of building blocks, combines them in its own way, and then adds some ad-hoc heuristics and possibly performance optimisations. New fuzzers are typically evaluated on some case studies and then compared with some other fuzzers, but it is hard to draw broader conclusions that then go beyond a particular case study or a particular pair of fuzzers. This underlines the importance of initiatives such as ProFuzzBench (PuzzBench, 2018) for benchmarking stateful fuzzing approaches. Benchmarking has also been pointed out as a challenge for fuzzers in general, not just for stateful fuzzing (Gardner et al., 2018). We have noted some apparently contradictory observations - though this may simply be because researchers looked at different case studies. For instance, Shu et al. (2018) abandoned the use of active learning of the protocol state machine using L\({}^{*}\) (or its variants) because they found it too slow and inaccurate, while in other research this has proved to be very useful in finding security flaws (Shou et al., 2018). It is not surprising that the performance of a fuzzer may depend heavily on the case study. When fuzzing a stateful system there is a trade-off between a) trying out many variations of individual messages and b) trying out many different _sequences_ of messages. The complexity of an application (and hence the likely problem spots) may lie more in the message format or more in the protocol state machine; a corresponding strategy when fuzzing, focusing more on a) or on b), is then most likely to find bugs. Very broadly we can make a rough distinction into three classes of tools, illustrated in Fig.
9: I) fuzzers that are very good at aggressively exploring the protocol state machine but poor at trying out variations of messages; III) fuzzers that are good at trying out variations in messages but poor at exploring the protocol state machine; and II) fuzzers which try to explore both the protocol state machine and the format of individual messages. There is some relation between this classification and the seven categories we have described. For instance, the man-in-the-middle fuzzers and the machine learning fuzzers are in class III, as they do not explore the protocol state machine and mainly (or even exclusively) stick to message sequences observed in real communications between client and server. The grammar-based fuzzers can deal quite well with both dimensions of fuzzing so are in class II. Evolutionary-based fuzzers that try to infer the protocol state machine (typically using the response of the SUT as feedback mechanism) are good at exploring the protocol state space, but may lack mutation primitives or observation mechanisms to aggressively explore the message formats. LearnLib is an extreme instance of class I as it _only_ fuzzes the message order. The exact positioning of tools in Figure 9 is not based on experimental data, but more informally based on the general characteristics of the tools, so should be taken with a pinch of salt. Also, for tools that require grammars as input or manual code annotation a lot will depend on the quality of these. It may seem like fuzzers of type II are the best of both worlds, but given the rapid state space explosion when we fuzz both individual messages and sequences of messages this need not be the case: Using a fuzzer of type I and a fuzzer of type III to explore different aspects may be more effective than using one fuzzer of type II that tries to do both. Figure 9. 
Cluster of fuzzers. For fuzzing of non-stateful systems it has already been demonstrated that using a combination of tools may be the optimal approach, especially if these tools can exchange information [12]; we expect that this will be even more so for stateful systems. By providing insight into the components used in various fuzzing approaches, our research suggests several interesting directions for future research. One direction is in trying out new combinations of approaches and components, for example, using LearnLib as a pre-processing phase may be useful to get a good initial approximation of the protocol state machine, or using the SUT response as feedback in man-in-the-middle fuzzers to build a more accurate protocol state model. Some of the performance optimisations implemented by specific fuzzers (e.g. the use of snapshots) can be applied to a broader set of fuzzers. Another direction is in more systematic, empirical comparison: having identified that some tools use the same overall approach but different algorithms for some sub-component allows a more systematic comparison, where we just observe the effect of changing this one sub-component.
2308.11859
Example-Based Framework for Perceptually Guided Audio Texture Generation
Controllable generation using StyleGANs is usually achieved by training the model using labeled data. For audio textures, however, there is currently a lack of large semantically labeled datasets. Therefore, to control generation, we develop a method for semantic control over an unconditionally trained StyleGAN in the absence of such labeled datasets. In this paper, we propose an example-based framework to determine guidance vectors for audio texture generation based on user-defined semantic attributes. Our approach leverages the semantically disentangled latent space of an unconditionally trained StyleGAN. By using a few synthetic examples to indicate the presence or absence of a semantic attribute, we infer the guidance vectors in the latent space of the StyleGAN to control that attribute during generation. Our results show that our framework can find user-defined and perceptually relevant guidance vectors for controllable generation for audio textures. Furthermore, we demonstrate an application of our framework to other tasks, such as selective semantic attribute transfer.
Purnima Kamath, Chitralekha Gupta, Lonce Wyse, Suranga Nanayakkara
2023-08-23T01:29:46Z
http://arxiv.org/abs/2308.11859v2
# Example-Based Framework for Perceptually Guided Audio Texture Generation ###### Abstract Generative models for synthesizing audio textures explicitly encode controllability by conditioning the model with labelled data. While datasets for audio textures can be easily recorded in-the-wild, semantically labeling them is expensive, time-consuming, and prone to errors due to human annotator subjectivity. Thus, to control generation, there is a need to automatically infer user-defined perceptual factors of variation in the latent space of a generative model while modelling unlabeled textures. In this paper, we propose an example-based framework to determine vectors to guide texture generation based on user-defined semantic attributes. By synthesizing a few synthetic examples to indicate the presence or absence of a semantic attribute, we can infer the guidance vectors in the latent space of a generative model to control that attribute during generation. Our results show that our method is capable of finding perceptually relevant and deterministic guidance vectors for controllable generation for both discrete as well as continuous textures. Furthermore, we demonstrate the application of this method to other tasks such as selective semantic attribute transfer. Audio Textures, Controllability, Analysis-by-Synthesis, Gaver Sounds, StyleGAN, Latent Space Exploration ## I Introduction Audio textures are sounds generated by the superposition of multiple similar acoustic events [1, 2], such as the sounds made by water filling a container or a wooden drumstick repeatedly hitting a metallic surface. Guided or controllable generation of such audio textures using deep neural networks is useful for generating background environmental sound scores for movies, games, and automated Foley sound synthesis [3, 4, 5]. Such guidance during generation is usually achieved by conditioning generative models using semantically labeled data.
While large datasets for audio textures can be readily recorded in-the-wild, semantically labeling these kinds of sounds is difficult. There is thus a need for generative models to infer the vectors for semantic attribute guidance automatically, without the supervision of large labeled datasets. Generative adversarial networks (GANs) [6] such as StyleGANs [7, 8] generate disentangled latent spaces by learning the most statistically significant factors of variation within a dataset. Algorithms in the image domain [9, 10] statistically analyse such latent spaces to automatically find guidance vectors for controllable generation. Ideally, controllable generation should operate at a perceptual or a human-understandable semantic level [11]. We define semantic attributes in audio as a set of factors that matter to human perception of sound. Such attributes may not always have a one-to-one correspondence to the statistically found guidance vectors. Thus, sound design processes would be greatly improved by intuitive user-defined semantic guidance vectors for controllable audio texture generation. Recent research has focused on describing semantic attributes to guide audio generation using text or language abstractions [12, 13, 14, 15]. This multi-modal guidance is achieved by training generative models on large audio datasets in conjunction with text annotations such as captions [16, 17]. While abstracting control using such language-based prompts is a step towards generalized sound generation, it is difficult to use text for fine-grained and continuous controls required for applications of audio texture synthesis. In this paper, we propose using audio examples to guide latent space access and navigation. Similar to music information retrieval (MIR) techniques such as query-by-example [18, 19, 20, 21] and query-by-humming [22, 23, 24, 25], we generate synthetic sound examples representative of the semantic attribute we want to control during generation.
We encode these examples into the latent space of a GAN unconditionally trained on real-world audio textures. Then we use these latent embeddings to define guidance vectors in the latent space along which desired semantic attributes can be systematically varied during texture generation. As shown on our webpage1, we use these guidance vectors to guide texture generation for various user-defined semantic attributes such as "Brightness", "Rate" or "Impact Type" for discrete impact sounds and "Fill-Level" for the continuous texture of water filling. Footnote 1: [https://augmented-human-lab.github.io/audio-guided-generation/](https://augmented-human-lab.github.io/audio-guided-generation/) We validate the effectiveness of our method for user-defined semantic guidance of texture generation through a comprehensive attribute rescoring analysis. We also conduct perceptual listening tests to evaluate the effectiveness of our method in changing specific attributes for various randomly generated sounds. In summary, our contributions are: * An Example-Based Framework (EBF) to find user-defined attribute guidance vectors to semantically control audio texture generation. * A synthetic audio query approach for latent space exploration of a generative model. * An application of our framework for the task of semantic attribute transfer between textures. ## II Related Work ### _Supervised Controllability in Audio_ Generative models for audio [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36] enable controllability by training on datasets with labels that help organize the latent space according to the timbre-specific features in the datasets. Some datasets, especially in the area of music generation, are labelled during dataset creation [28], others are weakly labelled by crowdsourcing [37] or are inferred from the audio samples themselves. GANSynth [26] uses instrument identifiers and musical pitch conditioning to generate and interpolate between various musical instrument timbres. 
DDSP [27] integrates classic signal processing algorithms with deep neural networks to build differentiable synthesizers with latent representations for attributes such as pitch or loudness to guide generation. In DrumGAN [38], a Progressive GAN (PGAN) [39] is trained on drum sounds labeled with features extracted automatically using Audio Commons [40] and Essentia [41]. Features such as hardness, sharpness or warmth of the sound can then be used to control generation. In DarkGAN [42], a Progressive GAN is trained on soft labels distilled from an audio tagging classifier [43] trained on tags from the AudioSet ontology. Esling et al. [44, 45] construct generative timbre spaces regularized using similarity ratings collected via perceptual studies that can be used for controllable, descriptor-based sound synthesis. In the domain of audio texture generation, MorphGAN [46] trains on floating-point class and parameter labels generated from the penultimate layer of a classifier which can then generate smooth inter-class and intra-class morphs. The Sound Model Factory [47] trains a GAN which is used to create novel timbres followed by an RNN trained on sounds produced from the GAN and conditioned on points along smoothly parameterized trajectories through the GAN latent space. All of these supervised training methods rely on additional class or parametric information while training generative algorithms. Since GANs, particularly StyleGANs, are able to disentangle the latent space based on semantics in the training data [7], our research explores finding user-defined semantic directions to guide generation without the need for any explicit conditioning or labelled data during training. ### _Unsupervised Controllability_ In computer vision, algorithms such as [9, 10, 48, 49, 50, 51] leverage StyleGANs' ability to disentangle the latent space to find directional vectors for editing semantics on images.
Similarly in audio, GANSpaceSynth [52] applies the GANSpace algorithm to control a pre-trained GANSynth trained on musical instruments in an unsupervised manner. More recently, in computer vision, Semantic Factorization (SeFa) [10] was shown to perform better than other unsupervised algorithms to find vectors for controllable generation in the latent space of a pre-trained GAN. In this method, the weights of the layers that create the disentangled representation are decomposed to find the vectors for maximum variation. Such vectors are then used to edit semantics on unconditionally generated images. However, the directional vectors generated using SeFa need to be semantically labelled manually after observing edits across multiple samples. For speech and music, [53, 54] infer controllability based on supervision from a few labels. For images, FLAME [55] uses supervision from a few positive-negative image pairs by semantically editing and inverting real images in a StyleGAN's latent space. Direction vectors for semantic attribute editing are found by optimizing for cosine similarity between the pairs' difference vectors. In our work, we modify the FLAME method for audio textures and propose using a few fully synthetically generated examples to assist in deriving vectors in the latent space of a GAN for attribute controllability. A cluster of similar synthesized audio examples is inverted [56, 57] to define clusters in the GAN's latent space. A prototype [58] latent vector is derived from each cluster, and is an abstract average of the semantic cluster it represents. Since such prototypes are designed to differ in a specific attribute, the difference vector between them in the latent space can be used for guiding audio texture synthesis and for semantic attribute transfer.
### _Synthetic Texture Generation_ While real-world sounds could also be inverted to find latent representations in a trained GAN, they are much more difficult to control than parametric acoustic sound synthesizers [59, 60, 61, 62] or physics-based models [63]. For our inharmonic textures, we use a physically informed synthesis technique found in William Gaver's seminal work on auditory perception [64, 65]. His approach is based on the idea that humans hear and describe sound events in terms of their sources and source attributes better than in terms of acoustic properties of the sounds themselves. The sound events in Gaver sounds are modeled on the physics of the objects interacting to produce the sound such as the hardness of the material under impact or the force of impact. Gaver [64] refers to analysis-by-synthesis as a process of updating synthesis parameters to match a target sound, which we use to discover GAN latent vectors with synthetic audio queries. Although algorithmically synthesized sounds can sound unnatural, we employ them only for querying and searching the latent space of a GAN. Multi-event synthetic textures can be quickly and easily generated using an analysis-by-synthesis approach that have attributes adequate for this exploration task. 
## III Proposed Framework As shown in Figure 1 a, b and c, we partition our goal to find semantic attribute vectors for controllable texture generation and propose a framework comprised of the following modules: * A Generator module (\(\mathbf{G}\)) of a GAN trained on real-world audio for high fidelity texture synthesis, * A GAN Encoder (\(\mathbf{E}\)), also known as a GAN inversion network, to encode an audio example into the latent space of a pre-trained GAN, * A parametric Gaver synthesizer for sounds used to locate desired points in the latent space of the GAN, * An algorithm to derive semantic attribute clusters and prototype vectors for guiding semantic synthesis trajectories in the latent space of the GAN. Figure 1 illustrates our framework (during inference). G is a GAN generator and E is the GAN Encoder. (a) We generate synthetic Gaver sounds for a semantic attribute we want to control. In the diagram above we demonstrate this using "Rate" or number of impact sounds in a sample as the semantic attribute. We encode these synthetic sound examples into the latent space of a GAN to find their \(w\)-embeddings. (b) Next, we derive the semantic attribute clusters and generate prototypes using the algorithms elaborated in section III-D. The direction vector to guide generation for that semantic concept is indicated by "d". (c) Shows how we can use direction vector "d" to guide generation on any randomly generated audio sample to increase or decrease "Rate". ### _GAN for Audio Textures_ While our framework can be applied to derive attribute guidance vectors for both Progressive GANs or StyleGANs for audio, in this paper we demonstrate this using StyleGAN2 [7] trained on audio textures. A StyleGAN2's generator can be modeled as a function \(G(.)\) that maps a \(\delta-\)dimensional latent space \(z\subseteq Z,z\in\mathbb{R}^{\delta_{z}}\) to the higher dimensional spectrogram space \(s\subseteq\mathcal{S},s\in\mathbb{R}^{f\times t}\), such that \(s=G(z)\). 
Here \(\delta_{z}\) is the dimensionality of the \(Z\) space and \(f\), \(t\) are the number of frequency channels and time frames of the generated spectrogram respectively. StyleGANs further learn an intermediate representation \(w\subseteq\mathcal{W},w\in\mathbb{R}^{\delta_{w}}\) between that of \(Z\) and \(\mathcal{S}\) via a mapping network \(G_{m}(.)\) which further disentangles factors of variation as compared to the latent \(Z\) space. Further, a synthesis network \(G_{s}(.)\) maps the \(w\) vector to the spectrogram \(\mathcal{S}\) space. Our framework and method operate in the \(\mathcal{W}\) space to find semantically meaningful directions for controllability. ### _GAN Encoder_ While GANs learn to map latent space embeddings to real-world sounds, GAN inversion techniques learn the inverse mapping, i.e., from the real-world sounds to the latent space embeddings. We adapt the encoder model from [57] to estimate a \(w\) vector from an audio spectrogram randomly sampled from a pre-trained StyleGAN2. Previously, [66, 67] have shown that masking techniques are effective while learning generalized vector representations for audio. We extend this idea of arbitrarily masking the spectrogram to learn a \(w\) vector representation. This approach is especially useful during inference to generalize the encoder to synthetic Gaver sounds. It assists in projecting the synthetic sounds into a reasonable part of the latent space even though the encoder (or the GAN) is not directly trained on these sounds. For noisy textures, such as the sounds made by water filling a container, we further employ amplitude thresholding of the spectrogram during training. This thresholding ensures that the encoder ignores the low level noise and focuses on the most prominent events and frequencies in the spectrogram while estimating the \(w\) vector.
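To make the thresholding-and-masking idea concrete, here is a small numpy sketch for a log-magnitude spectrogram in dB. The mask fraction and the use of the spectrogram minimum as the floor value are our assumptions for illustration; the text only specifies the dB thresholds.

```python
import numpy as np

def preprocess_spectrogram(spec, threshold_db=-17.0, mask_frac=0.2, rng=None):
    """Amplitude-threshold a log-magnitude spectrogram (shape: [freq, time])
    and randomly mask a fraction of its time frames."""
    rng = rng or np.random.default_rng(0)
    out = spec.copy()
    floor = spec.min()
    out[out < threshold_db] = floor        # ignore low-level noise
    n_frames = out.shape[1]
    n_mask = int(mask_frac * n_frames)
    cols = rng.choice(n_frames, size=n_mask, replace=False)
    out[:, cols] = floor                   # arbitrary masking of time frames
    return out
```

In training, the encoder would then be asked to recover the same \(w\) vector from the degraded spectrogram, which is what encourages robustness to unseen synthetic queries.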
To train the Encoder, we modify the loss function from [57] to estimate only in the \(\mathcal{W}\) space instead of \(Z\) as: \[\mathcal{L}=\mathbb{E}_{z\sim\mathcal{N}(0,1),w=G_{m}(z),s=G_{s}(w)}[\|s-G_{s}(E(s))\|^{2}_{2}\\ +\|w-E(s)\|^{2}_{2}] \tag{1}\] In Equation 1, \(G_{m}(z)\) is the mapping network, \(G_{s}(w)\) is the synthesis network of the StyleGAN2, \(E(s)\) is the Encoder that inverts the spectrogram \(s\) to the \(\mathcal{W}\) space. While training the encoder, we randomly sample a \(z\) from the \(Z\) space to generate the target spectrogram \(s\) using \(G(z)\). We estimate the \(w\) for this spectrogram using the encoder \(E(s)\). For the first loss term, we pass the inverted \(w\) through the synthesis network of the generator \(G_{s}(w)\) and find the mean squared error (MSE) loss between the original and reconstructed samples. The second term is the MSE loss between the actual and the estimated \(w\) vector. The loss function of the original Encoder algorithm [57] additionally used a perceptual similarity loss term called LPIPS [68] that calculates the distance between image patches in order to preserve the perceptual similarity of the estimated images. Unlike images, the spectrograms of audio textures are characterized by temporal similarity due to the parameters that remain constant over time [1]. In our experiments, in addition to Equation 1, to preserve the temporal structure of the generated sounds, we use a Gram Matrix feature based autocorrelation loss term suggested by Antognini et al. [69] instead of LPIPS, computed between the autocorrelations \(\mathcal{A}\) and \(\hat{\mathcal{A}}\) of the feature maps of the target and reconstructed spectrograms respectively - \[\mathcal{L}_{\text{autocor}}=\frac{\sum_{k,\tau}(\mathcal{A}^{k}_{\tau}-\hat{\mathcal{A}}^{k}_{\tau})^{2}}{\sum_{k,\tau}(\mathcal{A}^{k}_{\tau})^{2}}\\ \text{where }\mathcal{A}^{k}_{\tau}=\mathcal{F}^{-1}_{\tau}\left[\mathcal{F}_{t}[F^{k}_{\mu}]\mathcal{F}_{t}[F^{k}_{\mu}]^{*}\right] \tag{2}\] Fig. 1: A schematic diagram outlining our framework during inference.
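The autocorrelation in Equation 2 can be computed efficiently with FFTs via the Wiener-Khinchin theorem. A numpy sketch follows; the array layout ([feature map, time]) and function names are our own assumptions.

```python
import numpy as np

def autocorrelation(feature_maps):
    """A[k, tau]: circular autocorrelation of each feature map along time,
    computed as the inverse FFT of the power spectrum (Wiener-Khinchin)."""
    F = np.fft.fft(feature_maps, axis=-1)
    return np.real(np.fft.ifft(F * np.conj(F), axis=-1))

def autocor_loss(target_maps, generated_maps):
    """Normalized squared difference between the two autocorrelations,
    following the form of Equation 2."""
    A_t = autocorrelation(target_maps)
    A_g = autocorrelation(generated_maps)
    return np.sum((A_t - A_g) ** 2) / np.sum(A_t ** 2)
```

Note that the lag-0 term of each autocorrelation equals the energy of that feature map, so the loss is scale-aware as well as structure-aware.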
In Equation 2, \(F_{\mu}^{k}\) is the \(k\)th feature map for filter \(\mu\), \(\mathcal{A}_{\tau}^{k}\) is the autocorrelation of the \(k\)th feature map, \(\mathcal{F}\) represents the Fourier transform and * represents its complex conjugation. As shown in [69], such an autocorrelation loss assists in preserving the periodic structure in the resulting sounds. ### _Synthesizing Gaver Sounds_ To generate audio examples for querying the GAN latent space, we use two Gaver synthesis methods - (1) based on physical parameters of the interacting objects, and (2) based on object resonance as a series of band pass filters. The first method is useful in generating sharp impact sounds or dripping sounds, and the second for producing a larger variety of impacts and scraping sounds. More formally, a synthetic impact sound can be described as \(F(t)=\sum_{n}\phi_{n}e^{-\delta_{n}t}\cos\omega_{n}t\), where \(F(t)\) describes the generated sound, \(\phi_{n}\) is the amplitude of the \(n^{th}\) partial, \(\delta_{n}\) is a damping constant, and \(\omega_{n}\) is the frequency of the partial; \(\sum\) signifies a sum over the total number of partials. From an ecological perspective, each component in the equation controls a physical aspect of the objects interacting to generate the impact sound. For instance, \(\delta\) in the equation controls the material hardness, \(\phi\) controls the force of impact, and \(\omega\) and \(n\) control the size of the object. With method 2, impact and scraping sounds are created by passing Gaussian noise \(\mathcal{N}(0,I)\) through band-pass and fade filters. The amplitude of the impact sound governs the force of impact, while the frequency bands together with damping provided by linear or exponential fade filters govern the material of impact of the sound.
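A runnable sketch of the first synthesis method, a sum of exponentially damped sinusoids. The partial parameters (amplitudes \(\phi_n\), dampings \(\delta_n\), frequencies \(\omega_n\)) below are invented for illustration, not taken from the paper.

```python
import numpy as np

def gaver_impact(partials, sr=16000, duration=0.5):
    """Synthesize an impact as F(t) = sum_n phi_n * exp(-delta_n * t) * cos(omega_n * t).
    `partials` is a list of (phi, delta, omega) triples."""
    t = np.arange(int(sr * duration)) / sr
    signal = np.zeros_like(t)
    for phi, delta, omega in partials:
        signal += phi * np.exp(-delta * t) * np.cos(omega * t)
    return signal

# a "bright" metallic impact: high-frequency partials, light damping (illustrative values)
bright = gaver_impact([(1.0, 8.0, 2 * np.pi * 2000),
                       (0.5, 10.0, 2 * np.pi * 3150)])
# a "dull" wooden impact: low-frequency partials, heavy damping (illustrative values)
dull = gaver_impact([(1.0, 40.0, 2 * np.pi * 180),
                     (0.6, 60.0, 2 * np.pi * 320)])
```

Because the damping constants map to material hardness and the frequencies to object size, sweeping these parameters is how one can generate the small clusters of attribute-present and attribute-absent query sounds.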
We then cluster the sounds together in the \(\mathcal{W}\) space to generate prototypes as shown in Figure 2. Assume for example that we want to derive directional vectors to control the attribute of "Brightness" of an impact sound. We define brightness as an attribute to indicate the presence or absence of high frequency components in a sound. We generate a cluster of Gaver sounds where the semantic attribute is present (represented by blue dots in the figure) and another cluster of Gaver sounds where the semantic attribute is absent (or "dull" impact sounds represented by orange dots). We find the prototypes **P1** and **P2** representative of each semantic attribute cluster using Algorithm 1. The direction vector for editing the semantic attribute is given by the difference vector between **P1** and **P2**. To generate our prototypes, we adapt a technique from computer vision for generating Eigenfaces. We stack all \(w\) embeddings for the synthetic Gaver samples in a semantic cluster together as columns of a matrix and perform singular value decomposition and select the component associated with the maximum singular value to construct the prototype. The intuition behind doing this is that after decomposition the component with the highest singular value has the most common prominent feature amongst all the samples being analyzed, i.e. the semantic attribute being modelled. Constructing an "average" prototype this way is more robust to outliers or artificial synthesis artefacts. The direction vector for edits is the difference between the \(w\) embeddings of the two prototypes as shown in Algorithm 2. This direction vector can be used to continuously and sequentially edit the semantic attribute of any randomly generated sample. 
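A numpy sketch of the Eigenfaces-style prototype construction and direction vector described above. The rescaling by the top singular value and the sign fix are our assumptions; the paper's Algorithm 1 may differ in detail.

```python
import numpy as np

def get_prototype(w_embeddings):
    """Stack w embeddings as columns and take the left singular vector with
    the largest singular value as the cluster prototype."""
    W = np.stack(w_embeddings, axis=1)          # shape: (dim_w, n_examples)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    proto = U[:, 0] * S[0]                      # dominant component, rescaled
    if proto @ W.mean(axis=1) < 0:              # fix SVD sign ambiguity
        proto = -proto
    return proto

def get_direction(cluster_present, cluster_absent):
    """Guidance vector: difference between the two cluster prototypes."""
    return get_prototype(cluster_present) - get_prototype(cluster_absent)
```

Taking the dominant singular component, rather than a plain mean, is what makes the prototype robust to outliers and synthesis artefacts in individual query sounds.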
``` Input: \(List(s_{1_{n}})\) is a list of synthetic spectrograms indicating attribute presence (for P1 prototype), \(s_{1_{n}}\in\mathbb{R}^{f\times t}\); \(List(s_{2_{n}})\) is a list of synthetic spectrograms indicating attribute absence (for P2 prototype), \(s_{2_{n}}\in\mathbb{R}^{f\times t}\); \(w\_avg\) is the centre of mass of \(\mathcal{W}\) space; Output: \(direction\) attribute guidance vector Function\(GetDirection\)(\(List(s_{1_{n}})\), \(List(s_{2_{n}})\), \(w\_avg\)): \(\{w_{1_{0}},..w_{1_{n}}\}\gets E(s_{1_{0}},..s_{1_{n}})\) ; \(\triangleright\) encode into \(\mathcal{W}\) \(\{w_{2_{0}},..w_{2_{n}}\}\gets E(s_{2_{0}},..s_{2_{n}})\); \(w_{1_{prototype}}\gets GetPrototype(\{w_{1_{0}},..w_{1_{n}}\})\); \(w_{2_{prototype}}\gets GetPrototype(\{w_{2_{0}},..w_{2_{n}}\})\); \(direction\gets w_{1_{prototype}}-w_{2_{prototype}}\); return direction ``` **Algorithm 2** Get Attribute Guidance Vector
## IV Experiments ### _Datasets_ We use two audio texture datasets in our experiments: (1) The Greatest Hits dataset [70] to demonstrate the effectiveness of our approach on discrete impact sounds and (2) a Water filling a container dataset [71] for continuous and noisy audio textures. Through these two datasets we demonstrate the effectiveness of our method to cover a range of discrete and continuous textures. Fig. 2: Schematic for generating semantic attribute clusters, prototypes and the direction vector. #### III-A1 The Greatest Hits Dataset This dataset contains audio and video recordings of a wooden drumstick probing indoor and outdoor environments by hitting, scraping, and poking different objects of different material densities. We use this dataset to explore the rich timbres arising out of the interactions between the wooden drumstick and various hard and soft surfaces such as tree trunks, dirt, leaves, metal cans, ceramic mugs, carpets, soft cushions etc.
The dataset contains approximately 10 hours of denoised audio split into 977 audio files, each approximately 35 seconds long. Each file contains impact sounds from interactions with different types of objects. We split the audio files into consecutive 2-second sounds sampled at 16kHz to unconditionally train our StyleGAN2. We develop semantic attribute clusters, prototypes and attribute guidance vectors for the attributes _Brightness_ (whether the sound contains mostly high-frequency components, or is dark or dull, containing mostly low-frequency components), _Rate_ (whether the number of impact sounds in a sample is high or low), and _Impact Type_ (whether the sounds are sharp impacts or scraping/scratchy sounds made by dragging the stick across the surface).

#### Iii-A2 Water filling a container

This dataset [71] contains 50 audio recordings of water filling a container at an approximately constant rate, with an average duration of \(\sim\)30 seconds. We develop semantic attribute clusters, prototypes and attribute guidance vectors for the continuously varying attribute of _Fill-Level_ of the container. We sample the recorded audio files using a sliding window of 100ms to generate approximately 10,000 2-second audio files sampled at 16kHz to unconditionally train our StyleGAN2. We choose a small sliding-window size of 100ms to achieve better interpolability [11] for _Fill-Level_ in the \(\mathcal{W}\) space of the GAN.

### _Implementation Details_

_StyleGAN2:_ We set the \(Z\) and \(\mathcal{W}\) space dimensions \(\delta_{z}\) and \(\delta_{w}\) both to \(128\) and use 4 mapping layers in the Generator for all our experiments. Further, we use log-magnitude spectrogram representations generated using a Gabor transform [29] (n_frames\(=256\), stft_channels\(=512\), hop_size\(=128\)), i.e. a Short-Time Fourier Transform (STFT) with a Gaussian window, to train the StyleGAN2, and Phase Gradient Heap Integration (PGHI) [72] for high-fidelity spectrogram inversion of textures to audio [73].
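As a minimal sketch (not the authors' implementation), a Gaussian-window log-magnitude STFT approximating the Gabor transform above can be computed with SciPy. The Gaussian standard deviation and the padding behaviour, and hence the exact frame count, are assumptions:

```python
import numpy as np
from scipy.signal import get_window, stft

SR = 16000    # 2-second clips sampled at 16 kHz
N_FFT = 512   # "stft_channels" in the paper's Gabor-transform settings
HOP = 128     # hop_size

def log_magnitude_spectrogram(audio: np.ndarray) -> np.ndarray:
    # An STFT with a Gaussian window approximates a Gabor transform.
    win = get_window(("gaussian", N_FFT / 8), N_FFT)
    _, _, z = stft(audio, fs=SR, window=win, nperseg=N_FFT, noverlap=N_FFT - HOP)
    return np.log(np.abs(z) + 1e-6)  # log-magnitude, floored for stability
```

Inversion back to audio would then use PGHI for phase reconstruction, as cited above, since the phase is discarded here.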
For training the generator and discriminator of the StyleGAN2 we use an Adam optimizer with a learning rate of \(0.0025\), \(\beta_{1}\) as \(0.0\) and \(\beta_{2}\) as \(0.99\).

_Encoder Training:_ We use a ResNet-34 [74] backbone as the architecture for our GAN Encoder network. We use an amplitude threshold of -17dB for Water and -25dB for the Greatest Hits; that is, we mask the frequency components with magnitude below -17 or -25dB for the respective datasets. We use an Adam optimizer to train the Encoder with a learning rate of \(0.00001\), \(\beta_{1}\) as \(0.5\), and \(\beta_{2}\) as \(0.99\).

_Gaver Sound Synthesis:_ In all our experiments, we use 10 synthetic Gaver examples (5 per semantic attribute cluster) to generate the guidance vectors for controllable generation. We outline a cluster-based analysis of real and synthetic sounds using UMAP visualizations on our supplementary webpage.

### _Evaluation metric_

For audio quality we utilize the _Fréchet Audio Distance_ (FAD) [75] metric. FAD is the distance between the distributions of embeddings of real and synthesized audio data, extracted from a pre-trained VGGish model. We utilize this metric to evaluate the quality of sounds generated by inverting the synthetic Gaver sounds as well as real-world sounds from the latent space of the GAN.

To evaluate the effectiveness of our method in changing a semantic attribute of a texture we perform _rescoring analysis_. By _rescoring_, we mean the change in accuracy scores reported by an attribute classifier before and after the change in the semantic attribute of a sound. For this, we train an attribute presence/absence classifier based on [76]. We then manipulate an attribute on some sounds and observe how the classifier score changes for those sounds. Further, we evaluate whether the attribute change occurs without modifying other attributes. For instance, when editing _Brightness_, we first analyse if the intended attribute of brightness changes.
We then analyse if other attributes such as _Rate_ change with it. As our datasets are unlabelled, to train the _rescoring analysis_ classifier we manually curate and label a small subset of sounds. To do this, we selected approximately 250 samples of 2-second sounds for each semantic attribute under consideration. This manual curation involved visually analysing the video and auditioning the associated sounds to detect the semantic attribute. For more details on the dataset curation please see our webpage. Note that this curated dataset is only used for quantitative analysis and is not used to train our GAN or Encoder models.

Fig. 3: (Top Row) Spectrogram examples of guided generation using our method based on change in the attributes of (a) Rate (increases L to R), (b) Impact Type (becomes scratchy L to R), and (c) Brightness (decreases L to R). Note that for each example, as one attribute changes the other attributes do not undergo a change. (Bottom Row) Examples of guided generation for water filling a container based on Fill-Level. Note how the Fill-Level and its respective frequency components gradually increase from L to R. All sounds can be auditioned on our webpage [https://augmented-human-lab.github.io/audio-guided-generation/](https://augmented-human-lab.github.io/audio-guided-generation/).

### _Baseline Selection_

We evaluate our method's effectiveness in finding user-defined attribute guidance vectors in the latent space of the GAN by comparing it with an unsupervised method for latent semantic discovery called closed-form Semantic Factorization (SeFa) [10]. SeFa decomposes the pre-trained weights of a GAN to find statistically significant vectors for guided generation. Although SeFa is relatively under-studied in the domain of audio, we use it as a baseline for comparison because, like our method, SeFa works on unconditionally trained GANs.
Given the novelty of our task of deriving guidance vectors in a post-hoc fashion, to the best of our knowledge the SeFa method is the state of the art in this regard, and we thus use it for comparison.

### _Experimental Details_

We first conduct ablation studies to understand the effects of the individual components of the loss functions outlined in Section III-B. We report this analysis using both _rescoring analysis_ and _FAD_ scores. Next, we study the impact of a change in one attribute on the other attributes under consideration. We then compare our method (EBF) with the SeFa method as a baseline. Finally, we qualitatively study the effectiveness of our method by conducting listening tests. We also present an interactive interface to demonstrate the practicality and effectiveness of our method. Figure 3 shows some spectrogram examples of guided generation using our method. The standard error of the means in all tables in this section was estimated by bootstrapping the samples over 100 iterations.

#### Iii-D1 Ablation Studies

We conduct two types of ablation studies in our paper: (1) to study the effect of different components of the loss function on the Encoder, and (2) to study the effect of the number of synthetic samples needed to create a semantic cluster.

_Ablating Encoder Loss Components_: We first study the effect of using LPIPS, autocorrelation, and MSE loss terms while training the Encoder. Table I shows the _rescoring_ accuracy scores for attribute changes for each type of Encoder; \((\uparrow)\) indicates higher values are better. For each attribute change we report the accuracy for the main attribute as well as the average change reported in the other attributes. Ideally, we would like the main attribute accuracy to be high and the change in other attributes to be low.
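Under one plausible reading of the metric (our sketch, not the authors' code), the rescoring accuracy and the bootstrap standard error reported in the tables can be computed as follows, with `classifier` standing in for the trained attribute presence/absence classifier:

```python
import numpy as np

def rescoring_accuracy(classifier, sounds_before, sounds_after,
                       n_boot: int = 100, seed: int = 0):
    # Fraction of edited samples whose binary attribute prediction changes,
    # with a bootstrap standard error over n_boot resamples (100 in the paper).
    before = np.array([classifier(s) for s in sounds_before])
    after = np.array([classifier(s) for s in sounds_after])
    changed = (before != after).astype(float)
    rng = np.random.default_rng(seed)
    boots = [rng.choice(changed, size=changed.size, replace=True).mean()
             for _ in range(n_boot)]
    return changed.mean(), float(np.std(boots))
```

The same routine, applied to attributes other than the edited one, gives the "average change in other attributes" columns.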
We find that the system with MSE+GM autocorrelation loss outperforms the one with MSE+LPIPS loss for the attributes brightness, rate, and fill-level, possibly because with the autocorrelation loss the Encoder is able to preserve the frequency components and the occurrence of sharp attack transients in the resulting sounds, but is unable to preserve the impulse width of the impact in the time domain. Further, given that the scope of our Encoder is to invert synthetic sounds to derive guidance vectors, we find that for the Greatest Hits dataset an Encoder with MSE only, with or without thresholding, works best and is sufficient compared to the others. Also, the average change in other attributes for both MSE-only Encoders is comparable (means are within one standard error of each other). For Water, an Encoder with MSE and thresholding works best.

Table II shows the FAD scores for GAN-generated sounds (column labelled GAN) and Encoder reconstructions for each type of Encoder, for both GAN-generated sounds and synthetic Gaver sounds. The FAD scores were computed on 10,000 randomly generated samples in comparison with the entire training set. We find that the Encoder trained using MSE only or MSE+thresholding outperforms the other systems in terms of the quality of the generated audio (FAD scores). Thus, based on this table and Table I, we choose the Encoder with MSE and thresholding as the best-performing Encoder for both datasets for the remainder of the paper.

_Ablating the Number of Gaver Samples_: Next, we study the effect of the number of Gaver samples used to find the guidance vectors for different attributes. We derive guidance vectors using different values of \(N\), from \(N\)=1 to \(N\)=5.
We observe that, as \(N\) increases, the effectiveness of the directional vector edits also increases. Also, such edits preserve other un-edited attributes better with higher \(N\). The samples with different \(N\)'s can be auditioned on our supplementary webpage.

TABLE I: Ablation Studies. Acc.(\(\uparrow\)): rescoring accuracy for the edited attribute (higher is better); Chng Others(\(\downarrow\)): average change in the other attributes (lower is better).

| | Brightness Acc.(\(\uparrow\)) | Chng Others(\(\downarrow\)) | Rate Acc.(\(\uparrow\)) | Chng Others(\(\downarrow\)) | Impact Type Acc.(\(\uparrow\)) | Chng Others(\(\downarrow\)) | Water Fill-Level Acc.(\(\uparrow\)) |
|---|---|---|---|---|---|---|---|
| EBF: MSE+LPIPS [68] | 0.53±0.08 | 0.24±0.04 | 0.64±0.08 | 0.35±0.07 | 0.69±0.07 | 0.20±0.04 | 0.60±0.1 |
| EBF: MSE+GM AutoCorrelation [69] | 0.79±0.06 | 0.21±0.06 | 0.85±0.06 | 0.45±0.09 | 0.46±0.07 | 0.49±0.07 | 0.77±0.09 |
| EBF: MSE (Ours) | 0.71±0.06 | 0.14±0.03 | 0.88±0.06 | 0.30±0.07 | **0.81**±0.04 | 0.24±0.4 | 0.27±0.1 |
| EBF: MSE (Ours) + Thresholding | **0.82**±0.05 | 0.21±0.06 | **0.89**±0.09 | 0.35±0.1 | 0.80±0.06 | 0.27±0.06 | **0.97**±0.04 |

TABLE II: FAD scores for GAN-generated sounds and Encoder reconstructions (\(\downarrow\): lower is better). The GAN column is identical across rows since it does not involve the Encoder. (GH = Greatest Hits.)

| | GH GAN(\(\downarrow\)) | GH GAN Recon.(\(\downarrow\)) | GH Gaver Recon.(\(\downarrow\)) | Water GAN(\(\downarrow\)) | Water GAN Recon.(\(\downarrow\)) | Water Gaver Recon.(\(\downarrow\)) |
|---|---|---|---|---|---|---|
| EBF: MSE+LPIPS [68] | 0.6 | 1.12 | 4.40 | 1.17 | 1.92 | 9.45 |
| EBF: MSE+GM AutoCorrelation [69] | 0.6 | 1.27 | 4.27 | 1.17 | 9.22 | 8.26 |
| EBF: MSE (Ours) | 0.6 | **0.72** | 4.61 | 1.17 | 1.59 | 11.77 |
| EBF: MSE (Ours) + Thresholding | 0.6 | 2.83 | **4.16** | 1.17 | **1.42** | **7.92** |

#### Vi-C2 Baseline Comparison

Table III reports the _rescoring analysis_ for each attribute using our method in comparison with SeFa. We report both the score for the change in the main intended attribute being edited and the average change in the other attributes; \((\uparrow)\) indicates higher values are better. For both datasets our method reports better accuracies for the change in the main attribute as compared to SeFa. We further report pairwise attribute-edit comparisons to study the effect of a change in one attribute on every other attribute individually. Tables IV and V show this for the Greatest Hits dataset and Table VI for the Water dataset.
For SeFa, since we do not know which vector (of the \(\delta_{w}\)=128 dimensions) edits a specific attribute, we report scores for edits performed by the vectors with the highest singular values (top 10 for Greatest Hits and top 3 for Water) for comparison in the table. \((\uparrow)\) indicates higher values are better, and scores highlighted with "*" indicate no significant differences (\(p>0.05\)). Each row indicates a semantic attribute manipulation using a specific guidance vector and each column evaluates how the scores changed for that attribute. The darkened cells in the table indicate the dimensions with the highest score for a semantic attribute (in that column). For both datasets, our method reports a significant change in the main attribute being manipulated. Further, we analyse whether each dimension or direction vector from both methods manipulates only a single attribute. For this, we perform a two-way t-test on the scores between any two SeFa dimensions. We particularly notice that for SeFa, the semantic attribute of _Impact Type_ is affected by at least two dimension vectors, namely Dimension 2 and Dimension 7 in Table V. This implies that methods such as SeFa may not always guarantee one-to-one correspondence between statistically found vectors for guidance and the semantic attributes of interest. Furthermore, the first dimension, associated with the largest singular value extracted using SeFa, does not correlate with any of the main perceptually varying attributes in either dataset. This implies that such automated methods are not always guaranteed to find vectors that control perceptually relevant attributes in the latent space of a generative model for audio.

#### Vi-C3 Listening tests

We recruited 20 participants on Amazon's Mechanical Turk to evaluate the sounds modified by both methods.
TABLE IV: Pairwise rescoring for Greatest Hits (EBF). Rows are the edited attribute; columns report the rescoring score for each attribute. Bold marks the highest score per column.

| Edited attribute | Brightness(\(\uparrow\)) | Rate(\(\uparrow\)) | Impact Type(\(\uparrow\)) |
|---|---|---|---|
| Brightness | **0.82±0.06** | 0.06±0.04 | 0.40±0.07 |
| Rate | 0.40±0.09 | **0.89±0.06** | 0.38±0.09 |
| Impact Type | 0.35±0.07 | 0.19±0.03 | **0.80±0.05** |

TABLE V: Pairwise rescoring for Greatest Hits (SeFa). Bold marks the highest score per column; "*" marks scores with no significant difference (\(p>0.05\)).

| | Brightness(\(\uparrow\)) | Rate(\(\uparrow\)) | Impact Type(\(\uparrow\)) |
|---|---|---|---|
| Dimension 0 | 0.10±0.07 | 0.07±0.06 | 0.18±0.13 |
| Dimension 1 | 0.30±0.11 | **0.45±0.12** | 0.28±0.16 |
| Dimension 2 | 0.31±0.11 | 0.09±0.06 | **0.42±0.15**\* |
| Dimension 3 | **0.49±0.12** | 0.09±0.06 | 0.30±0.16 |
| Dimension 4 | 0.12±0.08 | 0.14±0.08 | 0.17±0.12 |
| Dimension 5 | 0.31±0.11 | 0.10±0.06 | 0.30±0.14 |
| Dimension 6 | 0.32±0.11 | 0.14±0.08 | 0.19±0.13 |
| Dimension 7 | 0.14±0.08 | 0.10±0.06 | **0.38±0.16**\* |
| Dimension 8 | 0.18±0.09 | 0.09±0.07 | 0.28±0.16 |
| Dimension 9 | 0.34±0.10 | 0.11±0.07 | 0.20±0.13 |

TABLE VI: Pairwise rescoring for Water (EBF and SeFa).

For the Greatest Hits dataset, we randomly sampled 20 sounds from the StyleGAN2's latent space and modified them using vectors derived with our method for _Brightness_, _Impact Type_ and _Rate_. For SeFa, we used the vectors with the highest rescoring accuracy from Table V to manipulate the samples. The participants were presented with the unmodified original reference sound as well as the manipulated samples, and were asked to evaluate whether the two samples differed on the 3 attributes. For the Water dataset, we randomly sampled the latent space 10 times and modified the samples using vectors for _Fill-Level_. As the _Fill-Level_ for Water varies continuously, we wanted to evaluate whether manipulating the sound samples sequentially and linearly using both methods preserves the interim _Fill-Level_s (such as when the bucket is empty, a quarter or half full, etc.). To do this, we use the rank-ordering interfaces outlined in [77] to measure the perceptual linearity of linearly manipulating the sample
#### Iv-B4 Interactive editing with semantic guidance To qualitatively demonstrate the simplicity and effectiveness of our method, we developed two interfaces - (1) to query in real-time the latent space of a StyleGAN2 using synthetic sounds shown in Figure 3(a), and (2) to interactively and in real-time semantically control generation of novel textures shown in Figure 3(b). A video demonstration of our interfaces can be viewed on our supplementary webpage. semantic attributes of \(\mathbf{sI}\) as well as its original structure (position or location of the impact events along the time axis). The method to transfer semantic attribute is outlined in Algorithm 3. Results are shown in Figure 6 and our webpage. ## VI Conclusion In this paper, we propose an audio example-based method to perceptually and deterministically guide generation of audio textures. By using a synthesizer to create a few examples, we are able to develop attribute guidance vectors in the latent space of a StyleGAN2 to controllably generate both discrete impact sounds as well as continuously varying water filling audio textures. We show the effectiveness of our method in providing linearly varying controls for texture generation using both objective metrics as well as perceptual listening tests. Furthermore, we demonstrate an application of our method to other signal processing tasks, namely semantic attribute transfer. Although our method has been more effective than algorithms such as SeFa at modifying the user-defined semantic attributes on a texture, some manual curation of synthetic samples is needed to find the relevant guidance vectors. On the other hand, algorithms such as SeFa are automatic and can be applied to any pre-trained GAN without any manual intervention. Thus in our future work we will explore the potential of combining SeFa's ability to automatically discover vectors for attribute manipulation with our method for improving the accuracy of editing the semantic attributes.
2304.06028
RECLIP: Resource-efficient CLIP by Training with Small Images
We present RECLIP (Resource-efficient CLIP), a simple method that minimizes computational resource footprint for CLIP (Contrastive Language Image Pretraining). Inspired by the notion of coarse-to-fine in computer vision, we leverage small images to learn from large-scale language supervision efficiently, and finetune the model with high-resolution data in the end. Since the complexity of the vision transformer heavily depends on input image size, our approach significantly reduces the training resource requirements both in theory and in practice. Using the same batch size and training epoch, RECLIP achieves highly competitive zero-shot classification and image-text retrieval accuracy with 6 to 8x less computational resources and 7 to 9x fewer FLOPs than the baseline. Compared to the state-of-the-art contrastive learning methods, RECLIP demonstrates 5 to 59x training resource savings while maintaining highly competitive zero-shot classification and retrieval performance. Finally, RECLIP matches the state of the art in transfer learning to open-vocabulary detection tasks, achieving 32 APr on LVIS. We hope this work will pave the path for the broader research community to explore language supervised pretraining in resource-friendly settings.
Runze Li, Dahun Kim, Bir Bhanu, Weicheng Kuo
2023-04-12T17:59:58Z
http://arxiv.org/abs/2304.06028v2
# RECLIP: Resource-efficient CLIP by Training with Small Images ###### Abstract We present RECLIP (Resource-efficient CLIP), a simple method that minimizes computational resource footprint for CLIP (Contrastive Language Image Pretraining). Inspired by the notion of coarse-to-fine in computer vision, we leverage small images to learn from large-scale language supervision efficiently, and finetune the model with high-resolution data in the end. Since the complexity of the vision transformer heavily depends on input image size, our approach significantly reduces the training resource requirements both in theory and in practice. Using the same batch size and training epoch, RECLIP achieves highly competitive zero-shot classification and image-text retrieval accuracy with 6 to 8\(\times\) less computational resources and 7 to 9\(\times\) fewer FLOPs than the baseline. Compared to the state-of-the-art contrastive learning methods, RECLIP demonstrates 5 to 59\(\times\) training resource savings while maintaining highly competitive zero-shot classification and retrieval performance. We hope this work will pave the path for the broader research community to explore language supervised pretraining in more resource-friendly settings. ## 1 Introduction Representation learning is a foundational problem in computer vision and machine intelligence. Effective image representation can benefit a myriad of downstream tasks, including but not limited to image classification, object detection, semantic segmentation, and 3D scene understanding. In the past decade, the community has witnessed the rise of supervised learning (Deng et al., 2009; Sun et al., 2017), then self-supervised learning (Chen et al., 2020; He et al., 2020; Bao et al., 2022), and most recently language-supervised learning (Radford et al., 2021; Jia et al., 2021; Yu et al., 2022). Language-supervised representation gains much traction for its exceptional versatility. 
It exhibits outstanding performance in zero-shot classification (Radford et al., 2021), linear probing (Radford et al., 2021; Yu et al., 2022), few-shot learning (Zhou et al., 2022), and full finetuning. CLIP (Radford et al., 2021) has been a strong finetuning model for downstream tasks, and finds great applications in text-guided image generation (Ramesh et al., 2021). Much like the role of supervised pretraining (Deng et al., 2009) before, language-supervised pretraining has emerged as a simple yet powerful methodology for representation learning today.

Traditional supervised learning uses a predetermined set of labels, and is effective across a wide range of data and computational resources. In contrast, natural language offers richer learning signals such as object categories or instances, named entities, descriptions, actions, and their relations at multiple levels of granularity. Unfortunately, this rich supervision also leads to a higher level of noise in the data, where many image-text pairs have only loose connections. To address this noise, data and computational scaling have proven to be highly effective and necessary. For example, training CLIP models requires \(\sim\)3k V100-GPU-days, and likewise CoCa requires \(\sim\)23k TPU-v4-core-days. Apart from the lengthy training time, the large-batch requirement of contrastive learning recipes also demands a substantial amount of device memory at all times. These factors limit the research of language-supervised learning to institutions with high-end infrastructure, and hinder exploration by the broader community. Thus, improving the efficiency of contrastive training has drawn substantial research interest. For example, Zhai et al. (2022) precomputes the image features by a pretrained classification model to reduce the training cost, and Zhai et al. (2023) utilizes a sigmoid loss to avoid the all-gather operation and improve learning with a smaller batch size. Moreover, Yao et al.
(2021) leverages masked images to speed up contrastive learning. The community has also explored smaller batch sizes (Dong et al., 2022) or curated academic datasets (Li et al., 2022; Lei et al., 2022) for contrastive learning. However, it is not clear how well the findings in smaller batch and data size settings generalize to larger batch and data sizes. We present RECLIP (Resource-efficient CLIP), a simple method designed to make CLIP more affordable and reproducible for the community (see Fig. 2). Consider images 1-3 in the top left of Fig. 2. Humans can effortlessly match the images with the corresponding texts below them, e.g. "a boy is playing a soccer ball in grass" matching image 1. Although the images are only of size \(64\times 64\), they contain an adequate amount of visual information for pairing with texts. Our main insight is to train on small images during the main training phase, and finetune the model with high-resolution images for a short schedule in the end. Intuitively speaking, our approach re-introduces the idea of "coarse-to-fine" from classical computer vision to contrastive learning, whereby pretraining incorporates high-level information from small images and finetuning enables the model to refocus its attention on the important details. There is no need for multi-view supervisions (Li et al., 2022; Yao et al., 2021), feature distillation (Lei et al., 2022), other contrastive losses (Zhai et al., 2023), pretrained classifiers (Zhai et al., 2022), or image masking (Li et al., 2022). Surprisingly, RECLIP achieves highly competitive zero-shot classification and retrieval performance using \(64\times 64\) images, which significantly reduces computational resource usage. We attribute this to the complexity of the image tower being quartic with respect to the image size (see Eqn. 4). In addition, RECLIP demonstrates the efficiency and effectiveness of using short sequence lengths for image-language representation learning.
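The quartic dependence can be illustrated with a back-of-the-envelope token count (our simplification, counting only the quadratic self-attention term with a patch size of 16, not the paper's full Eqn. 4):

```python
def num_tokens(image_size: int, patch_size: int = 16) -> int:
    # A ViT splits the image into (image_size / patch_size)^2 patch tokens.
    return (image_size // patch_size) ** 2

def relative_attention_cost(size_a: int, size_b: int, patch: int = 16) -> float:
    # Self-attention cost grows as n^2 in token count n, hence ~size^4 overall.
    return (num_tokens(size_a, patch) / num_tokens(size_b, patch)) ** 2

# 64x64 inputs give 16 tokens vs. 196 tokens at 224x224, so the attention
# term alone is (196/16)^2 ≈ 150x cheaper during RECLIP's main phase.
```

The MLP and projection terms scale only quadratically with image size, so the realized end-to-end savings are smaller than this attention-only ratio, consistent with the 6 to 8\(\times\) resource reductions reported above.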
Existing image-text pretraining methods typically use long sequence lengths, e.g. 441 (Radford et al., 2021) or 784 (Yu et al., 2022), to achieve strong downstream zero-shot transfer. Long-sequence image encoding has been validated to benefit image classification (Beyer et al., 2022) and object detection (Chen et al., 2022) with vision transformers. Hu et al. (2022) find the sequence length is a key factor for masked image representation learning. Different from these methods that advocate for long sequence lengths, RECLIP demonstrates that using only **16** tokens for the image encoding is sufficient for the main training phase, and can achieve highly competitive zero-shot transfer capabilities via a short high-resolution finetuning schedule. Interestingly, our image sequence length is 4 to 5\(\times\) shorter than the _text_ sequence lengths of popular recipes, e.g. 76 (Radford et al., 2021) or 64 (Jia et al., 2021; Yu et al., 2022).

Figure 1: Zero-shot accuracy _vs._ compute resource in cores\(\times\)hours trade-off. RECLIP-X: RECLIP training for 300k and 600k steps with image size \(X\) where \(X=64,80,112\). RECLIP-64-F20k: RECLIP-64 finetuned for 20k steps. Our CLIP repro.: our reproduction of CLIP (Radford et al., 2021). Zero-shot image-text retrieval results are averaged from image-to-text and text-to-image Recall@1 on two benchmark datasets, Flickr30K (Plummer et al., 2015) and MSCOCO (Chen et al., 2015). RECLIP consumes significantly less compute resource and is more accurate on zero-shot image-text retrieval, with highly competitive classification results on the ImageNet-1K validation set.

In Fig. 1, we present zero-shot classification and retrieval performance, and resource costs in cores\(\times\)hours, of training RECLIP models and the baseline model for short and long schedules.
Experiments show that, using the same batch size and training steps, RECLIP reduces the computation resources by 6 to 8\(\times\) and largely preserves the classification and retrieval accuracy. When comparing to state-of-the-art (SOTA) methods, RECLIP significantly saves resource usage by 5 to 59\(\times\) and shows highly competitive zero-shot classification and retrieval accuracy. In summary, our contributions are: * We present a new language image pretraining methodology, Resource-efficient CLIP (RECLIP) to minimize computational resource requirements. * We leverage small images for the main contrastive learning phase to enable the model to be trained with language supervisions fast and then finetune the model on high-resolution data with a short schedule in the end. * RECLIP significantly saves compute resource, reduces FLOPs and achieves highly competitive performance on both zero-shot classification and image-text retrieval benchmarks. We believe RECLIP could enable the broader research community to explore and understand language supervised pretraining in a more resource friendly setting. Figure 2: Top: Resource-efficient CLIP (RECLIP) training pipeline. Bottom: existing CLIP training methods. RECLIP leverages small images for the main training phase which significantly reduces computational resource requirements through much shorter image sequence length. ## 2 Related Work ### Learning with Low-Resolution Images Deep learning techniques have been utilized on a wide-variety of computer vision tasks, e.g. visual recognition (He et al., 2016; Dosovitskiy et al., 2021), video analysis (Tran et al., 2018), images generations (Ramesh et al., 2021), etc. Most of existing work follow the standard training and testing paradigms to exploit very deep models by using images with the fixed resolution, e.g. \(224\times 224\). This setting has been one of fundamental standards for various computer vision tasks. 
However, an increasing number of studies have investigated training deep learning models with low-resolution data. Touvron et al. (2019) observed a significant discrepancy in image sizes caused by augmentation methods between training and testing, and further validated the effectiveness of using lower-resolution images for training than for testing. Driven by the needs of specific tasks, e.g. face recognition and surveillance image analysis, Singh et al. (2019, 2022) and Huang et al. (2022) study learning with low-resolution images and generally focus on using high-resolution images as auxiliary data to help train models on low-resolution data, which makes it difficult to generalize to broader visual recognition tasks. For video understanding, Wu et al. (2020) propose to use variable mini-batch shapes with different spatial-temporal resolutions for training deep video models and obtain optimal performance and time trade-offs. With recent advances in vision transformers (Dosovitskiy et al., 2021; He et al., 2022), Guo et al. (2022) speed up image pretraining by using masked image modelling with low-resolution data. Liu et al. (2022) introduce a log-spaced continuous position bias for pretraining vision models on smaller images and transferring to high-resolution localization tasks.

### Language-supervised Learning

Due to the natural co-occurrence of image and language data on the web, language-supervised learning has become a highly effective and scalable representation learning methodology. Researchers have explored a variety of paired image-text data such as image tags (Chen & Gupta, 2015; Divvala et al., 2014; Joulin et al., 2016), captions (Desai & Johnson, 2021; Sariyildiz et al., 2020; Wang et al., 2009; Sharma et al., 2018), alt-texts (Jia et al., 2021; Schuhmann et al., 2021), image search queries (Radford et al., 2021), page titles (Chen et al., 2022b), or a combination of these sources (Chen et al., 2022b).
From a modeling perspective, contrastive learning is particularly suitable for recognition and retrieval tasks because of its simplicity and versatility. However, its high computational resource requirements have limited research by the broader community. To fully leverage the capabilities of vision and language pretraining, large batch sizes (e.g. 16k (Jia et al., 2021), 32k (Radford et al., 2021; Yao et al., 2021), or 64k (Yu et al., 2022)) and web image-text data have been adopted widely. This requires a large amount of computational resources which many academic institutions and industry labs cannot afford. To address this limitation, Zhai et al. (2022) propose to precompute the image features with a frozen classifier backbone, while Zhai et al. (2023) propose a sigmoid loss which better supports small-batch training. In addition, masked image learning (Yao et al., 2021), multi-view data augmentations (Li et al., 2022; Yao et al., 2021), knowledge distillation (Lei et al., 2022) and masked self-distillation (Dong et al., 2022) have been proposed. Since many of these methods are trained and evaluated at smaller scales and on smaller data, it is unclear how well they may scale up to larger batches and data. For example, Weers et al. (2023) show that the advantage of contrastive learning approaches at smaller scales may not always hold at larger scales. In contrast, we propose a simple and novel recipe for language image pretraining, which (1) significantly reduces computational resource requirements and (2) works well on a large-scale web dataset (Chen et al., 2022b) with minimal change to the established CLIP recipe (Radford et al., 2021).

## 3 Method

### Preliminaries

**Contrastive Language Image Pretraining.** Following existing works (Radford et al., 2021; Yu et al., 2022), we utilize a transformer-based contrastive model which consists of an image encoder and a text encoder.
The image and text encoders are trained to output image-level and sentence-level representations respectively. The image embeddings \(\{p\}\) and text embeddings \(\{q\}\) are obtained by global average pooling at the last layers of the image and text encoders. The cosine similarities of the embeddings in a batch \(B\), scaled by a learnable temperature \(\tau\), are the input to the InfoNCE loss (Oord et al., 2018; Radford et al., 2021). The image and text contrastive loss is obtained by \(L_{con}=(L_{\text{I2T}}+L_{\text{T2I}})/2\), with: \[L_{\text{I2T}}=-\frac{1}{B}\sum_{i=1}^{B}\log(\frac{\text{exp}(p_{i}q_{i}/\tau)} {\sum_{j=1}^{B}\text{exp}(p_{i}q_{j}/\tau)}). \tag{1}\] \[L_{\text{T2I}}=-\frac{1}{B}\sum_{i=1}^{B}\log(\frac{\text{exp}(q_{i}p_{i}/\tau )}{\sum_{j=1}^{B}\text{exp}(q_{i}p_{j}/\tau)}). \tag{2}\] where \(i,j\) are indexes within the batch. This loss is optimized to learn both the image and language representations in the dual-encoder model. ### Resource-efficient CLIP At a high level, our method utilizes small images to reduce computation and leverages a brief finetuning stage at the end of training to adapt for high-resolution inference. Intuitively, the use of smaller images presents a trade-off between how much detail we encode per example and how many samples we process per unit of computation resource. Fig. 2 shows the RECLIP training pipeline at the top. There are two phases: low-resolution main training, and high-resolution finetuning. In the first phase, we leverage small images, which contain sufficient visual concepts, with paired texts as the input to the image and text encoders. By using an image size of \(64\) and a text length of 16, RECLIP processes the training data significantly faster than existing methods. In the second phase, we finetune the model for a short cycle on high-resolution data to provide valuable image details, which largely enhances the representation quality of the model.
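As a concrete illustration, the symmetric InfoNCE loss in Eqs. (1) and (2) can be sketched in a few lines of NumPy. This is a minimal sketch with our own function name: in actual training the embeddings come from the learned encoders and \(\tau\) is a learnable parameter.

```python
import numpy as np

def clip_contrastive_loss(p, q, tau=0.07):
    """Symmetric InfoNCE loss over image embeddings p and text
    embeddings q, each of shape (B, d), as in Eqs. (1)-(2)."""
    # L2-normalize so the dot products are cosine similarities
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    logits = p @ q.T / tau  # (B, B) similarity matrix

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    # image-to-text: softmax over rows; text-to-image: over columns
    l_i2t = -np.mean(np.diag(log_softmax(logits, axis=1)))
    l_t2i = -np.mean(np.diag(log_softmax(logits, axis=0)))
    return (l_i2t + l_t2i) / 2
```

Perfectly aligned pairs drive the loss toward zero, while mismatched pairs are penalized, which is what pulls matching image and text embeddings together during training.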
Below we delve deeper into specific aspects of the RECLIP design.

**Structure preservation by learning from small images.** In Fig. 2, we observe that small images preserve visual structure well and contain sufficient concepts. For instance, humans can easily tell the object, "a dog", in the third image and associate the image with the text "a brown dog waits for...", and this is a fundamental principle for RECLIP to leverage small images for the main language-supervised pretraining. Because down-sampling is a structure-preserving operation, i.e. the global appearance remains similar, we are able to reduce the token length aggressively without compromising the performance of the model. This is different from other techniques to reduce the sequence length (e.g. random masking) where the global appearance may change significantly with reduced sequence lengths. Additional visualization presents a comparison between various image resolutions and sheds light on how small images effectively preserve visual appearance (see Fig. 3).

**Training complexity with small images.** The computation cost of contrastive learning mostly depends on the cost of processing images (Radford et al., 2021; Li et al., 2022; Yu et al., 2022), partly because the image encoder is typically heavier than the text encoder, and partly because the image token length tends to be greater than that of text tokens. Below we provide a theoretical analysis to understand the efficiency of using small images. Let the number of tokens from the image encoder be: \[N=hw/p^{2} \tag{3}\] where \(h\) and \(w\) are the height and width of the image, and \(p\) is the patch size. We replace \(h\) by \(H/r\) and \(w\) by \(W/r\), where \(H\) and \(W\) are the original image height and width, and \(r\) is the down-sampling factor.
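To make Eq. (3) concrete, the following sketch (the function names are ours, not from the paper) computes the ViT token count and the resulting quadratic self-attention cost ratio for RECLIP's settings:

```python
def num_tokens(h, w, p=16):
    """Number of ViT patch tokens for an h x w image, Eq. (3)."""
    return (h // p) * (w // p)

def attention_cost_ratio(full_size, small_size, p=16):
    """Self-attention cost of the small-image encoder relative to the
    full-resolution one, using the O(N^2) scaling in token count N."""
    n_full = num_tokens(full_size, full_size, p)
    n_small = num_tokens(small_size, small_size, p)
    return (n_small / n_full) ** 2

print(num_tokens(224, 224))           # 196 tokens at full resolution
print(num_tokens(64, 64))             # 16 tokens for RECLIP-64
print(attention_cost_ratio(224, 64))  # (16/196)^2 = 1/r^4 with r = 3.5
```

Going from 224 to 64 pixels (r = 3.5) shrinks the token sequence from 196 to 16, so the quadratic attention cost drops by a factor of about 150.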
The computation complexity \(C\) of the image encoder over a batch is given by1: Footnote 1: In practice, the fully connected layers in a transformer are also important to the computation cost despite their complexity of \(O(BN)\). We focus on the self-attention operation in this analysis. \[C=O(BN^{2})=O(\frac{BH^{2}W^{2}}{p^{4}r^{4}}) \tag{4}\] where \(B\) is the batch size. When \(B,H,W,p\) are held constant, we have: \[C=O(\frac{1}{r^{4}}) \tag{5}\] This shows that reducing the image size is very effective at reducing computation complexity, which falls with the inverse fourth power of \(r\). Since the image encoder is the computation bottleneck in existing CLIP recipes (Radford et al., 2021; Li et al., 2022b; Zhai et al., 2022; Yu et al., 2022), RECLIP reduces the image sequence length to **16** by using an image size of \(64\), which makes our image token length the same as our own text token length, and much shorter than those of the aforementioned methods.

**Constant batch size.** Batch size is a critical factor in contrastive learning (Radford et al., 2021; Pham et al., 2021; Li et al., 2022b; Chen et al., 2022a) and larger batches have consistently yielded improvements. In Equation 4, the complexity changes linearly with the batch size \(B\). Observing that a reduced batch size tends to hurt representation quality, we keep the batch size constant and save both computation and memory by reducing the image size only.

**High-resolution finetuning.** We perform high-resolution finetuning after the main low-resolution training. Intuitively speaking, the model has acquired a high-level understanding of the images and texts through the main training phase. We improve its representation further by providing more detailed visual information through a short high-resolution finetuning process. Care is taken to initialize the positional embeddings from low-res pretraining to high-res finetuning. We up-sample the positional embedding weights from the low dimension (e.g.
4x4 for \(h=w=64\)) to the dimension of the high-resolution positional embeddings (e.g. 14x14 for \(h=w=224\)) for a given patch size \(p=16\). Compared to up-sampling the low-res positional embeddings without increasing the number of weights, we found this weight up-sampling beneficial because the positional embeddings have higher capacity to adapt to more detailed spatial representations.

**Network architecture.** We use the ViT-Large backbone as the image encoder by default unless noted otherwise. The patch size is fixed at \(16\) following common practice. Although we focus on the ViT architecture in this study, RECLIP involves only changing the input size, and can potentially support other network architectures as well (Vaswani et al., 2017; Dosovitskiy et al., 2021; He et al., 2016; Liu et al., 2021; Tolstikhin et al., 2021). Our text encoder follows the same transformer design as previous works (Radford et al., 2021; Yu et al., 2022).

**Implementation details.** We use a starting learning rate of 0.001, and train for 250k and 550k steps with linear LR decay using an Adafactor optimizer. We set the weight decay to 0.01 and the batch size to 16384. A short LR warmup of 2500 steps is used. Our high-resolution finetuning schedule starts with a learning rate of \(1\times 10^{-4}\) with 5000 steps of LR warmup, and decays linearly over a total schedule of 20k or 50k iterations. We use an image size of 224 or 448 for finetuning. We use the English subset of the WebLI dataset (Chen et al., 2022b) for training. Our training runs on TPU-v3 infrastructure. For zero-shot image classification, we use the same text prompts as Radford et al. (2021).
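The positional-embedding up-sampling described above can be sketched as follows. This is a minimal NumPy sketch assuming bilinear, align-corners interpolation of a square embedding grid; the exact interpolation scheme is an implementation detail not specified in the text, and the function name is ours.

```python
import numpy as np

def resize_pos_embed(pos, new_grid):
    """Bilinearly up-sample a (g, g, d) grid of positional embeddings
    to (new_grid, new_grid, d), e.g. 4x4 (64px) -> 14x14 (224px)."""
    g = pos.shape[0]
    # target sample positions in source-grid coordinates (align corners)
    src = np.linspace(0.0, g - 1, new_grid)
    i0 = np.floor(src).astype(int)
    i1 = np.minimum(i0 + 1, g - 1)
    t = src - i0
    # interpolate along rows, then along columns
    rows = pos[i0] * (1 - t)[:, None, None] + pos[i1] * t[:, None, None]
    out = (rows[:, i0] * (1 - t)[None, :, None]
           + rows[:, i1] * t[None, :, None])
    return out
```

Because the interpolated grid is then trained further, the new weights act as a sensible initialization rather than a final representation, which matches the intuition of finetuning after up-sampling.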
## 4 Experiments

### Main Results

**Zero-shot image-text retrieval and image classification.** Following existing works (Radford et al., 2021; Li et al., 2022b; Yu et al., 2022), we evaluate RECLIP on zero-shot image and text retrieval on the Flickr30K (Plummer et al., 2015) and MSCOCO (Chen et al., 2015) test sets, and on zero-shot image classification on the ImageNet (Deng et al., 2009), ImageNet-A (Hendrycks et al., 2021b), ImageNet-R (Hendrycks et al., 2021a), ImageNet-V2 (Recht et al., 2019) and ImageNet-Sketch (Wang et al., 2019) datasets. We feed each image and text into the corresponding encoder to obtain embeddings for all image and text pairs. We then calculate the cosine similarity scores for retrieval, and use the aligned image and text embeddings to perform zero-shot image classification by matching images with label names without fine-tuning. Table 1 presents the results of RECLIP on this benchmark, where the baseline is our own reproduced version of CLIP. Our baseline model trains on the WebLI dataset with images of \(224\times 224\) for 300k and 600k steps. The original CLIP (Radford et al., 2021) model trains on their own dataset with an image size of \(336\times 336\), and is marked in gray. RECLIP uses small images for the main training phase and finetunes the model with images of \(224\times 224\) for 20k or 50k steps. For long-schedule training of 600k steps, RECLIP-64 significantly reduces compute use by \(\sim\) **6** times from 52.8K to **9.2K** in cores\(\times\)hours, which saves \(\sim\) **80\(\%\)** of compute resources, and it outperforms the baseline model by +3.9 on Flickr and MSCOCO retrieval. RECLIP-64-F20K, which finetunes the model for only 20k steps with high-resolution images, further reduces compute use by \(\sim\) **8\(\times\)** to **6.5K** and improves retrieval performance by +1.8.
On zero-shot image classification, RECLIP-64 achieves 75.4 and RECLIP-64-F20K achieves 74.9 top-1 accuracy, which is very competitive with the baseline method. RECLIP-64 reduces the token length for the image encoding from 196 to **16** during the main training phase, which is a key factor in the resource savings. Overall, RECLIP-64 shows attractive trade-offs between resource use and zero-shot retrieval and image classification performance. We also train RECLIP with an image size of \(80\times 80\). Compared to the baseline method, which consumes 52.8K in cores\(\times\)hours, our RECLIP-80 remarkably reduces resource usage by \(\sim\) 5 times to **11.2K**. RECLIP-80 improves retrieval results by +5.0 on the Flickr30K and MSCOCO test sets, and achieves highly competitive zero-shot image classification performance of 75.8. Specifically, taking INet-A as an example, RECLIP-80 outperforms the baseline method for both 300k and 600k training steps. For the short training schedule with 300k steps, RECLIP-80 requires only **7.5K** in cores\(\times\)hours, which is \(\sim\) **4\(\times\)** less than the baseline model. These savings come from the use of a very short image sequence length, i.e., 16, which is very different from existing recipes (Radford et al., 2021; Li et al., 2022b; Yu et al., 2022; Zhai et al., 2022). We also observe that RECLIP-80 uses \(3\sim 34\times\) less compute resource. When comparing to CoCa (Yu et al., 2022), RECLIP-80 saves \(\sim\) 97\(\%\) of resource use and achieves highly competitive retrieval performance. The resource savings of RECLIP-80 can also be attributed to the largely-reduced sequence length, i.e., **25**, for the image encoding. RECLIP-80 achieves a highly competitive ImageNet top-1 accuracy of 76.3, which outperforms CLIP and is on par with ALIGN. Overall, RECLIP provides very affordable recipes for large-scale language and image pretraining.
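For reference, the zero-shot classification protocol used in these evaluations, matching an image embedding against text embeddings of prompted label names, can be sketched as below. This is a minimal sketch with our own function name; real pipelines typically average the text embeddings over many prompt templates per class.

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs):
    """Return the index of the class whose (prompted) text embedding has
    the highest cosine similarity with the image embedding.
    image_emb: (d,); class_text_embs: (C, d)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1,
                                           keepdims=True)
    scores = txt @ img  # (C,) cosine similarities
    return int(np.argmax(scores))
```

No classifier head is trained: the aligned embedding space alone links images to label names, which is why retrieval and classification can share the same encoders.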
We note that some leading methods (Chen et al., 2022b; Pham et al., 2021; Yu et al., 2022), marked in gray, demonstrate substantially better zero-shot classification because of larger image encoder capacity and the use of the JFT (Sun et al., 2017) dataset. JFT is a human-annotated classification dataset which is cleaner than most web-crawled image-text datasets (Radford et al., 2021; Schuhmann et al., 2021; Jia et al., 2021) and most advantageous for zero-shot classification, so we list the JFT-trained entries there for reference only.

### Ablations

In this section, we ablate the design of RECLIP training and evaluate zero-shot retrieval and image classification accuracy.

**The importance of high-resolution finetuning.** Table 4 shows the importance of finetuning RECLIP with high-resolution data after the main training phase. We compare the retrieval and classification accuracy of the model trained with and without high-resolution finetuning at an image size of \(224\) for 50k steps. We observe that high-resolution finetuning significantly improves the performance of zero-shot retrieval and classification. In particular, when training RECLIP with the smallest images, e.g. \(64\times 64\), high-resolution finetuning offers the most notable benefits. This is also aligned with the results in Table 3, where RECLIP models trained with small images, e.g. \(64\times 64\) or \(80\times 80\), and finetuned at \(448\times 448\) for a short cycle can achieve performance comparable with SOTA models.
Table 4: The importance of RECLIP high-resolution finetuning. We found that high-resolution finetuning significantly improves zero-shot transfer performance. RECLIP-X: RECLIP trained with image size \(X\) where \(X=64,80,112\).

Before high-resolution finetuning:

| Model | Training Steps | INet Top-1 | Flickr30K I2T | Flickr30K T2I | MSCOCO I2T | MSCOCO T2I |
|---|---|---|---|---|---|---|
| RECLIP-112 | 300k | 69.0 | 83.2 | 67.9 | 58.6 | 40.0 |
| RECLIP-80 | 300k | 66.3 | 80.8 | 65.4 | 54.6 | 37.4 |
| RECLIP-64 | 300k | 62.8 | 79.6 | 63.6 | 51.4 | 34.5 |
| RECLIP-112 | 600k | 70.7 | 87.4 | 71.9 | 59.0 | 40.9 |
| RECLIP-80 | 600k | 67.7 | 82.8 | 68.1 | 55.8 | 39.0 |
| RECLIP-64 | 600k | 65.5 | 80.9 | 66.1 | 54.3 | 37.1 |

After high-resolution finetuning:

| Model | Training Steps | INet Top-1 | Flickr30K I2T | Flickr30K T2I | MSCOCO I2T | MSCOCO T2I |
|---|---|---|---|---|---|---|
| RECLIP-112 | 300k | 74.2 | 90.0 | 76.6 | 63.1 | 45.0 |
| RECLIP-80 | 300k | 74.3 | 91.0 | 77.1 | 62.8 | 45.7 |
| RECLIP-64 | 300k | 73.3 | 89.4 | 77.0 | 62.2 | 45.2 |
| RECLIP-112 | 600k | 75.8 | 90.6 | 77.6 | 63.6 | 46.5 |
| RECLIP-80 | 600k | 75.8 | 91.3 | 78.2 | 64.6 | 47.2 |
| RECLIP-64 | 600k | 75.4 | 91.0 | 78.1 | 64.2 | 46.9 |
Table 3: Comparisons of zero-shot image-text retrieval results and ImageNet classification top-1 accuracy on the Flickr30K (1K test set), MSCOCO (5K test set) and ImageNet datasets. Models that use the fully-supervised dataset (Sun et al., 2017) and are much larger are marked in gray. †: We refer to (Li et al., 2022b) to convert GPU cost to TPU usage for CLIP (Radford et al., 2021) and FILIP (Yao et al., 2021). Cores\(\times\)hours results are reported on TPU-v3 infrastructure.

| Method | Image Encoder | Cores\(\times\)Hours | INet Top-1 | Flickr I2T R@1 | Flickr I2T R@5 | Flickr T2I R@1 | Flickr T2I R@5 | COCO I2T R@1 | COCO I2T R@5 | COCO T2I R@1 | COCO T2I R@5 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PALI (Chen et al., 2022b) | 3.9B | 598.7K | 85.4 | - | - | - | - | - | - | - | - |
| BASIC (Pham et al., 2021) | 2.4B | 288.1K | 85.7 | - | - | - | - | - | - | - | - |
| CoCa (Yu et al., 2022) | 1B | 962.1K | 86.3 | 92.5 | 99.5 | 80.4 | 95.7 | 66.3 | 86.2 | 51.2 | 74.2 |
| CLIP (Radford et al., 2021) | 302M | 120.0K† | 76.2 | 88.0 | 98.7 | 68.7 | 90.6 | 58.4 | 81.5 | 37.8 | 62.4 |
| ALIGN (Jia et al., 2021) | 408M | 355.0K | 76.4 | 88.6 | 98.7 | 75.7 | 93.8 | 58.6 | 83.0 | 45.6 | 69.8 |
| FILIP (Yao et al., 2021) | 302M | 180.0K† | 78.3 | 89.8 | 99.2 | 75.0 | 93.4 | 61.3 | 84.3 | 45.9 | 70.6 |
| FLIP (Li et al., 2022b) | 303M | 81.9K | 75.8 | 91.7 | - | 78.2 | - | 63.8 | - | 47.3 | - |
| **RECLIP-80 (ours)** | 303M | 28.7K | 76.3 | 91.4 | 99.1 | 79.2 | 94.7 | 64.9 | 85.2 | 48.2 | 72.6 |
| **RECLIP-64-F20K (ours)** | 303M | 16.4K | 75.3 | 92.5 | 99.1 | 78.7 | 94.9 | 64.5 | 85.2 | 47.3 | 71.9 |
**Text length for RECLIP main training.** Table 5 studies the text length for RECLIP training. We use text lengths of 64 and 16 to train RECLIP with an image size of \(80\). Somewhat surprisingly, we observe that using a short text length, i.e. 16, during the main training phase clearly reduces resource use while achieving competitive zero-shot retrieval and image classification performance. This training efficiency gain is possible because we use much shorter image sequence lengths than existing recipes (Radford et al., 2021; Yu et al., 2022).

**Small batch size for the RECLIP main training phase.** RECLIP is designed with the principle of using a constant batch size but varying image resolutions during the main training phase. Table 6 ablates the effect of the batch size during the main training phase on zero-shot retrieval and image classification accuracy. We first train the model for 250k steps using a batch size of 4k or 16k and image size \(112\); then we finetune it for 50k steps using a batch size of 16k and image size \(224\). Table 6 shows that using the smaller batch size (4k) saves compute resource by \(69\%\), but zero-shot retrieval and classification performance drops significantly even with the same high-resolution finetuning phase. Therefore, we conclude that using the same large batch size is important for language image pretraining to ensure competitive zero-shot transfer performance.

**Increasing the batch size with small images for RECLIP.** In Table 7, we ablate RECLIP by varying both the batch size and image size during the main training phase. The multi-grid training paradigm is as follows: (1) we equally divide the training process into 3 stages; (2) we train the model for 25k, 50k and 100k steps using batch sizes of 64k, 32k and 16k, and image sizes of \(112\), \(160\) and \(224\) in each stage.
The idea is to increase the batch size while using low-resolution data, and decrease the batch size with high-resolution data. The multi-grid-free baseline is trained for 300k steps using a constant batch size of 16k and image size \(112\), and finetuned with image size \(224\). We observe that RECLIP without "MG" is not only simpler, but also saves computational resource by \(30\%\). In addition, it achieves better zero-shot retrieval performance on Flickr30K and MSCOCO and very similar ImageNet performance.

Table 5: The effect of the text length in RECLIP main training. We found that using a short text sequence can further save compute resource and achieve promising zero-shot transfer performance. Default RECLIP settings are in dark gray. The best results are in **bold**.

| Text Length | Cores\(\times\)Hours | Flickr I2T | Flickr T2I | COCO I2T | COCO T2I | INet Top-1 |
|---|---|---|---|---|---|---|
| 64 | 15.5K | 91.2 | 78.0 | 64.3 | 46.7 | 75.6 |
| **16** | **11.2K** | **91.3** | **78.2** | **64.6** | **47.2** | **75.8** |

Table 6: The importance of RECLIP main training with a constant batch size. We found that using the same batch size (16k) for RECLIP main training and finetuning achieves better zero-shot transfer performance. Default RECLIP settings are in dark gray. The best results are in **bold**.

| Batch Size | Cores\(\times\)Hours | Flickr I2T | Flickr T2I | COCO I2T | COCO T2I | INet Top-1 |
|---|---|---|---|---|---|---|
| 4k | **4.2K** | 81.9 | 68.8 | 51.2 | 38.6 | 64.4 |
| 16k | 13.1K | **90.0** | **76.6** | **63.1** | **45.0** | **74.2** |

Table 7: The effect of the multigrid training strategy, where we increase the image size and decrease the batch size simultaneously. We found RECLIP is simple and effective. Default RECLIP settings are in dark gray. The best results are in **bold**.

| MG | Cores\(\times\)Hours | Flickr I2T | Flickr T2I | COCO I2T | COCO T2I | INet Top-1 |
|---|---|---|---|---|---|---|
| ✓ | 18.4K | 89.2 | 75.5 | 62.3 | **45.3** | **74.5** |
| ✗ | **13.1K** | **90.0** | **76.6** | **63.1** | 45.0 | 74.2 |

**Multi-stage RECLIP high-resolution finetuning.** In Table 8, we further study RECLIP with 1 and 2 high-resolution finetuning stages given a model trained with low-resolution data. We study the following variants:

* (\(112\to 224\to 448\)): we first train the model for 300k steps with image size \(112\), finetune it for 40k steps with image size \(224\), and finetune it for another 40k steps with image size \(448\).
* (\(112\to 448\)): we train the model for 300k steps with image size \(112\) and finetune it for 50k steps with image size \(448\). We set 50k steps to keep the computation cost comparable with the first variant.

We observe that (\(112\to 448\)) gives very competitive zero-shot retrieval and image classification accuracy. Therefore, we use only one high-resolution finetuning stage for simplicity.

### Visualization

**Visualization of small images.** In Fig. 3, we visualize images at various resolutions paired with their corresponding texts. We observe that small images generally preserve the high-level structure of the original images, and contain sufficient visual information for language supervision. For example, the martial arts, office meeting, concert, and gymnastics scenes are clearly recognizable down to \(64\times 64\) resolution. This supports the key insight of the RECLIP training design, which leverages small images for the main training phase to save computation.
**Visualization of image and text retrieval.** We present image and text retrieval results of RECLIP in Fig. 4. Despite highly resource-efficient training, RECLIP still produces accurate results on both image-to-text and text-to-image retrieval. For example, the concepts of football players, race cars, circular sculpture, police officer, musicians, and bulldozer are all correctly matched between images and texts.

## 5 Conclusions

We present RECLIP, a method for resource-efficient language image pretraining. We propose to leverage small images with paired texts for the main contrastive training phase and to finetune the model with high-resolution images for a short cycle at the end. The proposed training method has been validated on zero-shot image and text retrieval benchmarks and image classification datasets. In comparison to the baseline method, the RECLIP training recipe reduces computation by \(6\sim 8\times\) with improved zero-shot retrieval performance and competitive classification accuracy. Compared to state-of-the-art methods, RECLIP significantly saves \(\mathbf{79}\%\sim\mathbf{98}\%\) of resources in cores\(\times\)hours with highly competitive zero-shot classification and image-text retrieval performance. We hope RECLIP paves the path to making contrastive language image pretraining more resource-friendly and accessible to the broad research community.

### Broader Impact Statement

Language image pretraining plays an important role in many applications, e.g. image and text retrieval, text-to-image generation, open-vocabulary detection, etc. This work presents a language image pretraining method, RECLIP, on large-scale web datasets, and the proposed model has been evaluated on a series of zero-shot downstream tasks. The large image-text corpus may contain biased or harmful content which could be learnt by the model. Our model is for research use only and should be carefully studied for any production use cases.
The good news is that RECLIP significantly reduces resource use, thereby reducing the carbon footprint, and is very environment-friendly for the community to build upon in the long run.

Table 8: RECLIP with one-stage or multi-stage high-resolution finetuning. We found that one high-resolution finetuning stage is simple and sufficient. Default RECLIP settings are in dark gray. The best results are in **bold**.

| Stages | Cores\(\times\)Hours | Flickr30K I2T | Flickr30K T2I | MSCOCO I2T | MSCOCO T2I | INet Top-1 |
|---|---|---|---|---|---|---|
| \(112\to 224\to 448\) | 31.1K | 91.0 | 77.7 | 64.1 | **47.4** | **76.2** |
| \(112\to 448\) | **30.8K** | **90.7** | **78.0** | **64.3** | 47.0 | 76.1 |

Figure 3: Visualization of image-text pairs with images at various resolutions. Images are scaled with the same factor of 0.01 for both height and width. Small images contain sufficient visual information for language-supervised contrastive training.

Figure 4: Visualization of image and text retrieval results. Despite training with orders of magnitude less resource, RECLIP correctly matches many visual concepts with texts.
**arXiv:2306.10033** · Investigating Reproducibility at Interspeech Conferences: A Longitudinal and Comparative Perspective · Mohammad Arvan, A. Seza Doğruöz, Natalie Parde · 2023-06-07 · http://arxiv.org/abs/2306.10033v2
# Investigating Reproducibility at Interspeech Conferences: A Longitudinal and Comparative Perspective

###### Abstract

Reproducibility is a key aspect for scientific advancement across disciplines, and reducing barriers for open science is a focus area for the theme of Interspeech 2023. Availability of source code is one of the indicators that facilitates reproducibility. However, less is known about the rates of reproducibility at Interspeech conferences in comparison to other conferences in the field. In order to fill this gap, we have surveyed 27,717 papers at seven conferences across speech and language processing disciplines. We find that despite having a close number of accepted papers to the other conferences, Interspeech has up to 40% less source code availability. In addition to reporting the difficulties we have encountered during our research, we also provide recommendations and possible directions to increase reproducibility for further studies.

Mohammad Arvan,\({}^{1}\) A. Seza Doğruöz,\({}^{2}\) Natalie Parde\({}^{1}\)

\({}^{1}\)University of Illinois Chicago, USA
\({}^{2}\)Ghent University, Belgium
[email protected], [email protected], [email protected]

Footnote †: Published at Interspeech'23, DOI: 10.21437/Interspeech.2023-2252

**Index Terms**: open science, reproducibility, speech, interdisciplinary perspectives, Interspeech'23 theme.

## 1 Introduction

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has recently released its recommendations on open science [1]. These recommendations are based on the consideration that science's efficiency, effectiveness, and impact can be improved by making scientific knowledge, data, and information openly available, accessible, and reusable.
Considering that the theme of this year's Interspeech is to address barriers that could prevent other researchers from adopting a technique, or users from benefitting from it, we find this an excellent opportunity to discuss the importance of open science within the Interspeech community and highlight the ways it can mitigate such barriers. Part of these recommendations is educating researchers on ways to align their research with open science practices. Fortunately, unlike in many other disciplines, access to the scientific papers published at Interspeech is not restricted behind a paywall. Nevertheless, there may be other concerns, such as the reproducibility of the results reported in these papers. Since published papers may not include all the details required to reproduce the results, the availability and quality of the software and data used in the experiments are also important factors to consider. The growing concerns regarding this matter, often referred to as the "reproducibility crisis" [2], have led to several action items to address this issue. To address this concern, the standardization of research- and publication-oriented best practices and the pursuit of reproducible research have become increasingly central within the machine learning and speech and language processing communities. Several top-tier conferences have promoted _reproducibility challenges_ as part of workshops [3] or even initiatives in which authors can earn badges for their papers [4]. Others have produced comprehensive _reproducibility studies_ to draw attention to the matter and suggest promising directions for improving conditions [5]. Collectively, these efforts have increased awareness of the importance of research reproducibility. Evidence suggests that they have also resulted in real gains in the rate at which authors release their source code or data [6]. Thus far, the extent to which these efforts have carried over to the speech processing community seems minimal.
These efforts have been limited to asking authors to fill out a reproducibility checklist during the submission process. Additionally, this year's theme suggests that authors report performance metric distributions in addition to averages. In this work, we set out to systematically understand and report upon the state of reproducibility at Interspeech, the world's premier conference on speech processing, compared to peer conferences in the machine learning and natural language processing communities. Our primary contribution is to reveal current trends and promote recommendations for reducing identified barriers in the future, fostering more open science in the Interspeech community. Although it is beyond the scope of this paper to comprehensively examine all facets of reproducibility within the community, we argue that our study captures a reasonable portrait of Interspeech reproducibility as a whole, as evidenced by relevant findings in comparable studies in machine learning and NLP. ## 2 Related Work Although research towards building and sustaining reproducible and open science is gradually becoming more standardized, work to date on this topic has been approached from a wide variety of perspectives. Because of this and also because of the novelty of this topic as research in its own right, different terms and definitions are often used interchangeably in the context of reproducibility. To avoid confusion and misunderstanding, we begin this section by briefly defining the terms used in our own work; namely, _reproducibility_, _replicability_, and _open science_. We follow these definitions with a brief summary of the evolution of work towards reproducibility and open science to date, framed within the broader AI community. We adapt the definitions originally proposed by The International Vocabulary of Metrology (VIM) [7], highlighted by Belz et al. [5]. 
**Reproducibility** Within the VIM framework, _reproducibility_ is defined as measurement precision under a set of conditions. Using precision enables the quantification of reproducibility by standard statistical measures such as the coefficient of variation (CV), computed as the ratio between the standard deviation \(\sigma\) and mean \(\mu\) of measured scores: \(\frac{\sigma}{\mu}\). Specifically, when computing these statistical measures the conditions are all the variables affecting an observed result, including but not limited to source code, runtime environment, data, model, and hardware. **Replicability** Following the definition of reproducibility, we define _replicability_ (or _repeatability_) as reproducibility under the same conditions. Although reproducibility may be framed as either a binary (yes/no) variable or a numeric score, the latter quantification of a reproduction attempt (most often using CV) may provide more information and be less subjective. In a recent study [8], only 14.03% of 513 reproduction score pairs were found to be the same, highlighting the importance of more fine-grained quantitative measures in understanding the extent of reproducibility [9, 10]. **Open Science** UNESCO [1] defines open science as "an inclusive construct that combines various movements and practices that aim to make multilingual scientific knowledge openly available, accessible, and reusable for everyone." Many venues have encouraged researchers to share more of their work over the last few years [11, 12, 13, 14, 4]. Proposed checklists such as the ML Reproducibility Checklist [15] and the ML Completeness Checklist [16] ask authors to provide specification of dependencies, training and evaluation code, pre-trained models, and proper documentation on how to run the provided code. However, recent research suggests that this information may not be enough to support adequate reproducibility [17, 6]. 
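The coefficient of variation introduced above is straightforward to compute from repeated runs of an experiment. A minimal sketch (the run scores below are illustrative, not taken from any surveyed paper):

```python
import statistics

def coefficient_of_variation(scores):
    """Quantify reproducibility across repeated runs as sigma / mu.

    Lower values indicate that repeated measurements agree more closely,
    i.e. the result is more reproducible under the given conditions.
    """
    mu = statistics.mean(scores)
    sigma = statistics.stdev(scores)  # sample standard deviation
    return sigma / mu

# Accuracy scores from five reruns of the same experiment (made-up numbers).
runs = [0.912, 0.907, 0.915, 0.909, 0.911]
print(f"CV = {coefficient_of_variation(runs):.4f}")
```

A CV of zero would mean every rerun produced exactly the same score; identical reruns are the replicability case described above.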
One of the early assumptions with regard to reproducibility was that simply sharing source code, data, and hyperparameters would suffice to achieve reproducibility. More recent studies [6, 18, 17] have shown that in practice, however, this is not the case. In fact, even changing the random seed could have a drastic impact on the final results [19]. Controlling the conditions affecting results (and understanding what those conditions are) is extremely important to achieve reproducibility. Even with open source code, data, and hyperparameters, changes to other more innocuous or hidden experimental conditions may greatly influence reproducibility. For example, one of the factors missing from the original definition of reproducibility is the runtime environment. Listing the dependencies and the versions of the dependencies used when reporting results may foster greater reproducibility, but it also may not be sufficient for achieving reproducible results since packages may be updated or even become deprecated over time. Due to the nondeterministic nature of deep learning models, even the hardware used for a study can cause variation in the results. Fixing the random seed may fall short of addressing this concern as well, since a seed only guarantees the same sequence of numbers when used in the _same_ environment. Ultimately, capturing all conditions involved in scientific experiments is an ever-evolving and challenging task. Currently, the best solution is to provide a self-contained Docker container or a virtual machine image that can be used to reproduce the results. While this concept has not been fully adopted by the machine learning or NLP communities, it is considered best practice in other computing research areas such as systems and software engineering [20, 21]. While self-containment might not seem necessary at first glance, it prolongs the lifetime of the artifacts by reducing the chance of reliance on unavailable dependencies. 
## 3 Methodology Considering the necessity of access to research artifacts to most productively reproduce the results of computing research, including that within the speech processing community, we set out to study the availability of research artifacts for papers published at Interspeech. The papers published at Interspeech are available through the isca-speech.org website. This portal provides basic information regarding speech processing papers listed in the conference proceedings, as well as the papers themselves. At the time of writing, the website provides information regarding 35,050 papers published at 344 conferences. We limit our analysis to papers published at Interspeech from 2017 to 2022. Given the fast pace of research in speech processing specifically and computing more broadly, we selected a five-year window to balance our competing interests in studying reproducibility over time while recognizing that the Interspeech community even just five years ago was in some ways very different from how it is today. We believe that 2019, the year that the NeurIPS 2019 Reproducibility Program started, was a turning point regarding the availability of research artifacts. Given our five-year window, we were able to include data that highlighted trends both prior to and following 2019. While the ISCA portal is useful for researchers who are searching for papers, it does not provide any information about the availability of research artifacts; hence, retrieving this information originally required us to manually search each paper. To expedite and streamline this process, we wrote a Python script to download the PDF files. Then, we used the PyPDF2 library to extract the text from the PDFs. Following this, we performed a simple keyword search ("github.com") to determine whether the paper contained any information regarding released software artifacts. 
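The final filtering step of the pipeline described above can be sketched as follows. The helper name is ours, the sample strings are invented, and the download and PDF-extraction steps (performed with PyPDF2 in the actual pipeline) are omitted; cutting the body at the word "references" is a rough stand-in for excluding the references section to reduce false positives:

```python
def find_code_link(paper_text: str) -> bool:
    """Return True if the paper body mentions github.com.

    The text after the last occurrence of 'references' is dropped so that
    repositories cited in the bibliography (rather than released by the
    authors) are less likely to count as released artifacts. In the full
    pipeline, paper_text comes from PyPDF2 text extraction of the PDF.
    """
    body = paper_text.lower()
    idx = body.rfind("references")
    if idx != -1:
        body = body[:idx]
    return "github.com" in body

released = "We release our implementation at github.com/example/repo.\nReferences\n[1] Prior work."
cited = "We build on prior systems.\nReferences\n[1] See github.com/other/tool."
print(find_code_link(released), find_code_link(cited))  # True False
```

This is of course only a heuristic: a GitHub link in the body may point at a third-party tool rather than released artifacts, a limitation discussed in the Limitations section.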
GitHub is by and large the most popular website for hosting open source code, making its presence in a paper a reasonable first clue towards source code availability. The references section of the paper was excluded from the search to reduce false positives. To provide a point of comparison, we also collected the same information for major NLP and machine learning conferences (considered peer conferences to Interspeech in adjacent research fields). Unlike isca-speech.org, the aclanthology.org portal (which hosts the papers from all top-tier natural language processing conferences) contains "code" and "data" fields that indicate whether the paper contains a link to the source code and the data. We use these fields to determine the availability of research artifacts for these conferences. We limit our NLP search to papers published at ACL, EMNLP, NAACL, LREC, and COLING. For our machine learning search, we attempted to follow a similar process for papers published at NeurIPS. This conference utilizes the openreview.net website to host their proceedings. The website provides an API to download accepted papers published at conferences. NeurIPS also provides a paper portal that provides a similar experience to the ACL Anthology. ## 4 Results We present our primary results in Figure 1. The figure shows the percentage of papers with research artifacts for each conference. We observe that the percentage of papers with research artifacts is higher in NeurIPS and nearly all of the NLP conferences than in Interspeech. On a positive note, all of the conferences have an upward trend for research artifact submission. In the case of NeurIPS, ACL, EMNLP, and NAACL, the percentage has surpassed 50%. The gap between research artifact availability in NeurIPS and Interspeech is nearly 40%. In fact, except for LREC, all other conferences have a higher percentage of papers with research artifacts than Interspeech. 
We demonstrate the total number of papers accepted at each conference in Figure 2. Interestingly, in addition to being the front-runner in terms of its percentage of papers with research artifacts available, NeurIPS also has the greatest overall volume of papers accepted. The rest of the conferences are relatively close in terms of the total number of papers that they accepted. The data presented was collected by scraping different portals. The issue with this approach is that if the structure or the design of the page changes, the entire collection process must be updated. Additionally, APIs often come with a rate limit to avoid overloading the backend servers. Without having access to an API, we had to estimate our own limits to avoid overloading servers. While openreview.net comes with an API, there does not seem to be a way of differentiating between papers that have been accepted and those that have been rejected. Hence, we had to manually check the status of each paper. This process was time-consuming and error-prone. Had we been able to access that information through an API, we could have collected more data regarding machine learning conferences as well (e.g., by including ICML or ICLR), increasing the comprehensiveness of our analysis. ## 5 Discussion ### Where are we now? Based on our results, we can conclude that the percentage of papers with research artifacts in Interspeech is lower than that observed in other conferences. This is a concerning issue, as it may suggest a lack of reproducibility in research published at Interspeech. We believe this could create unnecessary barriers for future researchers to adapt and build upon the work presented at this conference. This problem is not easily solved, as it requires a change in established research practices and acceptance from the community. 
One may argue that it is harder to share research artifacts in the field of speech processing than other fields, due to issues such as complexity, artifact size, or even privacy concerns. However, we believe that the community should strive to make research artifacts available and that other communities have dealt with many parallel concerns; their solutions may offer guidance on how to address these concerns in the Interspeech community. We hope that this work will raise awareness about the importance of reproducibility in Interspeech. The isca-speech.org portal provides a useful resource for researchers to find papers. However, it does not provide any information about the availability of research artifacts. We believe that this portal should be updated to include this information. This would make it easier for researchers to find papers with research artifacts. ### What is the next step? By identifying where we should be, we can provide a future direction. In an ideal scenario, future researchers would have access to fully functional self-contained artifacts alongside the report published by the original authors. This availability would facilitate further scrutiny and review. Furthermore, the research artifacts would contain details that may have been omitted in the paper itself, providing a comparison point in the event that other researchers chose to re-implement and reuse such work. Similar to the peer-review evaluations of papers, program committees can evaluate the reproducibility of papers that rely on empirical evidence. The results could then be published in the conference proceedings, and this would be a significant step towards open science initiatives. The availability of reports reviewing and investigating the reproducibility of previously published work would be one of the most valuable outcomes of this process. However, we are not there yet. Forcing such a change is not practical. Instead, it is more feasible to take small steps toward this goal. 
Figure 1: Percentage of papers with research artifacts at the selected conferences over the years.

Figure 2: Total number of papers accepted at the selected conferences over the years.

When the ideal case scenario is not feasible, the unavailability of research artifacts may cause complications for follow-up studies. In particular, if the reproduced and original results do not match with each other, it may be unclear whether this mismatch is because of missing features, bugs in the new implementation, or the irreproducibility of the original results. With this lack of clarity, debugging the real cause of any mismatched results may be quite challenging. Hence, increasing the quantity and improving the quality of research artifacts is a good starting point for the Interspeech community. These two aspects can be addressed in parallel. We emphasize that artifacts **should** be self-contained since the previous lack of emphasis on self-containment has rendered many already-released research artifacts obsolete [17]. Using self-contained artifacts, researchers could assess the reproducibility of previously-reported results under the same conditions. Following this, finding conditions that must be controlled to facilitate effective reproducibility should be considered an important area of research. This process is also more aligned with modern software engineering principles, where testing and reviewing source code is a common practice. It also underlines the importance of good engineering practices in academia. It is understandable that researchers may not be able to share their data or research artifacts due to GDPR and confidentiality issues. These restrictions are out of the control of researchers and should be considered when evaluating the reproducibility of results. Nevertheless, sharing data sources and code should be encouraged as much as possible to achieve open science goals. 
Researchers who face issues about sharing their own data could perhaps consider including supplementary experiments performed on publicly available datasets (when/if possible) to strengthen their claims and foster comparison with prior work. Conference organizers could also provide support through encouraging reproducibility reports. As an example of involvement from conference leadership, ACL and EMNLP have included reproduction studies as contributions in their calls for papers. Conducting reproducibility evaluations may result in many failed attempts. While some of these failures may actually be due to bugs or other issues, many of them may also be due to the lack of self-containment or proper documentation. The reproducibility of a submitted research paper should be viewed as another dimension of its quality during the evaluation process. However, it is still challenging to assess the validity of research results. Similar to the peer-review evaluations, artifact evaluation is not free of flaws. A failed reproduction attempt should not single-handedly result in a paper's dismissal. However, the additional scrutiny of evaluating the potential for reproducibility of research results during the submission process would help the field progress towards open science goals in the long run. We summarize our recommendations for the Interspeech community below as follows: * Researchers should be encouraged to share their research artifacts. * isca-speech.org could be updated to include information about the availability of research artifacts (and we are willing to share our suggestions for improvements). * Interspeech organizers could consider encouraging reproducibility evaluations for accepted papers that rely on empirical evidence. * Interspeech organizers could encourage reproduction studies as a contribution or as a separate track for the annual conference. 
We recognize that there may be numerous barriers to implementing any and all of these recommendations, and we encourage incremental steps toward them as well. Nevertheless, we hope that these recommendations propel the Interspeech community forward toward producing more reproducible, open science. ## 6 Limitations While we have done our best to ensure the accuracy of our results, there are some limitations to our work. First, we have used a simple keyword search to determine whether papers not hosted on the ACL Anthology contain links to research artifacts. This approach may have led to false positives. For example, a paper may contain a link to a GitHub repository that is not related to the research artifacts, such as a third-party dataset or tool. This may have caused an overly optimistic estimation of the percentage of papers with research artifacts published at Interspeech and NeurIPS. At the same time, since these venues have no formal procedures to include a link to the source code within a paper, some published papers may have publicly released their source code but not included any reference to that code within their paper. Unfortunately, except through mass direct inquiries, there does not seem to be a way to collect such data in a more precise manner. Second, we have used the aclanthology.org website to determine the availability of research artifacts for NLP conferences. We suspect that the authors of some papers may not have included the links to their research artifacts on their paper's landing page on the aclanthology.org website. Hence, we may have underestimated the percentage of research papers with publicly available research artifacts in NLP conferences. Finally, this work reports on the availability of research artifacts for papers published at Interspeech, which is not necessarily broadly representative of the full speech processing community. 
Our focus on this conference was in part introspective, and in part a direct response to the conference theme of inclusive spoken language science and technology. Specifically, we seek to systematically understand one aspect of research inclusivity by studying reproducibility, for which a lack thereof poses a barrier to open science. ## 7 Conclusion In this work, we provided a high-level overview of the state of research artifact availability at Interspeech, NeurIPS, and several NLP conferences. We found that the percentage of papers with research artifacts published at Interspeech is lower than nearly all peer conferences published at selected NLP and machine learning conferences. We hope our research findings will inform the Interspeech community about the current situation around reproducibility and inspire them to take the necessary steps towards improving the quantity and quality of research artifacts in their research and at Interspeech conferences. Doing so will remove unnecessary barriers for future researchers to build upon the work presented at Interspeech and benefit the authors whose results will be reproduced. ## 8 Acknowledgements We thank the anonymous reviewers for their helpful feedback, which we incorporated in the final version of this manuscript.
2310.04978
TopicAdapt- An Inter-Corpora Topics Adaptation Approach
Topic models are popular statistical tools for detecting latent semantic topics in a text corpus. They have been utilized in various applications across different fields. However, traditional topic models have some limitations, including insensitivity to user guidance, sensitivity to the amount and quality of data, and the inability to adapt learned topics from one corpus to another. To address these challenges, this paper proposes a neural topic model, TopicAdapt, that can adapt relevant topics from a related source corpus and also discover new topics in a target corpus that are absent in the source corpus. The proposed model offers a promising approach to improve topic modeling performance in practical scenarios. Experiments over multiple datasets from diverse domains show the superiority of the proposed model against the state-of-the-art topic models.
Pritom Saha Akash, Trisha Das, Kevin Chen-Chuan Chang
2023-10-08T02:56:44Z
http://arxiv.org/abs/2310.04978v1
# TopicAdapt- An Inter-Corpora Topics Adaptation Approach ###### Abstract Topic models are popular statistical tools for detecting latent semantic topics in a text corpus. They have been utilized in various applications across different fields. However, traditional topic models have some limitations, including insensitivity to user guidance, sensitivity to the amount and quality of data, and the inability to adapt learned topics from one corpus to another. To address these challenges, this paper proposes a neural topic model, TopicAdapt, that can adapt relevant topics from a related source corpus and also discover new topics in a target corpus that are absent in the source corpus. The proposed model offers a promising approach to improve topic modeling performance in practical scenarios. Experiments over multiple datasets from diverse domains show the superiority of the proposed model against the state-of-the-art topic models. ## 1 Introduction To effectively and quickly comprehend and navigate a big text corpus, it is important to mine a set of diverse and cohesive topics automatically. Topic models [1, 19] are statistical tools for detecting latent semantic themes in a text collection. These approaches have gained popularity for text mining [18, 2] and information retrieval tasks [12, 13] spanning a wide range of applications in fields such as science, humanities, business, and other related areas [1]. Despite the effectiveness of standard topic models for understanding latent topics in a large corpus, they suffer from several drawbacks. Firstly, the traditional topic [1, 1] models do not consider user guidance in learning the topics. For example, users may already know the name of topics but want to know the corpus-specific representation of that topic. Secondly, the performance of topic models is often sensitive to the amount of data and the quality of the data, and a small corpus may not provide enough information to identify the underlying topics accurately. 
One possible way to handle this is to adapt a pre-trained topic model from a related corpus to the target corpus. It leverages the knowledge learned from a large source corpus to improve the topic modeling performance on the small target corpus. However, in traditional topic models, there is no specific way to adapt learned topics from one corpus to another. Moreover, not all the topics of the related source domain are actual topics of the target domain, and there may also exist new topics in the target corpus different from the source corpus. For example, the source domain may cover topics such as "politics" and "sports", whereas the target may have a new topic, "entertainment", different from the source domain. Therefore, we develop a model named **TopicAdapt** that can dynamically adapt relevant topics from the source domain by transfer learning and can also discover new topics available in the target domain but absent in the source domain. To evaluate the performance of the proposed model, we conduct both quantitative and qualitative evaluations over multiple datasets from diverse domains. The experimental results show the superiority of the proposed model against the state-of-the-art topic models. ## 2 Methodology ### Problem Statement We propose the problem of adapting topics from one corpus (i.e., domain) to another. As input, it takes a target corpus \(\mathcal{D}\) and the topic-word distribution \(\beta^{r}\) (also called the reference representation) from a source corpus for \(k\) well-defined topics with their surface names \(\mathcal{C}\). As output, we want to learn a topic-word distribution \(\beta\) for the target corpus that best represents the corpus given the well-known topics. It also aims to generate new topics from the target corpus without any supervision or minimal supervision, such as using only the topic surface names. Similar to an existing topic model named Coordinated Topic Model (CTM) Akash et al. 
(2022), we can get a set of well-defined topics with their representation. More specifically, we use labeled LDA Ramage et al. (2009) to get the reference representation (more details in Akash et al. (2022)). To solve our problem, we have chosen to use the Embedded Topic Model (ETM) Dieng et al. (2020) as the foundation of our proposed model, which is an extension of a recent topic model named Coordinated Topic Modeling (CTM) Akash et al. (2022). We have several compelling reasons for this choice. Firstly, ETM is an excellent choice because it effectively combines the strengths of neural topic modeling and word embedding when modeling a corpus. Secondly, using pre-trained word embeddings enables us to map words in a common vector space, even if those words are not present in the target corpus vocabulary. Finally, we can impose our problem-specific requirements by applying regularization techniques to the objectives of ETM. Similar to CTM, the proposed framework uses topic-level and document-level supervision. Moreover, our model also incorporates word-level supervision for having topics comprising semantically similar words. As part of our problem, we are given a topic-word distribution \(\beta^{r}\) for some known topics, along with their surface names \(\mathcal{C}\). We aim to adapt these topics for a target corpus \(\mathcal{D}\) and discover new topics. To achieve this, we have modified the ETM model to incorporate supervision from \(\mathcal{C}\) and \(\beta^{r}\) as guidance. However, we cannot directly use \(\beta^{r}\) in the ETM model for the target corpus due to the vocabulary mismatch problem with the reference corpus. Therefore, we have modified the original ETM model structure, similar to CTM, to learn a topic-word distribution with vocabulary dimensions comparable to \(\beta^{r}\). Additionally, we have generated pseudo-labeled documents in the target corpus using \(\mathcal{C}\) to enhance document modeling in ETM. 
Lastly, we have also used \(\mathcal{C}\) to bias the topic distribution and create topics consisting of semantically closer words. The overview of our model is shown in Figure 1. ### TopicAdapt **Topic-level Supervision:** A set of topics with a reference representation \(\beta^{r}\) is employed as source topics to guide the generation of a target representation \(\beta\) that best captures the characteristics of the given \(\mathcal{D}\). The reference representation may be obtained from sources such as a large annotated corpus in a similar domain. However, a key issue arises in using \(\beta^{r}\) directly as guidance, as it cannot be assumed that \(\beta^{r}\) and \(\beta\) share the same vocabulary. To solve this, following CTM Akash et al. (2022), an indirect method of supervision called "reference projection" is employed. To elaborate further, in conjunction with the parameter \(\beta\), the projected representation \(\tilde{\beta}^{r}=f(\tilde{\rho}^{\top}\alpha)\) is computed, where \(\tilde{\rho}\) denotes the embedding matrix associated with the lexicon upon which the reference \(\beta^{r}\) is constructed. Finally, \(\tilde{\beta}^{r}\) is used to indirectly guide \(\beta\) by minimizing the following: \[R_{\beta}=\frac{1}{k}\sum_{j=1}^{k}KL(\beta_{j}^{r},\tilde{\beta}_{j}^{r})\] **Document-level Supervision:** Similar to CTM Akash et al. (2022), in this study, we utilize \(\mathcal{C}\) to obtain \(\theta^{t}\) for document-level supervision. To achieve this, we employ a pre-trained textual entailment model Liu et al. (2019). The model takes an input document \(d\) as the "premise," creates a "hypothesis" by filling a template with a surface name \(c_{k}\in\mathcal{C}\), and produces a probability \(p_{dk}\) representing the extent to which the premise entails the hypothesis. This distribution is then used to guide the document-topic distribution. We directly utilize the generated probabilities \(p_{dk}\) as a soft label for \(\theta^{t}_{dk}\). 
Figure 1: Proposed Architecture

Soft labeling offers the opportunity to implement a technique proposed by Bhatia et al. (2016), which emphasizes the high-probability label while diminishing the low-probability ones. To accomplish this, the method squares and normalizes the \(p_{dk}\) values in the following manner: \[\theta^{t}_{dk}=\frac{p^{2}_{dk}/f_{k}}{\sum_{k^{\prime}}p^{2}_{dk^{\prime}}/f_{k^{\prime}}},\quad f_{k}=\sum_{d\in D}p_{dk}\] The \(\theta^{t}\) value is employed to offer supervision at the document level by reducing the following: \[R_{\theta}=\frac{1}{|D|}\sum_{d\in D}KL(\theta^{t}_{d},\theta_{d}).\] **Word-level Supervision:** The distribution of topics over vocabulary words is such that the most relevant words in a given topic are semantically related to the topic's name. To leverage this observation, pretrained word embeddings are employed to obtain embeddings for all vocabulary words. Subsequently, cosine similarity between the surface name of a topic and vocabulary words is used to generate a topic conditional probability distribution over all the vocabulary words, \(\gamma\). \(\gamma\) serves as a guide for constructing the topic-word distribution. \[R_{\gamma}=\frac{1}{k}\sum_{j=1}^{k}KL(\gamma_{j},\beta_{j})\] #### 2.2.1 Training We unify topic-level, document-level, and word-level supervision into one model by constraining the objective of our base model as follows: \[\mathcal{L}(\theta)=ELBO-\gamma_{\beta}R_{\beta}-\gamma_{\theta}R_{\theta}-\gamma_{\gamma}R_{\gamma}, \tag{1}\] where \(\gamma_{\beta}\), \(\gamma_{\theta}\) and \(\gamma_{\gamma}\) are the regularization weights for \(R_{\beta}\), \(R_{\theta}\) and \(R_{\gamma}\) respectively. Maximizing Eq. 
1 ensures the following objectives: (1) The ELBO part enforces the model to explain \(D\) by reducing the reconstruction error; (2) \(R_{\beta}\) enforces the model to move \(\beta\) in the direction of \(\beta^{r}\); (3) \(R_{\theta}\) encourages the model to maintain the global semantics of given topics in \(\beta\) by enforcing \(\theta\) and \(\theta^{t}\) as similar as possible; and (4) \(R_{\gamma}\) enforces topic words to be similar to relevant words in the vocabulary. ## 3 Experiments ### Data We use three datasets from news articles: 20 Newsgroup corpus 1, New York Times annotated corpus Sandhaus (2008), AG's News dataset Yang et al. (2016). For the review sentiment domain, we use the Yelp restaurant review dataset and IMDB Movie Review dataset. For academic articles, we use: Arxiv abstracts 2, Microsoft Academic Graph AI article abstracts Sinha et al. (2015). See Appendix A.2 for more details. Footnote 1: [http://qwone.com/](http://qwone.com/) jason/20Newsgroups/ Footnote 2: [https://www.kaggle.com/Cornell-University/arxiv](https://www.kaggle.com/Cornell-University/arxiv) ### Baselines We compare our model with the following baselines. GLDA Jagarlamudi et al. (2012), Sup+LLDA Ramage et al. (2009), ZS+LLDA Ramage et al. (2009), ACorEx, AVIAD Hoang et al. (2019), KeyETM Harandizadeh et al. (2022), ECTM Saha Akash et al. (2022). The details of the baselines can be found in Appendix A.3. ### Topic Quality Evaluation We use the following three quantitative measurements to evaluate the quality of inferred topics: Topic coherence (TC), Topic diversity (TD), and Topic Quality (TQ). Details about these metrics can be found in the Appendix A.4. We first show the quantitative results of topic quality in Table 1. The results suggest that, for news and sentiment domains, TopicAdapt generates more coherent and interpretable topics than other baselines. 
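As context for the TD scores reported from these experiments, one common formulation of topic diversity (used, e.g., in the ETM line of work) is the fraction of unique words among the top-\(n\) words of all topics. The paper's exact metric definitions are given in its Appendix A.4, so the following is only an illustrative sketch with a made-up pair of topics:

```python
def topic_diversity(topics, top_n=25):
    """Fraction of unique words among the top-n words of every topic.

    1.0 means no word is shared between topics' top-word lists; values
    near 0 indicate highly redundant topics. This mirrors the common
    ETM-style definition and may differ from the paper's exact metric.
    """
    top_words = [w for topic in topics for w in topic[:top_n]]
    return len(set(top_words)) / len(top_words)

# Two toy topics that share one of their top-5 words ("coach").
topics = [
    ["game", "team", "season", "player", "coach"],
    ["election", "vote", "party", "senate", "coach"],
]
print(topic_diversity(topics, top_n=5))  # 9 unique words out of 10 -> 0.9
```

Coherence-style scores (TC) additionally require co-occurrence statistics from a reference corpus, which is why they are harder to sketch in a few lines.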
In Table 2, we show two randomly selected topics from each dataset and the top-5 words under each topic from the reference topic words, ECTM, and TopicAdapt. Words that we found to be irrelevant to the corresponding topic are marked with (\(\times\)) in Table 2. The table with results from all baselines can be found in Appendix A.5. In comparison to the baselines, the topic terms generated by our method are generally pertinent and easy to understand. We also note that the topics created by AcorEx have respectable interpretability (see Appendix A.5). However, rather than adapting to the target corpus, the topics AcorEx produces converge strictly toward the prior representation. Our approach, in contrast, tends to capture the elements of the given themes that are unique to the target corpus. AVIAD, on the other hand, has the opposite problem: its topics vary so widely that they are very challenging to comprehend. KeyETM, whose base model (ETM) is similar to ours, performs well when the target corpus is balanced.
For instance, the keyETM works well since the \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{**20News**} & \multicolumn{2}{c|}{**NYT**} & \multicolumn{2}{c|}{**Yelp-Senti**} & \multicolumn{2}{c}{**Arxiv-AI**} \\ & TC & TD & TC & TD & TC & TD & TC & TD & TQ \\ \hline GLDA & 0.25 & 0.87 & 0.22 & 0.26 & 0.85 & 0.22 & 0.08 & 0.06 & 0.09 & 0.39 & 0.49 \\ Sup+LLDA & 0.23 & 0.79 & 0.18 & 0.20 & 0.63 & 0.12 & 0.06 & 0.70 & 0.04 & 0.04 & 0.46 & 0.02 \\ ZS+LLDA & 0.23 & 0.80 & 0.18 & 0.17 & 0.65 & 0.11 & 0.06 & 0.76 & 0.05 & 0.14 & 0.80 & 0.11 \\ AcorEx & 0.25 & 0.28 & 0.25 & **0.27** & **1.00** & 0.27 & 0.20 & 0.70 & 0.08 & 0.05 & -0.03 \\ AVAD & 0.13 & **1.00** & 0.26 & **1.00** & 0.26 & 0.26 & 0.01 & **1.00** & -0.01 & -0.34 & **1.00** & 0.34 \\ KeyETM & 0.26 & **1.00** & 0.26 & 0.19 & 0.89 & 0.17 & 0.07 & 0.92 & 0.07 & 0.04 & **1.00** & 0.04 \\ \hline ECTM & **0.30** & **1.00** & **0.30** & **0.28** & 0.97 & **0.27** & **0.09** & **1.00** & **0.09** & **0.15** & 0.97 & **0.15** \\ \hline TopicAdapt & **0.30** & **1.00** & **0.30** & **0.28** & 0.97 & **0.27** & **0.11** & **1.00** & **0.11** & 0.12 & 0.93 & 0.12 \\ \hline \end{tabular} \end{table} Table 1: Quality Measures of Topic dataset 20Newsg is relatively balanced. Our model consistently outperforms the competition because it benefits from both topic-level supervision and document-level supervision from existing knowledge sources to make the topics adjusted to the target corpus while also maintaining the semantics of the given topic names. Moreover, the words from each inferred topic are more semantically related to each other than other baselines, thanks to our word-level supervision. ### Case studies **Case study 1**: From Table 3, we can see our model can generate new topics from the target corpus without supervision from the source corpus. We infer the topic names by observing the top 5 words of each topic. 
**Case study 2:** For this experiment, we use the AG News dataset as the source corpus and the NYT corpus as the target domain. In particular, we selected the period of the attack on the Twin Towers from the New York Times corpus to investigate whether the model can adapt given topics from the source corpus as well as find new topics from the target corpus. From Table 4, we can see that the model is able to adapt relevant words for each topic in both cases. For the topics 911 and 9/11, we used minimal supervision by providing the topic surface names to the model. 911 mostly relates to medical emergency cases, whereas 9/11 refers to the terrorist attack. The model is able to identify the most relevant words from the target documents (NYT corpus on 9/11/2001). ## 4 Conclusion In this paper, we propose the problem of adapting topics from a source corpus to a target corpus while also identifying new topics for the target corpus. Unlike recent work on coordinated topic modeling, which only uses well-defined topics to describe a new corpus, we also mine new topics that represent the target corpus. For this purpose, we design a method named TopicAdapt, which is based on an embedded topic model (Dieng et al., 2020) and uses three levels of supervision, namely word-level supervision, topic-level supervision, and document-level supervision.
An extensive experiment over a \begin{table} \begin{tabular}{l l} \hline \hline Topic Name & Top 5 Words \\ \hline \hline 9/11 & terrorist, terror, terrorism, militant, terrorists \\ 911 & doctor, hospital, medical, physician, nurse \\ \hline \hline \end{tabular} \end{table} Table 4: Case study 2- Topic name as minimal supervision for target corpus specific topic \begin{table} \begin{tabular}{l|c c|c|c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**20Newsg**} & \multicolumn{2}{c|}{**NYT**} & \multicolumn{2}{c|}{**VUp-Senti**} & \multicolumn{2}{c}{**Arvok-AI**} \\ & sports & politics & business & technology & good & bad & ML & IR \\ \hline \multirow{6}{*}{\begin{tabular}{l} Reference \\ Topic \\ \end{tabular} } & night & leader & stock & software & song & waste & machine & retrieval \\ & play & diction & sale & technology & music & arful & learning & document \\ & sport & amack & share & service & musical & territe & algorithm & query \\ & player & apl & phillon & internet & wonderful & boring & optimization & search \\ & boat & travel & fall & launch & dance & poor & problem & base \\ \hline \multirow{6}{*}{\begin{tabular}{l} AcetX \\ \end{tabular} } & point & force & billion & release (\(\gamma\)) & good & bad & optimization & search \\ & play & country & business & technology & bear (\(\gamma\)) & money (\(\gamma\)) & gradient & document \\ & player & amack & boy & phone & beautiful & territe & convergence & query \\ & league & military & stock & time (\(\gamma\)) & music (\(\gamma\)) & poor & stochastic & retrieval \\ & boat & political & profit & space & sound (\(\gamma\)) & water & guitar (\(\gamma\)) & semantics \\ \hline \multirow{6}{*}{\begin{tabular}{l} AVJAD \\ \end{tabular} } & \begin{tabular}{l} debantic (\(\gamma\)) \\ \end{tabular} (\(\gamma\)) & tragedy (\(\gamma\)) & \begin{tabular}{l} (\(\gamma\)) \\ \end{tabular} (\(\gamma\)) & pressure (\(\gamma\)) & \begin{tabular}{l} traditional (\(\gamma\)) \\ \end{tabular} (\(\gamma\)) & email (\(\gamma\)) & 
bad & dr \\ & \begin{tabular}{l} problem (\(\gamma\)) \\ \end{tabular} (\(\gamma\)) & policy & zero (\(\gamma\)) & enlargement (\(\gamma\)) & snow (\(\gamma\)) & water & analytically (\(\gamma\)) & healthy (\(\gamma\)) \\ & howe (\(\gamma\)) & serutin & earning & \(\text{abs}(\gamma)\) & filling & management(\(\gamma\)) & certify (\(\gamma\)) & processing (\(\gamma\)) \\ & play & freedom & europice & top (\(\gamma\)) & bague (\(\gamma\)) & yell & arm (\(\gamma\)) & patient (\(\gamma\)) \\ & all & mine (\(\gamma\)) & acquire & chechynos (\(\gamma\)) & seaweed (\(\gamma\)) & acknowledge (\(\gamma\)) & pruning & chrs \\ \hline \multirow{6}{*}{\begin{tabular}{l} KeyETM \\ \end{tabular} } & game & people & year (\(\gamma\)) & company (\(\gamma\)) & good & food (\(\gamma\)) & function & translation (\(\gamma\)) \\ & team & government & percent & bank (\(\gamma\)) & place (\(\gamma\)) & order (\(\gamma\)) & estimation (\(\gamma\)) & user \\ & season & person (\(\gamma\)) & market & Japan (\(\gamma\)) & great & service (\(\gamma\)) & distribution (\(\gamma\)) & search \\ & play & amretain & time (\(\gamma\)) & data (\(\gamma\)) & time (\(\gamma\)) & ear (\(\gamma\)) & parameter (\(\gamma\)) & amonetion \\ & win & law & month (\(\gamma\)) & snata (\(\gamma\)) & love & restaurant (\(\gamma\)) & efficient (\(\gamma\)) & point (\(\gamma\)) \\ \hline \multirow{6}{*}{\begin{tabular}{l} ECTM \\ \end{tabular} } & game & government & company & space & great & waste & optimization & retrieval \\ & team & war & bank & size & music (\(\gamma\)) & arful & convergence & document \\ & win & military & percent & technology & love & terthike & stochastic & query \\ & season & amretain & market & station & wonderful & bad & gradient & search \\ & league & attack & price & network & amazing & bonville & function & user \\ \hline \multirow{6}{*}{ \begin{tabular}{l} TopicAdapt \\ \end{tabular} } & game & government & percent & company & excellent & waste & machine & retrieval \\ & team & war & 
company & technology & great & bad & problem & document \\ & win & military & bank & space & good & horville & algorithm & search \\ & season & president & year & site & superb & crop & convergence & query \\ & play & political & market & station & perfect & garbage & optimization & semantic \\ \hline \hline \end{tabular} \end{table} Table 2: Qualitative Evaluation \begin{table} \begin{tabular}{l l} \hline \hline Topic Name & Top 5 words \\ \hline gun violence & gun, law, president, firearm, crime \\ sales & price, sale, buy, sell, work \\ \hline \hline \end{tabular} \end{table} Table 3: Case study 1- No supervision for target corpus-specific topic. We infer the topic names by observing the top 5 words of each topic set of datasets from different domains demonstrates the superiority of the proposed model over multiple strong baselines. ## 5 Limitation The proposed model depends on two pretrained models: a pretrained word embedding for vocabulary words and a pretrained language model for textual entailment when generating document-level supervision. However, for a very specific target domain, the pretrained knowledge might not be appropriate. In such a case, finetuning those models on the target corpus is worth exploring for better performance. Moreover, similar to CTM Akash et al. (2022), in this paper, we assume that the reference and target corpora are from common or very similar domains. However, practically, it is very probable that we may need to transfer topic knowledge from one domain to another. Therefore, extending our model to cross-domain scenarios is also an interesting future direction.
2307.07666
Efficient Action Robust Reinforcement Learning with Probabilistic Policy Execution Uncertainty
Robust reinforcement learning (RL) aims to find a policy that optimizes the worst-case performance in the face of uncertainties. In this paper, we focus on action robust RL with the probabilistic policy execution uncertainty, in which, instead of always carrying out the action specified by the policy, the agent will take the action specified by the policy with probability $1-\rho$ and an alternative adversarial action with probability $\rho$. We establish the existence of an optimal policy on the action robust MDPs with probabilistic policy execution uncertainty and provide the action robust Bellman optimality equation for its solution. Furthermore, we develop Action Robust Reinforcement Learning with Certificates (ARRLC) algorithm that achieves minimax optimal regret and sample complexity. Furthermore, we conduct numerical experiments to validate our approach's robustness, demonstrating that ARRLC outperforms non-robust RL algorithms and converges faster than the robust TD algorithm in the presence of action perturbations.
Guanlin Liu, Zhihan Zhou, Han Liu, Lifeng Lai
2023-07-15T00:26:51Z
http://arxiv.org/abs/2307.07666v2
# Efficient Action Robust Reinforcement Learning with Probabilistic Policy Execution Uncertainty ###### Abstract Robust reinforcement learning (RL) aims to find a policy that optimizes the worst-case performance in the face of uncertainties. In this paper, we focus on action robust RL with the probabilistic policy execution uncertainty, in which, instead of always carrying out the action specified by the policy, the agent will take the action specified by the policy with probability \(1-\rho\) and an alternative adversarial action with probability \(\rho\). We establish the existence of an optimal policy on the action robust MDPs with probabilistic policy execution uncertainty and provide the action robust Bellman optimality equation for its solution. Furthermore, we develop the Action Robust Reinforcement Learning with Certificates (ARRLC) algorithm, which achieves minimax optimal regret and sample complexity. Finally, we conduct numerical experiments to validate our approach's robustness, demonstrating that ARRLC outperforms non-robust RL algorithms and converges faster than the robust TD algorithm in the presence of action perturbations. ## 1 Introduction Reinforcement learning (RL), a control-theoretic framework for making decisions over time in an unknown environment, has many applications in a variety of scenarios such as recommendation systems [1], autonomous driving [2], finance [3], and business management [4], to name a few. However, the solutions to standard RL methods are not inherently robust to uncertainties, perturbations, or structural changes in the environment, which are frequently observed in real-world settings. A trustworthy reinforcement learning algorithm should be competent in solving challenging real-world problems with robustness against perturbations and uncertainties. Robust RL aims to improve the worst-case performance of algorithms deterministically or statistically in the face of uncertainties.
The uncertainties could happen in different MDP components, including observations/states [5, 6], actions [7, 8], transitions [9, 10], and rewards [11, 12]. Robust RL against action uncertainties focuses on the discrepancy between the actions generated by the RL agent and the conducted actions [7]. Thus, action uncertainties can be called policy execution uncertainties. Taking robot control as an example, such policy execution uncertainty may come from actuator noise, limited power range, or actuator failures in the real world. Taking medication advice in healthcare as another example, such policy execution uncertainty may come from the patient's personal behaviors, such as drug refusal, forgotten medication, or overdose. Adversarial training [13, 14] has been recognized as one of the most effective approaches in traditional supervised learning tasks in training time defenses. A lot of robust RL methods adopt the adversarial training framework and thus assume an adversary conducting adversarial attacks to mimic the naturalistic uncertainties [15, 7, 8]. Training with an adversary can naturally be formulated as a zero-sum game between the adversary and the RL agent [7]. [7] proposed the probabilistic action robust MDP (PR-MDP), in which, with probability \(\rho\), an alternative adversarial action is taken. [7] introduced the Probabilistic Robust Policy Iteration (PR-PI) algorithm to train an adversary along with the agent. The PR-PI algorithm converges toward the optimal value but requires an MDP solver to compute the optimal adversarial policy when the agent policy is given and the optimal agent policy when the adversarial policy is given. Thus, it is not suited to settings with unknown reward functions or unknown transition probabilities. A similar idea to the PR-MDP was presented in [8], which extends temporal difference (TD) learning algorithms by a new robust operator and shows that the new algorithms converge to the optimal robust \(Q\)-function.
However, no theoretical guarantee on sample complexity or regret is given. In this paper, we aim to fill in the gaps of the existing work on policy execution uncertainties. In particular, we develop a minimax optimal, sample efficient algorithm for action robust RL with probabilistic policy execution uncertainty. Our major contributions are summarized as follows: * We model episodic RL with a probabilistic policy execution uncertain set. We provide the action robust Bellman equation and the action robust Bellman optimality equation. We show that there always exists an optimal robust policy which is deterministic and can be computed via backward induction on the action robust Bellman optimality equation. * We develop a new algorithm, Action Robust Reinforcement Learning with Certificates (ARRLC), for episodic action robust MDPs, and show that it achieves minimax order optimal regret and minimax order optimal sample complexity. * We develop a model-free algorithm for episodic action robust MDPs, and analyze its regret and sample complexity. * We conduct numerical experiments to validate the robustness of our approach. In our experiments, our robust algorithm achieves a much higher reward than the non-robust RL algorithm when tested with action perturbations, and our ARRLC algorithm converges much faster than the robust TD algorithm in [8]. ## 2 Related work We mostly focus on papers on sample complexity bounds for episodic RL and the two-player zero-sum Markov game, and on action robust RL, which are closely related to our model. We remark that there are also related settings, e.g., infinite-horizon discounted MDPs [16; 17], robust RL with other uncertainties [12; 5; 10], robust offline RL [18; 19], adversarial training with a generative RL model [20; 21], and adversarial attacks on RL [22; 23; 24]. These settings are beyond the scope of this paper, though our techniques may also be related to them.
**Action robust RL** [15] introduces robust adversarial reinforcement learning to address the generalization issues in reinforcement learning by training with a destabilizing adversary that applies disturbance forces to the system. [7] introduces two new criteria of robustness for reinforcement learning in the face of action uncertainty. One is the probabilistic action robust MDP (PR-MDP), in which, instead of the action specified by the policy, an alternative adversarial action is taken with probability \(\rho\). The other is the noisy action robust MDP (NR-MDP) criterion, in which a perturbation is added to the continuous action vector itself. They generalize their policy iteration approach to deep reinforcement learning (DRL) and provide extensive experiments. [8] extends TD learning algorithms by a new robust operator and shows that the new algorithms converge to the optimal robust \(Q\)-function. **Sample Complexity Bounds for Episodic RL** There is a rich literature on sample complexity guarantees for episodic tabular RL, for example [25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Most relevant to our paper is the work on policy certificates [31]. The algorithm outputs policy certificates that bound the sub-optimality and return of the policy in the next episode. They show that computing certificates can even improve the sample-efficiency of optimism-based exploration. **Sample Complexity Bounds for the Two-player Zero-sum Markov Game** Training with an adversary can naturally be formulated as a zero-sum game between the adversary and the RL agent. Some sample efficient algorithms for two-player zero-sum Markov games can be used to train the action robust RL agent. [35] design an algorithm named optimistic Nash value iteration (Nash-VI) for two-player zero-sum Markov games that is able to output an \(\epsilon\)-approximate Nash policy in \(\widetilde{\mathcal{O}}(SABH^{3}/\epsilon^{2})\) episodes of game playing.
[36] design a new class of fully decentralized algorithms, V-learning, which provably learns an \(\epsilon\)-approximate Nash equilibrium in \(\widetilde{\mathcal{O}}(SAH^{5}/\epsilon^{2})\) episodes of two-player zero-sum game playing. These two multi-agent RL algorithms can be used to solve for the action robust optimal policy but are not minimax optimal: they are a factor of \(A\) or \(H^{2}\) above the minimax lower bound. ## 3 Problem formulation **Tabular MDPs** We consider a tabular episodic MDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},H,P,R)\), where \(\mathcal{S}\) is the state space with \(|\mathcal{S}|=S\), \(\mathcal{A}\) is the action space with \(|\mathcal{A}|=A\), \(H\in\mathbb{Z}^{+}\) is the number of steps in each episode, \(P\) is the transition matrix so that \(P_{h}(\cdot|s,a)\) represents the probability distribution over states if action \(a\) is taken in state \(s\) at step \(h\in[H]\), and \(R_{h}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) represents the reward function at step \(h\). In this paper, the probability transition functions and the reward functions can be different at different steps. The agent interacts with the MDP in episodes indexed by \(k\). Each episode \(k\) is a trajectory \(\{s^{k}_{1},a^{k}_{1},r^{k}_{1},\cdots,s^{k}_{H},a^{k}_{H},r^{k}_{H}\}\) of \(H\) states \(s^{k}_{h}\in\mathcal{S}\), actions \(a^{k}_{h}\in\mathcal{A}\), and rewards \(r^{k}_{h}\in[0,1]\). At each step \(h\in[H]\) of episode \(k\), the agent observes the state \(s^{k}_{h}\) and chooses an action \(a^{k}_{h}\). After receiving the action, the environment generates a random reward \(r^{k}_{h}\in[0,1]\) drawn from a distribution with mean \(R_{h}(s^{k}_{h},a^{k}_{h})\) and a next state \(s^{k}_{h+1}\) which is drawn from the distribution \(P_{h}(\cdot|s^{k}_{h},a^{k}_{h})\). For notational simplicity, we assume that the initial state \(s^{k}_{1}=s_{1}\) is the same deterministic state in every episode \(k\).
A (stochastic) Markov policy of the agent is a set of \(H\) maps \(\pi:=\{\pi_{h}:\mathcal{S}\rightarrow\Delta_{\mathcal{A}}\}_{h\in[H]}\), where \(\Delta_{\mathcal{A}}\) denotes the simplex over \(\mathcal{A}\). We use the notation \(\pi_{h}(a|s)\) to denote the probability of taking action \(a\) in state \(s\) under stochastic policy \(\pi\) at step \(h\). A deterministic policy is a policy that maps each state to a particular action. Therefore, when it is clear from the context, we abuse the notation \(\pi_{h}(s)\) for a deterministic policy \(\pi\) to denote the action \(a\) which satisfies \(\pi_{h}(a|s)=1\). **Action Robust MDPs** In the action robust case, the policy execution is not accurate and lies in some uncertainty set centered on the agent's policy \(\pi\). Denote the actual behavior policy by \(\widetilde{\pi}\), where \(\widetilde{\pi}\in\Pi(\pi)\) and \(\Pi(\pi)\) is the uncertainty set of the policy execution. Denote the actual behavior action at episode \(k\) and step \(h\) by \(\widetilde{a}^{k}_{h}\), where \(\widetilde{a}^{k}_{h}\sim\widetilde{\pi}^{k}_{h}\). Define the action robust value function of a policy \(\pi\) as the worst-case expected accumulated reward when following any policy in the uncertainty set \(\Pi(\pi)\) centered on the fixed policy \(\pi\): \[V^{\pi}_{h}(s)=\min_{\widetilde{\pi}\in\Pi(\pi)}\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}R_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s,a_{h^{\prime}}\sim\widetilde{\pi}_{h^{\prime}}(\cdot|s_{h^{\prime}}),\;\forall h^{\prime}>h\right]. \tag{1}\] \(V^{\pi}_{h}\) represents the action robust value function of policy \(\pi\) at step \(h\). Similarly, define the action robust \(Q\)-function of a policy \(\pi\): \[Q^{\pi}_{h}(s,a)=\min_{\widetilde{\pi}\in\Pi(\pi)}\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}R_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s,a_{h}=a,a_{h^{\prime}}\sim\widetilde{\pi}_{h^{\prime}}(\cdot|s_{h^{\prime}}),\;\forall h^{\prime}>h\right].
\tag{2}\] The goal of action robust RL is to find the optimal robust policy \(\pi^{*}\) that maximizes the worst-case accumulated reward: \(\pi^{*}=\arg\max_{\pi}V^{\pi}_{1}(s),\forall s\in\mathcal{S}\). We also denote \(V^{\pi^{*}}\) and \(Q^{\pi^{*}}\) by \(V^{*}\) and \(Q^{*}\). **Probabilistic Policy Execution Uncertain Set** We follow the setting of the probabilistic action robust MDP (PR-MDP) introduced in [7] to construct the probabilistic policy execution uncertain set. For some \(0\leq\rho\leq 1\), the policy execution uncertain set is defined as: \[\Pi^{\rho}(\pi):=\{\widetilde{\pi}|\forall s,\forall h,\widetilde{\pi}_{h}(\cdot|s)=(1-\rho)\pi_{h}(\cdot|s)+\rho\pi^{\prime}_{h}(\cdot|s),\pi^{\prime}_{h}(\cdot|s)\in\Delta_{\mathcal{A}}\}=\otimes_{h,s}\Pi^{\rho}_{h,s}(\pi_{h}(\cdot|s)) \tag{3}\] such that \(\Pi^{\rho}_{h,s}(\pi_{h}(\cdot|s))=\{\widetilde{\pi}_{h}(\cdot|s)|\widetilde{\pi}_{h}(\cdot|s)=(1-\rho)\pi_{h}(\cdot|s)+\rho\pi^{\prime}_{h}(\cdot|s),\pi^{\prime}_{h}(\cdot|s)\in\Delta_{\mathcal{A}}\}\). In this setting, an optimal probabilistic robust policy is optimal w.r.t. a scenario in which, with probability at most \(\rho\), an adversary takes control and performs the worst possible action. We call \(\pi^{\prime}\) the adversarial policy. For different agent policies \(\pi\), the corresponding adversarial policy \(\pi^{\prime}\) that minimizes the cumulative reward may be different. The probabilistic uncertain set model is closely related to uncertainty set models defined via the total variation distance. The uncertainty set based on distance is defined as \(\Pi^{D,\rho}(\pi):=\otimes_{h,s}\Pi^{D,\rho}_{h,s}(\pi_{h}(\cdot|s))\) such that \(\Pi^{D,\rho}_{h,s}(\pi_{h}(\cdot|s))=\{\widetilde{\pi}_{h}(\cdot|s)\in\Delta_{\mathcal{A}}|D(\pi_{h}(\cdot|s),\widetilde{\pi}_{h}(\cdot|s))\leq\rho\}\), where \(D\) is some distance metric between two probability measures and \(\rho\) is the radius.
For any policy \(\widetilde{\pi}_{h}(\cdot|s)\in\Pi^{\rho}_{h,s}(\pi_{h}(\cdot|s))\), the total variation distance to the center satisfies \(D_{TV}(\pi_{h}(\cdot|s),\widetilde{\pi}_{h}(\cdot|s))=\frac{1}{2}\|\pi_{h}(\cdot|s)-\widetilde{\pi}_{h}(\cdot|s)\|_{1}\leq\rho\). **Additional Notations** We set \(\iota=\log(2SAHK/\delta)\) for \(\delta>0\). For simplicity of notation, we treat \(P\) as a linear operator such that \([P_{h}V](s,a):=\mathbb{E}_{s^{\prime}\sim P_{h}(\cdot|s,a)}V(s^{\prime})\), and we define two additional operators \(\mathbb{D}\) and \(\mathbb{V}\) as follows: \([\mathbb{D}_{\pi_{h}}Q](s):=\mathbb{E}_{a\sim\pi_{h}(\cdot|s)}Q(s,a)\) and \(\mathbb{V}_{P_{h}}V_{h+1}(s,a):=\sum_{s^{\prime}}P_{h}(s^{\prime}|s,a)\left(V_{h+1}(s^{\prime})-[P_{h}V_{h+1}](s,a)\right)^{2}=[P_{h}(V_{h+1})^{2}](s,a)-([P_{h}V_{h+1}](s,a))^{2}\). ## 4 Existence of the optimal robust policy For the standard tabular MDPs, when the state space, action space, and the horizon are all finite, there always exists an optimal policy. In addition, if the reward functions and the transition probabilities are known to the agent, the optimal policy can be found by solving the Bellman optimality equation. In the following theorem, we show that the optimal policy also always exists in action robust MDPs and can be solved via the action robust Bellman optimality equation.
**Theorem 1**: _If the uncertainty set of the policy execution has the form in (3), the following perfect duality holds for all \(s\in\mathcal{S}\) and all \(h\in[H]\):_ \[\max_{\pi}\min_{\widetilde{\pi}\in\Pi^{\rho}(\pi)}\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}R_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s,a_{h^{\prime}}\sim\widetilde{\pi}_{h^{\prime}}(\cdot|s_{h^{\prime}})\right] \tag{4}\] \[=\min_{\widetilde{\pi}\in\Pi^{\rho}(\pi)}\max_{\pi}\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}R_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s,a_{h^{\prime}}\sim\widetilde{\pi}_{h^{\prime}}(\cdot|s_{h^{\prime}})\right].\] _There always exists a deterministic optimal robust policy \(\pi^{*}\). The problem can be solved via backward induction on the action robust Bellman optimality equation for \(h=H,\cdots,1\). The action robust Bellman equation and the action robust Bellman optimality equation are:_ \[\begin{cases}V_{h}^{\pi}(s)=(1-\rho)[\mathbb{D}_{\pi_{h}}Q_{h}^{\pi}](s)+\rho\min_{a\in\mathcal{A}}Q_{h}^{\pi}(s,a)\\ Q_{h}^{\pi}(s,a)=R_{h}(s,a)+[P_{h}V_{h+1}^{\pi}](s,a)\\ V_{H+1}^{\pi}(s)=0,\ \forall s\in\mathcal{S}\end{cases} \tag{5}\] \[\begin{cases}V_{h}^{*}(s)=(1-\rho)\max_{a\in\mathcal{A}}Q_{h}^{*}(s,a)+\rho\min_{b\in\mathcal{A}}Q_{h}^{*}(s,b)\\ Q_{h}^{*}(s,a)=R_{h}(s,a)+[P_{h}V_{h+1}^{*}](s,a)\\ V_{H+1}^{*}(s)=0,\ \forall s\in\mathcal{S}\end{cases}. \tag{6}\] A similar result on the perfect duality was shown in [7]. They considered a PR-MDP as a two-player zero-sum Markov game, so solving for the optimal probabilistic robust policy can be equivalently viewed as solving for the equilibrium value of a two-player zero-sum Markov game. We define \(C_{h}^{\pi,\pi^{\prime},\rho}(s):=\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}R_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s,a_{h^{\prime}}\sim\widetilde{\pi}_{h^{\prime}}(\cdot|s_{h^{\prime}})\right]\).
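When the rewards \(R\) and transitions \(P\) are known, the backward induction in Theorem 1 can be carried out directly. The following toy implementation of the action robust Bellman optimality recursion (Eq. (6)) is an illustrative sketch only; the paper's Algorithm 1 instead learns \(R\) and \(P\) from samples, and the function and variable names here are ours.

```python
import numpy as np

def robust_value_iteration(R, P, rho):
    """Backward induction on the action robust Bellman optimality equation:
    Q_h(s,a) = R_h(s,a) + [P_h V_{h+1}](s,a)
    V_h(s)   = (1-rho) * max_a Q_h(s,a) + rho * min_a Q_h(s,a)
    R has shape (H, S, A); P has shape (H, S, A, S)."""
    H, S, A = R.shape
    V = np.zeros((H + 1, S))                   # V_{H+1} = 0
    pi = np.zeros((H, S), dtype=int)           # deterministic optimal robust policy
    for h in range(H - 1, -1, -1):
        Q = R[h] + P[h] @ V[h + 1]             # (S, A, S) @ (S,) -> (S, A)
        V[h] = (1 - rho) * Q.max(axis=1) + rho * Q.min(axis=1)
        pi[h] = Q.argmax(axis=1)
    return V, pi

# Toy 2-state, 2-action, horizon-2 MDP where every action keeps the state fixed.
H, S, A = 2, 2, 2
R = np.array([[[1.0, 0.0], [0.0, 1.0]]] * H)   # R[h][s, a]
P = np.zeros((H, S, A, S))
for h in range(H):
    for s in range(S):
        P[h, s, :, s] = 1.0                    # stay in the same state
V0, _ = robust_value_iteration(R, P, rho=0.0)  # standard value iteration
V5, _ = robust_value_iteration(R, P, rho=0.5)  # adversary acts half the time
```

With `rho=0` the recursion reduces to standard value iteration, and increasing `rho` can only lower the robust value, matching the interpretation of \(\rho\) as the adversary's takeover probability.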
The perfect duality of the control problems in (4) is equivalent to \(\max_{\pi}\min_{\pi^{\prime}}C_{h}^{\pi,\pi^{\prime},\rho}(s)=\min_{\pi^{\prime}}\max_{\pi}C_{h}^{\pi,\pi^{\prime},\rho}(s)\). We provide an alternate proof in Appendix A based on the robust Bellman equation. ## 5 Algorithm and main results In this section, we introduce the proposed **A**ction **R**obust **R**einforcement **L**earning with **C**ertificates (ARRLC) algorithm and provide its theoretical guarantee. The pseudocode is listed in Algorithm 1. Here, we highlight the main idea of our algorithm. Algorithm 1 trains the agent in a clean (simulation) environment and learns a policy that performs well when applied to a perturbed environment with probabilistic policy execution uncertainty. To simulate the action perturbation, Algorithm 1 chooses an adversarial action with probability \(\rho\). To learn the agent's optimal policy and the corresponding adversarial policy, Algorithm 1 computes an optimistic estimate \(\overline{Q}\) of \(Q^{*}\) and a pessimistic estimate \(\underline{Q}\) of \(Q^{\overline{\pi}^{k}}\). Algorithm 1 uses the optimistic estimates to explore the possible optimal policy \(\overline{\pi}\) and uses the pessimistic estimates to explore the possible adversarial policy \(\underline{\pi}\). As shown later in Lemma 2, \(\overline{V}\geq V^{*}\geq V^{\overline{\pi}}\geq\underline{V}\) holds with high probability. The optimistic and pessimistic estimates \(\overline{V}\) and \(\underline{V}\) provide policy certificates: with high probability, they bound the cumulative rewards of the returned policy \(\overline{\pi}^{k}\), and \(\overline{V}-\underline{V}\) bounds the sub-optimality of the returned policy \(\overline{\pi}^{k}\). The policy certificates give us some insight into the performance of \(\overline{\pi}^{k}\) in the perturbed environment with probabilistic policy execution uncertainty.
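The policy execution perturbation that the training loop simulates reduces to a Bernoulli mixture of the agent's and the adversary's actions. A minimal sketch (the function and argument names are ours, not the paper's pseudocode):

```python
import numpy as np

def behavior_action(agent_action, adversarial_action, rho, rng):
    """Simulate probabilistic policy execution uncertainty: carry out the
    agent's action with probability 1 - rho, and the adversarial action
    with probability rho."""
    return adversarial_action if rng.random() < rho else agent_action

rng = np.random.default_rng(0)
# Empirically, the adversarial action (here encoded as 1) is taken in
# roughly a rho-fraction of the draws.
draws = [behavior_action(0, 1, 0.25, rng) for _ in range(10000)]
```

Setting \(\rho=0\) recovers exact policy execution, while \(\rho=1\) hands full control to the adversary, matching the two extremes of the PR-MDP uncertain set.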
### Algorithm description We now describe the proposed ARRLC algorithm in more detail. In each episode, the ARRLC algorithm can be decomposed into two parts. * Lines 3-11 (Sample trajectory and update the model estimate): Simulates the action robust MDP, executes the behavior policy \(\widetilde{\pi}\), collects samples, and updates the estimates of the reward and the transition. * Lines 16-25 (Adversarial planning from the estimated model): Performs value iteration with bonus to estimate the robust value functions using the empirical estimate of the transition \(\hat{P}\), computes a new policy \(\overline{\pi}\) which is optimal with respect to the estimated robust value functions, and computes a new optimal adversarial policy \(\underline{\pi}\) with respect to the agent's policy \(\overline{\pi}\). At a high level, this two-phase strategy is standard in the majority of model-based RL algorithms [28; 31]. Algorithm 1 shares a similar structure with ORLC (Optimistic Reinforcement Learning with Certificates) in [31] but has some significant differences in lines 5-6 and lines 18-23. The first main difference is that the ARRLC algorithm simulates the probabilistic policy execution uncertainty by choosing an adversarial action with probability \(\rho\). The adversarial policy and the adversarial action are computed by the ARRLC algorithm. The second main difference is that the ARRLC algorithm simultaneously plans the agent policy \(\overline{\pi}\) and the adversarial policy \(\underline{\pi}\) via the action robust Bellman optimality equation. These two main differences bring two main challenges in the design and analysis of our algorithm. (1) The ARRLC algorithm simultaneously plans the agent policy and the adversarial policy. However, the planned adversarial policy \(\underline{\pi}\) is not necessarily the true optimal adversarial policy against the agent policy \(\overline{\pi}\) because of the estimation error of the value functions.
We carefully design the bonus terms and the update rule of the value functions so that \(\overline{V}_{h}(s)\geq V_{h}^{*}(s)\geq V_{h}^{\overline{\pi}}(s)\geq \underline{V}_{h}(s)\) and \(\overline{Q}_{h}(s,a)\geq Q_{h}^{*}(s,a)\geq Q_{h}^{\overline{\pi}}(s,a)\geq \underline{Q}_{h}(s,a)\) hold for all \(s\) and \(a\). (2) A crucial step in many UCB-type algorithms based on the Bernstein inequality is bounding the sum of variances of the estimated value function across the planning horizon. The behavior policies in these UCB-type algorithms are deterministic. However, the behavior policy in our ARRLC algorithm is not deterministic, due to the simulation of the adversary's behavior. The total variance is the weighted sum of the variances of the estimated value function across two trajectories: even if action \(\overline{\pi}(s_{h}^{k})\) or \(\underline{\pi}(s_{h}^{k})\) is not sampled at state \(s_{h}^{k}\), it counts toward the total variance.

### Theoretical guarantee

We define the cumulative regret of the output policies \(\overline{\pi}^{k}\) over the \(K\) episodes as \(Regret(K):=\sum_{k=1}^{K}(V_{1}^{*}(s_{1}^{k})-V_{1}^{\overline{\pi}^{k}}(s_{1 }^{k}))\).

**Theorem 2**: _For any \(\delta\in(0,1]\), letting \(\iota=\log(2SAHK/\delta)\), then with probability at least \(1-\delta\), Algorithm 1 achieves:_

* \(V_{1}^{*}(s_{1})-V_{1}^{\pi^{\text{out}}}(s_{1})\leq\epsilon\)_, if the number of episodes_ \(K\geq\Omega(SAH^{3}\iota^{2}/\epsilon^{2}+S^{2}AH^{3}\iota^{2}/\epsilon)\)_._
* \(Regret(K)=\sum_{k=1}^{K}(V_{1}^{*}(s_{1}^{k})-V_{1}^{\overline{\pi}^{k}}(s_{1 }^{k}))\leq\mathcal{O}(\sqrt{SAH^{3}K}\iota+S^{2}AH^{3}\iota^{2})\)_._

For small \(\epsilon\leq H/S\), the sample complexity scales as \(\mathcal{O}(SAH^{3}\iota^{2}/\epsilon^{2})\). For a large number of episodes \(K\geq S^{3}AH^{3}\iota\), the regret scales as \(\mathcal{O}(\sqrt{SAH^{3}K}\iota)\).
For standard MDPs, the information-theoretic sample complexity lower bound is \(\Omega(SAH^{3}/\epsilon^{2})\), provided in [33], and the regret lower bound is \(\Omega(\sqrt{SAH^{3}K})\), provided in [30]. When \(\rho=0\), action robust MDPs reduce to standard MDPs, so the information-theoretic sample complexity lower bound and the regret lower bound of action robust MDPs must have the same dependency on \(S\), \(A\), \(H\), \(K\), and \(\epsilon\). These lower bounds show the optimality of our algorithm up to logarithmic factors.

## 6 Proof sketch

In this section, we provide a sketch of the proof, highlighting its main ideas. First, we show that \(\overline{V}_{h}(s)\geq V_{h}^{*}(s)\geq V_{h}^{\overline{\pi}}(s)\geq\underline {V}_{h}(s)\) holds for all \(s\) and \(h\). Then, the regret can be bounded by \(\sum_{k}(\overline{V}_{1}^{k}-\underline{V}_{1}^{k})\), which is in turn decomposed into four terms, each of which can be bounded separately. The full proof can be found in the appendix contained in the supplementary material.

We first introduce some notation. We use \(\overline{Q}_{h}^{k}\), \(\overline{V}_{h}^{k}\), \(\underline{Q}_{h}^{k}\), \(\underline{V}_{h}^{k}\), \(N_{h}^{k}\), \(\hat{P}_{h}^{k}\), \(\hat{r}_{h}^{k}\) and \(\theta_{h}^{k}\) to denote the values of \(\overline{Q}_{h}\), \(\overline{V}_{h}\), \(\underline{Q}_{h}\), \(\underline{V}_{h}\), \(\max\{N_{h},1\}\), \(\hat{P}_{h}\), \(\hat{r}_{h}\) and \(\theta_{h}\) at the beginning of the \(k\)-th episode of Algorithm 1.

### Proof of monotonicity

We define \(\mathcal{E}^{R}\) to be the event where

\[\big{|}\hat{r}_{h}^{k}(s,a)-R_{h}(s,a)\big{|}\leq\sqrt{\frac{2\hat{r}_{h}^{k}( s,a)\iota}{N_{h}^{k}(s,a)}}+\frac{7\iota}{3N_{h}^{k}(s,a)} \tag{7}\]

holds for all \((s,a,h,k)\in S\times A\times[H]\times[K]\).
We also define \(\mathcal{E}^{PV}\) to be the event where

\[\left|(\hat{P}_{h}^{k}-P_{h})V_{h+1}^{*}(s,a)\right| \leq \sqrt{\frac{2\mathbb{V}_{\hat{P}_{h}^{k}}V_{h+1}^{*}(s,a)\iota}{N_{h}^{k}(s,a)}}+\frac{7H\iota}{3N_{h}^{k}(s,a)} \tag{8}\]
\[\left|(\hat{P}_{h}^{k}-P_{h})V_{h+1}^{\overline{\pi}^{k}}(s,a)\right| \leq \sqrt{\frac{2\mathbb{V}_{\hat{P}_{h}^{k}}V_{h+1}^{\overline{\pi}^{k}}(s,a)\iota}{N_{h}^{k}(s,a)}}+\frac{7H\iota}{3N_{h}^{k}(s,a)} \tag{9}\]

holds for all \((s,a,h,k)\in S\times A\times[H]\times[K]\). Event \(\mathcal{E}^{R}\) means that the estimates of all reward functions stay in a certain neighborhood of the true values. Event \(\mathcal{E}^{PV}\) means that the estimates of the value functions at the next step stay within certain intervals. The following lemma shows that \(\mathcal{E}^{R}\) and \(\mathcal{E}^{PV}\) hold with high probability. In the rest of this section, the analysis is carried out assuming the successful event \(\mathcal{E}^{R}\cap\mathcal{E}^{PV}\) holds.

**Lemma 1**: \(\mathbb{P}(\mathcal{E}^{R}\cap\mathcal{E}^{PV})\geq 1-3\delta\)_._

**Lemma 2**: _Conditioned on \(\mathcal{E}^{R}\cap\mathcal{E}^{PV}\), \(\overline{V}_{h}^{k}(s)\geq V_{h}^{*}(s)\geq V_{h}^{\overline{\pi}^{k}}(s) \geq\underline{V}_{h}^{k}(s)\) and \(\overline{Q}_{h}^{k}(s,a)\geq Q_{h}^{*}(s,a)\geq Q_{h}^{\overline{\pi}^{k}}(s,a)\geq\underline{Q}_{h}^{k}(s,a)\) for all \((s,a,h,k)\in S\times A\times[H]\times[K]\)._

### Regret analysis

We decompose the regret and analyze the different terms. Set \(\Theta_{h}^{k}(s,a)=\sqrt{\frac{8\mathbb{V}_{P_{h}}C_{h+1}^{\pi^{k*},\underline{\pi}^{k},\rho}(s,a)\iota}{N_{h}^{k}(s,a)}}+\sqrt{\frac{32\iota}{N_{h}^{k}(s,a)}}+\frac{46\sqrt{SH^{4}}\iota}{N_{h}^{k}(s,a)}\), where \(\pi^{k*}\) is the optimal policy against the adversary policy \(\underline{\pi}^{k}\), i.e., \(\pi_{h}^{k*}(s)=\arg\max_{\pi}C_{h}^{\pi,\underline{\pi}^{k},\rho}(s)\).
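The confidence widths in (7)-(9) share an empirical-Bernstein shape, \(\sqrt{2\,\mathrm{var}\,\iota/N}+7c\iota/(3N)\); a small helper capturing that shape (illustrative only; \(c=1\) for reward estimates, \(c=H\) for value estimates):

```python
import math

def bernstein_bonus(var, n, iota, scale):
    """Empirical-Bernstein-style confidence width of the form used in (7)-(9):
    sqrt(2 * var * iota / n) + 7 * scale * iota / (3 * n).
    `scale` is 1 for rewards and H for next-step value estimates."""
    n = max(n, 1)  # the paper's N_h^k denotes max{N_h, 1}, avoiding n = 0
    return math.sqrt(2.0 * var * iota / n) + 7.0 * scale * iota / (3.0 * n)
```

The first term shrinks like \(1/\sqrt{n}\) and dominates for large \(n\); the second, lower-order term shrinks like \(1/n\).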
Let \(M_{1}=\sum_{k=1}^{K}\sum_{h=1}^{H}[\mathbb{D}_{\widetilde{\pi}_{h}^{k}}\hat{P}_{h}^{k}( \overline{V}_{h+1}^{k}-\underline{V}_{h+1}^{k})(s_{h}^{k})-\hat{P}_{h}^{k}( \overline{V}_{h+1}^{k}-\underline{V}_{h+1}^{k})(s_{h}^{k},a_{h}^{k})]\), \(M_{2}=\sum_{k=1}^{K}\sum_{h=1}^{H}\frac{1}{H}[\mathbb{D}_{\widetilde{\pi}_{h}^{k}}P_{h}( \overline{V}_{h+1}^{k}-\underline{V}_{h+1}^{k})(s_{h}^{k})-P_{h}(\overline{V }_{h+1}^{k}-\underline{V}_{h+1}^{k})(s_{h}^{k},a_{h}^{k})]\), \(M_{3}=\sum_{k=1}^{K}\sum_{h=1}^{H}(P_{h}(\overline{V}_{h+1}^{k}- \underline{V}_{h+1}^{k})(s_{h}^{k},a_{h}^{k})-(\overline{V}_{h+1}^{k}- \underline{V}_{h+1}^{k})(s_{h+1}^{k}))\) and \(M_{4}=\sum_{k=1}^{K}\sum_{h=1}^{H}[\frac{SH+SH^{2}}{N_{h}^{k}(s_{h}^{k},a_{h}^ {k})}+\mathbb{D}_{\widetilde{\pi}_{h}^{k}}\Theta_{h}^{k}(s_{h}^{k})]\). Here \(M_{1}\) and \(M_{2}\) are the cumulative sample errors from the random choice between the adversarial policy and the agent's policy. \(M_{3}\) is the cumulative sample error from the randomness of the Monte Carlo sampling of the next state. \(M_{4}\) is the cumulative error from the bonus term \(\theta\). Lemma 3 shows that the regret can be bounded by these four terms.

**Lemma 3**: _With probability at least \(1-(S+5)\delta\),_

\[Regret(K)\leq\sum_{k=1}^{K}(\overline{V}_{1}^{k}(s_{1}^{k})-\underline{V}_{1} ^{k}(s_{1}^{k}))\leq 21(M_{1}+M_{2}+M_{3}+M_{4}). \tag{10}\]

We now bound each of these four terms separately.
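For intuition on Lemmas 4-6, the terms \(M_{1}\), \(M_{2}\), and \(M_{3}\) are sums of bounded martingale differences, so Azuma-Hoeffding yields bounds of order \(H\sqrt{HK\iota}\). A quick numerical illustration with i.i.d. zero-mean increments (a special case of a martingale difference sequence; this is not part of the proof):

```python
import math
import random

def martingale_sum(K, H, bound, seed=0):
    """Sum of K*H independent zero-mean increments, each bounded by `bound`
    in absolute value (a simple stand-in for the differences in M1-M3)."""
    rng = random.Random(seed)
    return sum(rng.uniform(-bound, bound) for _ in range(K * H))

K, H = 500, 10
iota = math.log(2 * K * H)          # iota ~ log(2KH/delta) with delta fixed
m = martingale_sum(K, H, bound=H)
# Azuma-Hoeffding: |sum| <= H * sqrt(2 * K * H * iota) with probability at
# least 1 - 2 * exp(-iota), matching the H * sqrt(2HK*iota) form of Lemma 4.
azuma = H * math.sqrt(2 * K * H * iota)
```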
**Lemma 4**: _With probability at least \(1-\delta\), \(|M_{1}|\leq H\sqrt{2HK\iota}\)._

**Lemma 5**: _With probability at least \(1-\delta\), \(|M_{2}|\leq\sqrt{2HK\iota}\)._

**Lemma 6**: _With probability at least \(1-\delta\), \(|M_{3}|\leq H\sqrt{2HK\iota}\)._

**Lemma 7**: _With probability at least \(1-2\delta\), \(|M_{4}|\leq 2S^{2}AH^{3}\iota^{2}+8\sqrt{SAH^{2}K\iota}+46S^{\frac{3}{2}}AH^{3} \iota^{2}+\sqrt{24SAH^{3}K\iota}+6\sqrt{SAH^{5}}\iota\)._

**Putting all together.** By Lemmas 3, 4, 5, 6, and 7, we conclude that, with probability \(1-(S+10)\delta\),

\[Regret(K)\leq O(\sqrt{H^{3}K\iota}+\sqrt{SAH^{2}K\iota}+\sqrt{SAH^{3}K\iota}+S^{2 }AH^{3}\iota^{2}+\sqrt{SAH^{5}}\iota)=O(\sqrt{SAH^{3}K\iota}+S^{2}AH^{3}\iota^{2}). \tag{11}\]

By rescaling \(\delta\), \(\log(\frac{2SAHK}{\delta/(S+10)})\leq c\iota\) for some constant \(c\), and we finish the proof of the regret bound. As \(\sum_{k=1}^{K}(\overline{V}_{1}^{k}(s_{1}^{k})-\underline{V}_{1}^{k}(s_{1}^{k }))\leq O(\sqrt{SAH^{3}K\iota}+S^{2}AH^{3}\iota^{2})\), we have \(V_{1}^{*}(s_{1})-V_{1}^{\pi^{out}}(s_{1})\leq\min_{k}(\overline{V}_{1}^{k}(s_{1 }^{k})-\underline{V}_{1}^{k}(s_{1}^{k}))\leq O(\sqrt{SAH^{3}\iota/K}+S^{2}AH^{3}\iota^{2}/K)\), which finishes the proof of the sample complexity bound.

## 7 Model-free method

In this section, we develop a model-free algorithm and analyze its theoretical guarantee. We present the proposed Action Robust Q-learning with UCB-Hoeffding (AR-UCBH) algorithm, shown in Algorithm 2. Here, we highlight the main idea of Algorithm 2. Algorithm 2 follows the same idea as Algorithm 1: it trains the agent in a clean (simulation) environment and learns a policy that performs well when applied to a perturbed environment with probabilistic policy execution uncertainty. To simulate the action perturbation, Algorithm 2 chooses an adversarial action with probability \(\rho\).
To learn the agent's optimal policy and the corresponding adversarial policy, Algorithm 2 computes an optimistic estimate \(\overline{Q}\) of \(Q^{*}\) and a pessimistic estimate \(\underline{Q}\) of \(Q^{\overline{\pi}^{k}}\). Algorithm 2 uses the optimistic estimates to explore the possible optimal policy \(\overline{\pi}\) and the pessimistic estimates to explore the possible adversarial policy \(\underline{\pi}\). The difference is that Algorithm 2 uses a model-free method to update the \(Q\) and \(V\) values. Here, we highlight the challenges of model-free planning compared with model-based planning. In model-based planning, we perform value iteration, and the \(Q\) values, \(V\) values, agent policy \(\overline{\pi}\), and adversarial policy \(\underline{\pi}\) are updated at all \((s,a)\). In the model-free method, however, the \(Q\) and \(V\) values are updated only at the pairs \((s_{h}^{k},a_{h}^{k})\) sampled on the trajectories. Compared with model-based planning, model-free planning is slower and less stable, so we need to update the output policy carefully. In lines 14-16, Algorithm 2 does not update the output policy when the lower bound on the value function of the new policy does not improve. In this way, the output policies are stably updated. We provide the regret and sample complexity bounds of Algorithm 2 in the following:

**Theorem 3**: _For any \(\delta\in(0,1]\), letting \(\iota=\log(2SABHK/\delta)\), then with probability at least \(1-\delta\), Algorithm 2 achieves:_

* \(V_{1}^{*}(s_{1})-V_{1}^{\pi^{\text{out}}}(s_{1})\leq\epsilon\)_, if the number of episodes_ \(K\geq\Omega(SAH^{5}\iota/\epsilon^{2}+SAH^{2}/\epsilon)\)_._
* \(Regret(K)=\sum_{k=1}^{K}(V_{1}^{*}(s_{1}^{k})-V_{1}^{\overline{\pi}^{k}}(s_{1}^{k}) )\leq\mathcal{O}(\sqrt{SAH^{5}K\iota}+SAH^{2})\)_._

The detailed proof is provided in Appendix C.

```
1: Set \(\alpha_{t}=\frac{H+1}{H+t}\).
Initialize \(\overline{V}_{h}(s)=H-h+1\), \(\overline{Q}_{h}(s,a)=H-h+1\), \(\underline{V}_{h}(s)=0\), \(\underline{Q}_{h}(s,a)=0\), \(\hat{r}_{h}(s,a)=0\), \(N_{h}(s,a)=0\) for all states \(s\in\mathcal{S}\), all actions \(a\in\mathcal{A}\), and all steps \(h\in[H]\). \(\overline{V}_{H+1}(s)=\underline{V}_{H+1}(s)=0\) and \(\overline{Q}_{H+1}(s,a)=\underline{Q}_{H+1}(s,a)=0\) for all \(s\) and \(a\). \(\Delta=H\). Initialize policies \(\overline{\pi}_{h}^{1}(a|s)=\underline{\pi}_{h}^{1}(a|s)=1/A\) for all states \(s\), actions \(a\), and steps \(h\in[H]\). 2:for episode \(k=1,2,\ldots,K\)do 3:for step \(h=1,2,\ldots,H\)do 4: Observe \(s_{h}^{k}\). 5: Set \(\overline{a}_{h}^{k}=\arg\max_{a}\overline{Q}_{h}(s_{h}^{k},a)\), \(\underline{a}_{h}^{k}=\arg\min_{a}\underline{Q}_{h}(s_{h}^{k},a)\), \(\widetilde{\pi}_{h}^{k}(\overline{a}_{h}^{k}|s_{h}^{k})=1-\rho\) and \(\widetilde{\pi}_{h}^{k}(\underline{a}_{h}^{k}|s_{h}^{k})=\rho\). 6: Take action \(a_{h}^{k}\sim\widetilde{\pi}_{h}^{k}(\cdot|s_{h}^{k})\). 7: Receive reward \(r_{h}^{k}\) and observe \(s_{h+1}^{k}\). 8: Set \(t=N_{h}(s_{h}^{k},a_{h}^{k})\gets N_{h}(s_{h}^{k},a_{h}^{k})+1\); \(b_{t}= \sqrt{H^{3}\iota/t}\). 9:\(\overline{Q}_{h}(s_{h}^{k},a_{h}^{k})\leftarrow(1-\alpha_{t})\overline{Q}_{h} (s_{h}^{k},a_{h}^{k})+\alpha_{t}(r_{h}^{k}+\overline{V}_{h+1}(s_{h+1}^{k})+b_{t})\), 10:\(\underline{Q}_{h}(s_{h}^{k},a_{h}^{k})\leftarrow(1-\alpha_{t})\underline{Q}_{h} (s_{h}^{k},a_{h}^{k})+\alpha_{t}(r_{h}^{k}+\underline{V}_{h+1}(s_{h+1}^{k})-b_{ t})\). 11: Set \(\overline{\pi}_{h}^{k+1}(s_{h}^{k})=\arg\max_{a}\overline{Q}_{h}(s_{h}^{k},a)\), \(\underline{\pi}_{h}^{k+1}(s_{h}^{k})=\arg\min_{a}\underline{Q}_{h}(s_{h}^{k},a)\). 12:\(\overline{V}_{h}(s_{h}^{k})\leftarrow\min\{\overline{V}_{h}(s_{h}^{k}),(1-\rho )\overline{Q}_{h}(s_{h}^{k},\overline{\pi}_{h}^{k+1}(s_{h}^{k}))+\rho\overline{ Q}_{h}(s_{h}^{k},\underline{\pi}_{h}^{k+1}(s_{h}^{k}))\}\).
13:\(\underline{V}_{h}(s_{h}^{k})\leftarrow\max\{\underline{V}_{h}(s_{h}^{k}),(1- \rho)\underline{Q}_{h}(s_{h}^{k},\overline{\pi}_{h}^{k+1}(s_{h}^{k}))+\rho \underline{Q}_{h}(s_{h}^{k},\underline{\pi}_{h}^{k+1}(s_{h}^{k}))\}\). 14:if\(\underline{V}_{h}(s_{h}^{k})>(1-\rho)\underline{Q}_{h}(s_{h}^{k},\overline{\pi}_{h}^{k+1} (s_{h}^{k}))+\rho\underline{Q}_{h}(s_{h}^{k},\underline{\pi}_{h}^{k+1}(s_{h}^{k }))\)then 15:\(\overline{\pi}_{h}^{k+1}=\overline{\pi}_{h}^{k}\). 16:endif 17:endfor 18: Output policy \(\overline{\pi}^{k+1}\) with certificates \(\mathcal{I}_{k+1}=[\underline{V}_{1}(s_{1}^{k}),\overline{V}_{1}(s_{1}^{k})]\) and \(\epsilon_{k+1}=|\mathcal{I}_{k+1}|\). 19:endfor 20:return \(\overline{\pi}^{k+1}\)
```

**Algorithm 2** Action Robust Q-learning with UCB-Hoeffding (AR-UCBH)

## 8 Simulation results

We use the OpenAI gym framework [37] and consider two different problems: Cliff Walking, a toy text environment, and Inverted Pendulum, a control environment with the MuJoCo [38] physics simulator. We set \(H=100\). To demonstrate robustness, the policy is learned in a clean environment and is then tested on the perturbed environment. Specifically, during testing, we set a probability \(p\) such that after the agent takes an action, with probability \(p\), the action is either uniformly randomly chosen or chosen by a fixed adversarial policy. A Monte-Carlo method is used to evaluate the accumulated reward of the learned policy on the perturbed environment; we take the average over 100 trajectories. Training ARRLC on CliffWalking-v0 and InvertedPendulum-v4 takes roughly 5 and 22 seconds per 100 episodes, respectively, on an i9-9880H CPU core. In Figure 3 and Figure 4, "fix" means that the action is perturbed by a fixed adversarial policy during testing, "random" means that the action is randomly perturbed during testing, and \(p\) is the action perturbation probability.
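The testing protocol just described (executing the learned policy but, with probability \(p\), replacing the action with a uniformly random or a fixed adversarial one, then Monte-Carlo averaging the return) can be sketched as follows; the environment interface names are our own assumptions:

```python
import random

def perturbed_rollout(env_step, env_reset, policy, adversary, actions, H, p,
                      seed=None):
    """One episode of horizon H under probabilistic action perturbation.
    With probability p the executed action is replaced: uniformly at random
    over `actions` if adversary is None, else by the fixed `adversary`.
    Assumed interfaces: env_reset() -> s and env_step(s, a) -> (s', r)."""
    rng = random.Random(seed)
    s, total = env_reset(), 0.0
    for h in range(H):
        a = policy(h, s)
        if rng.random() < p:
            a = rng.choice(actions) if adversary is None else adversary(h, s)
        s, r = env_step(s, a)
        total += r
    return total

def monte_carlo_eval(env_step, env_reset, policy, adversary, actions, H, p,
                     n_traj=100):
    """Average return over n_traj independent perturbed episodes."""
    return sum(perturbed_rollout(env_step, env_reset, policy, adversary,
                                 actions, H, p, seed=i)
               for i in range(n_traj)) / n_traj
```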
**Cliff walking.** The cliff walking experiment, shown in Figure 2, is a classic scenario proposed in [39]. The game starts with the player at location \([3,0]\) of the \(4\times 12\) grid world, with the goal located at \([3,11]\). A cliff runs along \([3,1-10]\). If the player moves to a cliff location, it returns to the start location and receives a reward of \(-100\). For every move that does not lead into the cliff, the agent receives a reward of \(-1\). The player makes moves until they reach the goal. The fixed adversarial policy in the cliff walking environment moves the agent one step toward the bottom.

We compare our algorithm with the non-robust RL algorithm ORLC (Optimistic Reinforcement Learning with Certificates) in [31]. We set \(\rho=0.2\) for our algorithm, which is the uncertainty parameter used during training. In Figure 3, we plot the accumulated reward of both algorithms under different \(p\) and perturbations. It can be seen that, overall, our ARRLC algorithm achieves a much higher reward than the ORLC algorithm. This demonstrates the robustness of our ARRLC algorithm to policy execution uncertainty.

We also compare our algorithm with the robust TD algorithm in [8], which has no theoretical guarantee on sample complexity or regret. We set \(\rho=0.2\). In Figure 4, we plot the accumulated reward of both algorithms under different \(p\) and perturbations, using a base-10 logarithmic scale on the x-axis and a linear scale on the y-axis. It can be seen that our ARRLC algorithm converges faster than the robust TD algorithm. This demonstrates the efficiency of our ARRLC algorithm in learning the optimal policy under policy execution uncertainty.

## 9 Conclusion

In this paper, we have developed a novel approach for solving action robust RL problems with probabilistic policy execution uncertainty. We have theoretically proved the sample complexity bound and the regret bound of the algorithms.

Figure 1: Inverted pendulum environment.
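The cliff walking dynamics described above can be encoded compactly; the following is a sketch of the rules exactly as stated (the gym CliffWalking-v0 implementation differs in encoding details):

```python
START, GOAL = (3, 0), (3, 11)                    # corners of the bottom row
CLIFF = {(3, c) for c in range(1, 11)}           # the cliff along [3, 1-10]
MOVES = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}  # up/right/down/left

def step(state, action):
    """One cliff-walking transition: returns (next_state, reward, done)."""
    r, c = state
    dr, dc = MOVES[action]
    nr = min(max(r + dr, 0), 3)                  # clip to the 4 x 12 grid
    nc = min(max(c + dc, 0), 11)
    if (nr, nc) in CLIFF:                        # fell off: reset, reward -100
        return START, -100.0, False
    return (nr, nc), -1.0, (nr, nc) == GOAL      # ordinary move costs -1
```

The fixed adversarial policy described in the text corresponds to always returning action 2 (down).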
Figure 2: Cliff walking environment.

The upper bounds on the sample complexity and the regret of the proposed ARRLC algorithm match the lower bounds up to logarithmic factors, which shows the minimax optimality of our algorithm. Moreover, we have carried out numerical experiments to validate our algorithm's robustness and efficiency, revealing that ARRLC surpasses non-robust algorithms and converges more rapidly than the robust TD algorithm when faced with action perturbations.
2304.10892
Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems
The use of machine learning (ML) inference for various applications is growing drastically. ML inference services engage with users directly, requiring fast and accurate responses. Moreover, these services face dynamic workloads of requests, imposing changes in their computing resources. Failing to right-size computing resources results in either latency service level objectives (SLOs) violations or wasted computing resources. Adapting to dynamic workloads considering all the pillars of accuracy, latency, and resource cost is challenging. In response to these challenges, we propose InfAdapter, which proactively selects a set of ML model variants with their resource allocations to meet latency SLO while maximizing an objective function composed of accuracy and cost. InfAdapter decreases SLO violation and costs up to 65% and 33%, respectively, compared to a popular industry autoscaler (Kubernetes Vertical Pod Autoscaler).
Mehran Salmani, Saeid Ghafouri, Alireza Sanaee, Kamran Razavi, Max Mühlhäuser, Joseph Doyle, Pooyan Jamshidi, Mohsen Sharifi
2023-04-21T11:19:49Z
http://arxiv.org/abs/2304.10892v2
# Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems ###### Abstract. The use of machine learning (ML) inference for various applications is growing drastically. ML inference services engage with users directly, requiring fast and accurate responses. Moreover, these services face dynamic workloads of requests, imposing changes in their computing resources. Failing to right-size computing resources results in either latency service level objectives (SLOs) violations or wasted computing resources. Adapting to dynamic workloads considering all the pillars of accuracy, latency, and resource cost is challenging. In response to these challenges, we propose InfAdapter, which proactively selects a set of ML model variants with their resource allocations to meet latency SLO while maximizing an objective function composed of accuracy and cost. InfAdapter decreases SLO violation and costs up to 65% and 33%, respectively, compared to a popular industry autoscaler (Kubernetes Vertical Pod Autoscaler).

+ Footnote †: EuroMLSys '23, _May 8, 2023, Rome, Italy._ © 2023 Association for Computing Machinery.
Auto-scaling and model-switching, the state-of-the-art adaptation mechanisms, fail to consider accuracy and cost-efficiency simultaneously. Auto-scaling may sacrifice accuracy if it works with a low-accuracy model variant, or incur high resource costs if used with a high-accuracy model variant. Conversely, model-switching is subject to under-provisioning when even the least accurate model variant cannot sustain the workload; it also fails to be cost-efficient when the capacity of the most accurate model variant exceeds the workload on the service. The ability to jointly resize and switch ML model variants provides new opportunities.
For instance, our experiments demonstrate that a Resnet50 model variant with an 8 CPU core allocation can sustain almost the same load that a Resnet152 variant does with 20 CPU cores; moreover, a Resnet18 with 8 CPU cores can process the same load as a Resnet50 with 20 cores, while meeting the P99-latency SLO (750 ms). Using a set of model variants instead of a single variant provides more granular accuracy/cost trade-offs.

Inference systems should be adaptive in response to dynamic workloads and be able to consider all the _contrasting_ objectives, including responsiveness (latency), accuracy, and cost-efficiency (allocated CPU cores), when dedicating resources to services and models. Reconciling these three measures is challenging, as achieving one causes a violation or sacrifice of the others, and finding a trade-off among the three is daunting. In response to these challenges, we design InfAdapter and empirically show that it can address the limitations of existing solutions. It predicts the service workload to mitigate provisioning overhead and, using the predicted load, selects a set of ML model variants and their sizes (CPU cores) as the service backends to meet the latency SLO and to maximize an objective function composed of average accuracy, resource cost, and loading cost (Section 3). To process the incoming requests, we implemented a dispatcher that load balances user requests to the backend variants according to their capacity (Section 4). Our experiments demonstrate that InfAdapter reduces the average accuracy loss, for a latency SLO of 750 ms at the 99th percentile, by up to 4% under the same load compared to existing solutions (Section 5). We have prototyped InfAdapter1 in a Kubernetes cluster and used the TensorFlow Serving (2017) model server to serve our ML models.
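Such a dispatcher can be sketched with smooth weighted round-robin, assigning requests to backend variants in proportion to their workload quotas (a minimal illustration of the idea; the actual InfAdapter dispatcher may differ):

```python
def make_dispatcher(quotas):
    """quotas: {variant: lambda_m}. Returns a zero-argument function that
    assigns each incoming request to a variant, deterministically interleaving
    variants in proportion to their quotas (smooth weighted round-robin)."""
    total = sum(quotas.values())
    credits = {m: 0.0 for m in quotas}

    def dispatch():
        for m in quotas:                 # accumulate proportional credit
            credits[m] += quotas[m] / total
        chosen = max(credits, key=credits.get)
        credits[chosen] -= 1.0           # pay one request's worth of credit
        return chosen

    return dispatch
```

For quotas of 2:1, for example, the dispatcher emits the pattern a, b, a repeatedly, so over any long window each variant receives its proportional share.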
We experimentally evaluate InfAdapter using a real workload trace, Twitter-trace (Krishnan et al., 2017) (Section 5), and compare it against existing solutions (_e.g.,_ the vertical Pod autoscaler (VPA) (Chen et al., 2017) and Model-Switching (Zhu et al., 2017)). Our experiments illustrate that InfAdapter reduces SLO violations by up to 65% compared to existing solutions. We have further open-sourced our implementation for community engagement and reproducibility of our experiments. Table 1 summarizes the differences between InfAdapter and other existing approaches.

Footnote 1: [https://github.com/reconfigurable-ml-pipeline/InfAdapter](https://github.com/reconfigurable-ml-pipeline/InfAdapter)

## 2. Motivation

Due to their interaction with online users, inference services are latency-sensitive (Krishnan et al., 2017; Dwork et al., 2018), and since they contain heavy computations, they are resource-intensive (Krishnan et al., 2017). Accuracy is also a pillar dimension of these services (Krishnan et al., 2017). Faced with a dynamic workload (Krishnan et al., 2017; Dwork et al., 2018), it is essential to dynamically consider the ternary trade-off space between latency, accuracy, and resource cost, addressing latency requirements while gaining higher accuracy cost-efficiently. The need for accurate predictions, on one hand, hinders us from selecting a computationally light model variant with low accuracy; on the other hand, selecting the most accurate model requires a very high resource cost to fulfill latency SLOs, and the required resources may not even be available. We conducted experiments with Resnet model variants under different CPU core assignments and captured their sustained throughput (the number of requests they could handle given a 750ms P99-latency SLO). Figure 1 shows our experiment's results for the Resnet18, Resnet50, and Resnet152 variants used for image classification under 8, 14, and 20 CPU core assignments.
In the figure, a Resnet18 with 8 CPU cores and a Resnet50 with 20 CPU cores can almost sustain the same throughput under the latency SLO; a similar argument applies to Resnet50 with 8 cores and Resnet152 with 20 cores. Because the latency-accuracy trade-off space changes with the workload, picking the right model variant with the right resource allocation is a non-trivial task.

Figure 1. Throughput of three Resnet variants under 8, 14, and 20 CPU cores. We ensured the latency of all configurations is lower than 750 ms at the P99-latency at the saturation load.

**Observation 1**. ML model variants provide the opportunity to reduce resource costs while adapting to the dynamic workload.

Using the traces collected from the previous experiment, we used two approaches to select backend model variant(s) to sustain a 75RPS load under a 750ms P99-latency SLO, using different CPU budgets (8, 14, and 20 CPU cores). Approach 1 opportunistically selects both a set of model variants and their sizes (InfAdapter). Approach 2 selects only one model variant and its size (MS). For example, under a 14 CPU core budget, InfAdapter selects Resnet50, Resnet101, and Resnet152 with 2, 6, and 6 CPU cores, respectively; however, the most accurate model that MS can pick under the 14-core budget for 75RPS is Resnet50. We compared the two approaches' accuracy loss (i.e., the accuracy we obtained subtracted from the accuracy of our most accurate variant, Resnet152).
Given dynamic workloads, we propose an adaptive mechanism for ML inference services to achieve latency SLO-aware, highly accurate, and cost-efficient inference systems. InfAdapter selects a subset of model variants to meet latency SLOs and maximizes an objective function of accuracy and cost. InfAdapter reconciles three important yet contradictory objectives (_accuracy, cost, and latency_). ## 3. Problem Formulation We formally describe the accuracy-cost problem using ML model variants while guaranteeing the latency SLO. We denote \(M\) as the set of model variants for a specific task, with the latency SLO, \(L\), the given accuracy of model \(m\in M\), \(acc_{m}\), and the model readiness time (loading the model into memory and the model initialization), \(rt_{m}\). We profile our set of model variants under different CPU assignments to capture the number of requests they can process concerning latency SLO \(L\). Furthermore, by using the profiled data, we train a linear regression model to estimate the processing latency and throughput of model variant \(m\in M\) under any CPU cores \(n_{m}\leq B\), \(p_{m}(n_{m}),th_{m}(n_{m})\), where \(B\) is the total CPU budget and the total resource cost as \(RC=\sum_{m\in M}n_{m}\). To maintain system stability during a dynamic workload, the aggregated throughput of all available models for a given task must stay above an expected (predicted) request rate \(\lambda\). Mathematically, this can be expressed as \(\sum_{m\in M,n\leq B}th_{m}(n_{m})\geq\lambda\). Moreover, we define the weighted average accuracy, based on the quota of the workload on variant \(m\), \(\lambda_{m}\), as \(AA=\sum_{m\in M}\frac{\lambda_{m}}{\lambda}\cdot acc_{m}\). Furthermore, we define the model loading cost as \(LC=\max\{tc(m)*rt_{m},\ m\in M\}\) where the transition cost, \(tc(m)\), is equal to 1 if the model variant \(m\) needs to be loaded, and 0 otherwise. Table 2 summarizes the notations we use in the paper. 
We define a multi-objective optimization problem to decide which subset of model variants to use such that under a given workload, the end-to-end latency is guaranteed. The goal is to maximize the weighted average accuracy \(AA\) and to minimize the total resource \(RC\) and loading \(LC\) costs. The problem can be formulated with the following integer linear programming (ILP): \[\begin{split}\max&\alpha\cdot AA-(\beta\cdot RC+ \gamma\cdot LC)\\ \text{subject to}&\lambda\leq\sum_{m\in M}th_{m}(n_{m}), \\ &\lambda_{m}\leq th_{m}(n_{m})\\ & p_{m}(n_{m})\leq L,\forall m\in M,\\ & RC\leq B,\\ & n_{m}\in\overline{\mathbb{W}},\forall m\in M.\end{split} \tag{1}\] \begin{table} \begin{tabular}{l l} \hline \hline **Symbol** & **Description** \\ \hline \(M\) & Set of all model variants for a given task \\ \(L\) & Latency SLO \\ \(m\) & An ML model variant from set \(M\) \\ \(acc_{m}\) & Accuracy of variant \(m\) \\ \(rt_{m}\) & Readiness time of variant \(m\) \\ \(tc_{m}\) & Transition cost of variant \(m\) \\ \(n_{m}\) & Number of CPU cores for variant \(m\) \\ \(p_{m}(n_{m})\) & Processing latency of variant \(m\) with \(n_{m}\) CPU cores \\ \(th_{m}(n_{m})\) & Throughput of variant \(m\) with \(n_{m}\) CPU cores \\ \(AA\) & Average Accuracy \\ \(RC\) & Resource cost \\ \(LC\) & Loading cost \\ \(B\) & CPU budget \\ \(\lambda\) & Workload on the system \\ \(\lambda_{m}\) & Workload quota on variant \(m\) \\ \hline \hline \end{tabular} \end{table} Table 2. Notations Figure 2. Comparison of InfAdapter (the ability to use a set of alternative models in the back-end) and Model-Switching+ on accuracy loss (accuracy of the most accurate model, Resnet152, subtracted by the accuracy of each bar). Each bar sustains the SLO of 750ms at P99-latency for a 75RPS load. In the objective function, we introduce \(\alpha,\beta,\gamma\) to normalize the resource and loading costs and give importance to the objectives based on user preference. 
The first two constraints ensure the system's stability, i.e., that there are enough resources to support the incoming workload. The third constraint satisfies the latency SLO, while the last two constraints bound the CPU cores per model to be non-negative and within the available resources in the system. We use the Gurobi optimizer (Gurobi, 2017) to solve the ILP in the above equation.

## 4. System Design

An overview of the InfAdapter architecture is shown in Figure 3. The system consists of three major components: monitoring, adapter, and dispatcher. _Monitoring_ keeps statistics about the distribution of request arrivals. _Adapter_ is responsible for first predicting the next time interval's workload based on the workload history gathered from the monitoring component, and then finding a set of model variants, their CPU cores, and their workload quotas by solving the ILP in Equation 1. _Dispatcher_ distributes the requests across the set of model variants based on the models' workload quotas provided by the _Adapter_ component.

**Monitoring.** The monitoring daemon is in charge of fetching the arrival rate from the dispatcher. We get the number of requests per second and pass it to the forecaster to predict the arrival workload for the next time interval.

**Adapter.** The Adapter consists of two sub-components, a time-series forecaster and a solver. The time-series forecaster predicts future workload based on the historical pattern of incoming requests to the system. The solver solves the ILP in Equation 1 (every 30 seconds) to achieve the highest possible accuracy while respecting the latency SLO and available resources, using the predicted workload and the current state of CPU allocation. Finally, the Adapter passes the set of models and their CPU cores to the cluster to enforce the system configuration, and the model quota variables to the dispatcher for load balancing the incoming workload.
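Section 7 notes that the prototype's solver enumerates all possible configurations and keeps the best one. The sketch below illustrates that brute-force search for the core constraints of Equation 1; the linear per-core throughput model and all numbers are illustrative assumptions (the loading-cost term \(\gamma\cdot LC\) is omitted for brevity).

```python
from itertools import product

# Brute-force sketch of the Equation 1 search: enumerate core assignments,
# keep feasible ones, and maximize alpha*AA - beta*RC.

def solve(variants, B, L, lam, alpha=1.0, beta=0.05):
    """variants: {name: (accuracy, throughput_per_core, latency_ms)}.
    Returns (best allocation {name: cores}, best score)."""
    names = list(variants)
    best, best_alloc = float("-inf"), None
    for cores in product(range(B + 1), repeat=len(names)):
        if sum(cores) == 0 or sum(cores) > B:      # RC <= B, non-empty
            continue
        alloc = {n: c for n, c in zip(names, cores) if c > 0}
        if any(variants[n][2] > L for n in alloc):  # p_m(n_m) <= L
            continue
        th = {n: variants[n][1] * alloc[n] for n in alloc}
        if sum(th.values()) < lam:                  # stability: sum th >= lambda
            continue
        # route load proportionally to capacity (one feasible lambda_m choice)
        q = {n: lam * th[n] / sum(th.values()) for n in alloc}
        AA = sum(q[n] / lam * variants[n][0] for n in alloc)
        score = alpha * AA - beta * sum(alloc.values())
        if score > best:
            best, best_alloc = score, alloc
    return best_alloc, best

# Assumed profile: (accuracy %, RPS per core, latency ms) per variant.
demo = {"r18": (69.8, 12.0, 300.0), "r50": (76.1, 6.0, 500.0),
        "r152": (78.3, 2.0, 740.0)}
alloc, score = solve(demo, B=14, L=750.0, lam=75.0)
```

With \(|M|\) variants and budget \(B\), this enumerates \((B+1)^{|M|}\) configurations, which is exactly the scalability limitation the paper discusses in Section 7.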
**Dispatcher.** The Dispatcher component load balances the incoming workload among the models in the cluster using the weighted round-robin algorithm, based on the models' quota variables, \(\lambda_{m}\), received from the solver in the Adapter component.

## 5. Experimental Evaluation

We have prototyped InfAdapter in a Kubernetes cluster of two homogeneous physical machines from the Chameleon Cloud (Kaloutsov et al., 2017), equipped with 48 CPU cores of type Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz and 192 GiB of RAM. TensorFlow Serving is used to serve the model variants in separate Docker containers.

**Batching and parallelism parameters.** Batching and parallelism parameters are practical configuration knobs of ML inference services. Batching refers to aggregating multiple requests into one request, which is widely adopted for GPU inference systems (Kaloutsov et al., 2017; Kaloutsov et al., 2018; Kaloutsov et al., 2019). However, as shown in Figure 4, inference on CPU does not substantially benefit from batching in terms of throughput, while increasing the batch size leads to higher latency. Intra-op parallelism defines the parallelism degree within an operation (such as matrix multiplication), and inter-op parallelism defines the parallelism across independent operations of inference requests (Kaloutsov et al., 2018; Kaloutsov et al., 2019; Kaloutsov et al., 2019). We measured the effect of batching and CPU intra-/inter-op parallelism on Resnet50 with 8 CPU cores in terms of throughput and latency. Experimental results are shown in Figure 4, which captures the throughput-average latency relation under different batching and parallelism configurations.

Figure 4. Throughput-Average latency for batch sizes of 1 (batching disabled) and 8 with different parallelism configurations on Resnet50 with an 8-CPU-core allocation. The starred configuration is the one chosen through our experiments.

Figure 3.
InfAdapter structure; variants can be scheduled (by the Kubernetes scheduler) in any of the nodes.

We choose (starred plot) to disable batching (set to 1), set inter-op parallelism to the number of CPU cores, and disable intra-op parallelism (set to 1) in InfAdapter across all the experiments, to get the best throughput with latency within the 750 ms SLO. Further, we observed the same trend for all other model variants and CPU allocations.

**Profiling methodology.** The InfAdapter optimization formulation (Equation 1) needs the throughput \(th_{m}(n_{m})\) of each variant \(m\) under different CPU core allocations \(n_{m}\). We use a linear regression model for each ML model variant and train these regression models using the profiling data of only 5 CPU allocations (1, 2, 4, 8, and 16 cores) out of all possible allocations to avoid extra profiling costs. The regression models are then used to predict the throughput of the variants under any CPU allocation. Figure 6 shows the prediction accuracy of the regression models for Resnet18 and Resnet50. In our experiments, the regression models predict the RPS accurately; for instance, the R-squared (\(R^{2}\)) values for the regression models of Resnet18 and Resnet50 are 0.996 and 0.994, respectively.

**Load forecaster.** We used LSTM (K at 600s, almost all the compared methods suffer from SLO violations with a non-negligible margin (e.g., we observed a 10-minute violation for Resnet152). However, InfAdapter and MS+ temporarily trade off a little accuracy in favor of being responsive to the load spike with a short SLO violation time. Between MS+ and InfAdapter, InfAdapter can achieve the same SLO attainment with less accuracy loss during the load burst. InfAdapter aims to provide a trade-off between the accuracy and cost objectives. Under \(\beta=0.05\), we observed that InfAdapter could balance the cost and accuracy objectives and comply with the latency SLO.
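The profiling step earlier in this section fits one linear regression per variant from five profiled allocations (1, 2, 4, 8, 16 cores) and then predicts throughput at unprofiled allocations. A minimal pure-Python sketch of that fit follows; the profiled RPS values are illustrative, not the paper's measurements.

```python
# Sketch of the profiling methodology: fit th(n) ~ a*n + b from a handful of
# profiled core counts, then predict throughput at unprofiled allocations.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

cores = [1, 2, 4, 8, 16]                  # the paper's profiled allocations
rps = [9.0, 18.5, 36.0, 71.0, 140.0]      # assumed profile of one variant

a, b = fit_line(cores, rps)
predict = lambda n: a * n + b             # stands in for th_m(n_m)
print(f"predicted RPS at 14 cores: {predict(14):.1f}")
```

Profiling only five points and interpolating linearly is what keeps profiling cost low; the high \(R^{2}\) values reported above (0.996, 0.994) indicate the linear model is a good fit for these variants.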
The same trend can be identified from the cumulative results of the entire experiment in Figure 7. InfAdapter always balances the cost and accuracy objectives better than MS+. The VPA variants, in contrast, mostly take an extreme position by maximizing only one objective; e.g., VPA-18 is the most cost-effective, but at the expense of being very inefficient in accuracy. Similarly, we used a non-bursty workload (Figure 8). We observed that InfAdapter has less accuracy loss than all other methods (except VPA+ with Resnet152, which has zero accuracy loss at the expense of high cost and SLO violations). Although in most cases InfAdapter has better SLO compliance, the difference between MS+ and InfAdapter in terms of cost and accuracy is small; we found the difference to be higher for a synthesized workload. In future work, we aim to evaluate InfAdapter's cost and accuracy trade-off with respect to SLO guarantees under different workloads. Refer to the appendix for experiments with \(\beta=0.0125\) and \(\beta=0.2\); lower values of \(\beta\) make InfAdapter prioritize accuracy over resource cost, and larger values do the opposite.

## 6. Related Work

Configuration of machine learning inference systems has gained considerable attention in recent years. Clipper (Clipper, 2016) is one of the early inference serving systems that introduced a general-purpose inference server with functionalities like caching, batching, and adaptive model selection. MARk (McCarthy et al., 2017) employs request batching, predictive scaling, and serverless functions, and proposes autoscaling policies that also take into consideration the hardware heterogeneity and service-type diversity (FaaS, CaaS, IaaS) of inference serving data centers. INFaaS (NIFaaS, 2017) provides an abstraction layer that decouples the model serving task from the model used for serving.
Per inference request, it searches through all available models for that specific inference task, finds a suitable model variant based on the request requirements, and dynamically loads and unloads models as user requirements change. Model switching (Zhu et al., 2018) is the first work that proposes switching between lightweight and heavier models as a workload adaptation mechanism: to be responsive to workload surges, it switches to a smaller but less accurate model. Unlike InfAdapter, it is not cost-aware and can only work under a fixed resource budget. Cocktail (Cocktail, 2018) is the work most similar to InfAdapter. It proposes an ensemble-learning approach to reduce cost while meeting the latency and accuracy requirements of prior work. Ensembling is a costly accuracy-maximization technique, since every request must be sent to all the ML models, and a large set of models is often needed to reach the accuracy of the largest model. Cocktail uses transient virtual machines to improve cost efficiency; however, unstable transient instances can cause interruptions in the inference service. Deploying InfAdapter on CaaS platforms like Google Autopilot does not suffer from similar problems. Due to fundamental structural differences and different problem formulations, we could not compare InfAdapter with Cocktail.

## 7. Future Works

**Hardware Heterogeneity.** While in this work we focused on homogeneous CPU inferencing, the performance of InfAdapter on general-purpose (GPU) and ASIC ML hardware remains to be evaluated. By packing requests into batches, GPUs can process higher workloads without a considerable increase in latency.

Figure 7. Comparison of InfAdapter with VPA used along with Resnet18, Resnet50 and Resnet152 on accuracy loss, cost and 99th percentile latency, for the whole experiment, under different \(\beta\) values.
**Scalability with ML.** Our proposed solution to the ILP problem works by brute-forcing through all possible configurations and picking the one that maximizes the objective function. Such an approach could suffer from scalability issues as the configuration space grows (e.g., more model variants, as with once-for-all networks (Koren et al., 2019) and their \(10^{15}\) possible variants, or bigger resource budgets in a larger experimental setting). Utilizing ML-based solutions can reduce the amount of sampling in the search space, resulting in faster decision-making.

**Multi-Model Serving.** When using accelerators like GPUs, it is hard to share them among several containers, as there is no built-in mechanism for GPU sharing in container orchestration platforms like Kubernetes (Kubernetes, 2022). The multi-model deployment pattern, adopted in most production ML model servers (Koren et al., 2019; Koren et al., 2019; Koren et al., 2019; Koren et al., 2019; Koren et al., 2022), can mitigate these issues. Considering these emerging ML serving paradigms for improving adaptation mechanisms is a potential future work.

## 8. Conclusion

In this work, we presented InfAdapter for ML inference services. It selects a set of ML model variants and their resource allocations to achieve a trade-off between accuracy and cost while preserving the latency SLO guarantee. Experiments on real-world traces showed that InfAdapter adapts to dynamic workloads better than existing solutions by utilizing scaling and ML model variant selection.

## Acknowledgements

This work has been supported in part by NSF (Awards 2233873, 2007202, and 2107463), German Research Foundation (DFG) within the Collaborative Research Center (CRC) 1053 MAKI, and Chameleon Cloud.
2305.19221
Microwave-to-optical conversion in a room-temperature $^{87}$Rb vapor with frequency-division multiplexing control
Coherent microwave-to-optical conversion is crucial for transferring quantum information generated in the microwave domain to optical frequencies, where propagation losses can be minimised. Among the various physical platforms that have realized coherent microwave-to-optical transduction, those that use atoms as transducers have shown rapid progress in recent years. In this paper we report an experimental demonstration of coherent microwave-to-optical conversion that maps a microwave signal to a large, tunable 550(30) MHz range of optical frequencies using room-temperature $^{87}$Rb atoms. The inhomogeneous Doppler broadening of the atomic vapor advantageously supports the tunability of an input microwave channel to any optical frequency channel within the Doppler width, along with simultaneous conversion of a multi-channel input microwave field to corresponding optical channels. In addition, we demonstrate phase-correlated amplitude control of select channels, resulting in complete extinction of one of the channels, providing an analog to a frequency domain beam splitter across five orders of magnitude in frequency. With frequency-division multiplexing capability, multi-channel conversion, and amplitude control of frequency channels, neutral atomic systems may be effective quantum processors for quantum information encoded in frequency-bin qubits.
Benjamin D. Smith, Bahar Babaei, Andal Narayanan, Lindsay J. LeBlanc
2023-05-30T17:09:50Z
http://arxiv.org/abs/2305.19221v2
Microwave-to-optical conversion in a room-temperature \({}^{87}\)Rb vapor with frequency-division multiplexing control

###### Abstract

Coherent microwave-to-optical conversion is crucial for transferring quantum information generated in the microwave domain to optical frequencies, where propagation losses can be minimised. Among the various physical platforms that have realized coherent microwave-to-optical transduction, those that use atoms as transducers have shown rapid progress in recent years. In this paper we report an experimental demonstration of coherent microwave-to-optical conversion that maps a microwave signal to a large, tunable 550(30) MHz range of optical frequencies using room-temperature \({}^{87}\)Rb atoms. The inhomogeneous Doppler broadening of the atomic vapor advantageously supports the tunability of an input microwave channel to any optical frequency channel within the Doppler width, along with simultaneous conversion of a multi-channel input microwave field to corresponding optical channels. In addition, we demonstrate phase-correlated amplitude control of select channels, resulting in complete extinction of one of the channels, providing an analog to a frequency domain beam splitter across five orders of magnitude in frequency. With frequency-division multiplexing capability, multi-channel conversion, and amplitude control of frequency channels, neutral atomic systems may be effective quantum processors for quantum information encoded in frequency-bin qubits.

## Main

Quantum information processing using solid-state qubits, such as superconducting qubits and spins associated with defects in solid-state systems, has shown remarkable progress in the field of quantum information technology [1; 2]. The typical operation frequency of these devices is \(\sim\)10 GHz, in the microwave region of the electromagnetic spectrum.
Due to the inherent room-temperature thermal-photon occupation of cables at GHz frequencies that leads to signal noise, practical implementation of scalable and distributed quantum networks at these frequencies poses a formidable challenge [3]. In recent years, progress has been made in addressing coherent conversion of signals from the microwave to optical regimes [4; 5]: at optical frequencies, not only are the thermal noise photons negligible at room temperature, but there exist efficient single-photon detectors and technologies for quantum state storage and reconstruction. Several systems have been used to coherently convert signals at microwave frequencies to the optical regime. These include electro- and magneto-optical devices [6; 7], opto-mechanical structures, [8] and atoms with suitable internal energy level spacings [9; 10; 11]. Impressive progress on improving the efficiency of the conversion process led to efficiency near 80% [11], and the largest reported conversion bandwidths of an input microwave signal are 15-16 MHz [12; 13]. Along with conversion bandwidth, a transducer that can accommodate several channels of input and output frequencies is advantageous, and tunability over input microwave frequencies across 3 GHz width was recently reported [14]. Yet, transducers that operate over multiple optical output frequencies have not been demonstrated to date. In this article we show that a limited bandwidth input microwave signal can be converted to a large tunable range of output optical frequencies, across 550(30) MHz. This capacity enables information encoded in a narrow-bandwidth microwave field to be mapped to several frequencies in the optical domain, achieving coherent frequency-division multiplexing (FDM). The ability to tune the optical signal output over such a large frequency range arises because of the presence of a large inhomogeneous optical Doppler width for Rb atoms at room temperature. 
The Doppler width also enables an input multiple-channel microwave idler to be simultaneously and coherently converted to a multiple-channel optical-signal output. In addition, we demonstrate correlated amplitude control of the converted multi-channel optical field with the ability to selectively extinguish a desired output frequency channel. This action is very similar to the frequency-domain beam-splitting operation for frequency bins demonstrated using electro-optic modulators (EOMs) and pulse shapers [15]. Coherent frequency-division multiplexing enables qubits encoded in a microwave frequency to be placed in any desired output optical channel within the range of tunability. Multiple qubits in different input microwave channels can also be coherently mapped to the optical domain. The ability to control the amplitude of selected output optical channels is very useful in performing single-qubit-gate operations involving both microwave and optical frequencies. With a generated linewidth of about 30 Hz, this results in a large channel capacity of about \(10^{7}\) channels over the output tunable range. Thus, neutral-atom systems are a promising candidate as a quantum frequency processor platform [16] for quantum information encoded in frequency-bin qubits. Microwave-to-optical conversion in a room-temperature \({}^{87}\)Rb atomic vapor is possible through a sum-frequency generation (SFG) process. The nonlinear process uses a hybrid, magneto-optical second-order [\(\chi^{(2)}\)] susceptibility induced in the vapor by the interaction of three energy levels with optical pump (electric-dipole transition) and microwave idler (magnetic-dipole transition) fields. (Note that atomic systems with only electric dipole transitions have a \(\chi^{(2)}\) of zero.) The pump frequency, chosen anywhere within this Doppler width [Fig.
3(b)], combines coherently with an input resonant microwave idler mode to generate a new optical signal field at frequency \(\omega_{\mathrm{S}}=\omega_{\mathrm{P}}+\omega_{\mathrm{I}}\). We begin by establishing a numerical model which explains the large frequency tunability and multi-channel conversion capability observed in this experiment, similar to that discussed for a generic three-level atomic system at room temperature, interacting with idler, pump, and signal fields [17]. The amplitudes of the idler and pump fields are \(\Omega_{\mathrm{I}}(0)\) and \(\Omega_{\mathrm{P}}(0)\) with the relative phase between the fields denoted by \(\Delta\phi\). The signal field is generated due to a sum frequency interaction between the idler and pump fields. The amplitude of the generated signal field \(\Omega_{\mathrm{S}}(l)\) from a medium of length \(\ell\), under a rotating wave and three-photon transformation is \[\Omega_{\mathrm{S}}(l)=\frac{i\Omega_{\mathrm{P}}(0)\Omega_{\mathrm{I}}(0)e^{ -i\Delta\phi}}{2\Gamma_{2}}(e^{-\beta\ell}-1), \tag{1}\] where the constant \[\beta = \frac{2\eta_{\mathrm{S}}\Gamma_{2}\Gamma_{3}}{4\Gamma_{1}\Gamma_{ 2}\Gamma_{3}+\Gamma_{3}|\Omega_{\mathrm{P}}(0)|^{2}+\Gamma_{2}|\Omega_{ \mathrm{I}}(0)|^{2}}, \tag{2}\] and \[\Gamma_{1}= \frac{\gamma_{P}}{2}+\frac{\gamma_{S}}{2}-i\;\Delta_{\mathrm{S}} (\mathbf{v}), \tag{3}\] \[\Gamma_{2}= \frac{1}{2}\gamma_{I}-i\left[\Delta_{\mathrm{S}}(\mathbf{v})-\Delta_{ \mathrm{P}}(\mathbf{v})\right],\] \[\Gamma_{3}= \frac{\gamma_{P}}{2}+\frac{\gamma_{S}}{2}+\frac{\gamma_{I}}{2}-i \;\Delta_{\mathrm{P}}(\mathbf{v}),\] \[\eta_{\mathrm{S}}(v)= \mathbf{d}_{\mathrm{ac}}^{2}\omega_{\mathrm{s}}N(v)/2\epsilon_{0}c,\] with \(N(v)\) is the number density of atoms in that velocity class with \(v=|\mathbf{v}|\). The value \(N(v)\) is the Maxwell-Boltzmann (MB) velocity distribution of atoms at room temperature. The Doppler-shifted detunings of the pump and the generated signal fields, as seen from the atom frame [Fig. 
(2 a, iii-v)], are denoted as \(\Delta_{\mathrm{P}}=\delta_{\mathrm{P}}-\mathbf{k}_{\mathrm{P}}\cdot\mathbf{v}\) and \(\Delta_{\mathrm{S}}=\delta_{\mathrm{S}}-\mathbf{k}_{\mathrm{S}}\cdot\mathbf{v}\), while the corresponding detunings in the lab frame are \(\delta_{\mathrm{P}}\) & \(\delta_{\mathrm{S}}\) [Fig. 1(c) & Fig. 2(a, i)]. The microwave idler field has negligible Doppler shift for all velocity values [Fig. 2(a, i)]. The dipole matrix element between levels \(|a\rangle\) and \(|c\rangle\) is \(\mathbf{d}_{\rm ac}\). The natural decay rates from the \(|a\rangle\) level to both the \(|c\rangle\) and \(|b\rangle\) levels are denoted by \(\gamma_{P}\) and \(\gamma_{S}\), respectively [Fig. 1 (c)].

Figure 1: (a) Schematic of the atomic frequency-division multiplexing scheme. The microwave cavity-vapor cell system acts analogously to an RF mixer, and information encoded in idler channels (\(\blacktriangle\), \(\blacklozenge\), \(\blacksquare\)) is combined coherently with multiple pump channels to produce many output signal channels over a range of optical frequencies. (b) A representation of our microwave cavity and the generation process. The optical pump beam passes transversely through the cavity and cell and is polarized along the \(z\)-axis, parallel to a weak magnetic field (not shown). The cavity’s TE\({}_{011}\) mode overlaps with the pump interaction region inside the vapor cell. The arrows define the beam propagation and polarization directions. (c) Three-level energy diagram showing the cyclical transition scheme. The idler (\(\omega_{I}\)) and pump (\(\omega_{P}\)) fields combine coherently in the vapor to produce the signal field (\(\omega_{\mathrm{S}}\)). The frequencies shown are for \({}^{87}\)Rb; in a room-temperature vapor, the excited state addressed by the pump field is Doppler-broadened by approximately 550 MHz.
To maximise the generated signal field, sufficiently high input pump and idler field intensities and a small value for \(\Gamma_{2}\) [Eq. (1)] are required. From Eq. (3) this is achieved when \[\Delta_{\rm S}(\mathbf{v})-\Delta_{\rm P}(\mathbf{v})=0. \tag{4}\] Typically, the wavelengths of optical fields are such that \(k\equiv|\mathbf{k}_{S}|\sim|\mathbf{k}_{P}|\gg|\mathbf{k}_{I}|\), in which case Eq. (4) reduces to \((\delta_{S}-\delta_{P})=0\). Since the signal and pump fields travel in the same direction, the phase-matching condition \(\mathbf{k}_{I}+\mathbf{k}_{P}=\mathbf{k}_{S}\) is automatically satisfied for all atomic velocity classes. The maximum signal generation for each velocity class is obtained when \(\delta_{\rm S}\) of the signal-field is the same as the pump-field detuning \(\delta_{\rm P}\), such that different velocity classes participate in the maximum signal-field generation at different pump-field detunings. Therefore, changing the laboratory pump frequency within the Doppler width to \(\omega_{P}^{\prime}=\omega_{P}+\delta\) gives rise to a corresponding signal field at \(\omega_{S}^{\prime}=\omega_{S}+\delta\), thus enabling a large tunable generation bandwidth. This is depicted numerically in Fig 2(a, ii) for atoms at T = 300\({}^{\circ}\) C. The shape of the generated signal field is primarily determined by the MB velocity distribution. The broad Doppler width at room temperature and guaranteed phase-matching for sum-frequency generation [Eq. (4)] make possible the simultaneous conversion of multi-channel microwave field inputs to a multi-channel optical signal outputs. Multiple frequency channels in the idler and pump fields are obtained by either amplitude or phase modulation. Specifically, the idler field amplitude is modulated as \(\Omega_{I}(t)=\Omega_{I0}(1+a_{m}\sin(\omega_{\rm AM})t)\) and phase modulation of the pump field takes the form \(\Omega_{p}(t)=\Omega_{p0}\sin(\omega_{p}t+p_{m}\sin(\omega_{\rm PM})t+\Delta\phi)\). 
The symbols \(a_{m}\) and \(p_{m}\) denote amplitude and phase modulation indices. The modulation frequencies are \(\omega_{\rm AM}\) and \(\omega_{\rm PM}\) and \(\Delta\phi\) denotes the relative phase between the two modulation fields. By multiplying the modulated idler and pump fields to obtain the signal field, we derive the amplitudes for the generated signal field and its first order side-bands for the case of equal modulation frequencies \(\omega_{\rm AM}=\omega_{\rm PM}=\delta\) and for equal modulation indices \(a_{p}=a_{m}=0.2\). At equal modulation frequencies, the pump and idler fields associated with the side-band channels interfere with each other [Fig. 2(c)] enabling control of their amplitudes through \(\Delta\phi\)[18]. The powers of the positive and negative sideband channels at \(\omega_{\rm S}\pm\delta\) are proportional to \(a_{m}^{2}+a_{p}^{2}+2a_{m}a_{p}\sin(\Delta\phi)\) and \(a_{m}^{2}+a_{p}^{2}-2a_{m}a_{p}\sin(\Delta\phi)\) (see Supplementary Material). The power variations at \(\pm\delta\) sidebands, as a function of \(\Delta\phi\), are shown in Fig. 2(d). Figure 2: (a-i) Lab-frame cartoon depicting multiple pump frequency channels combining with a single idler to produce multiple signal channels. (ii) These pump frequencies access independent atomic velocity classes from a thermal MB distribution. (iii-v) In their reference frame, atoms in these velocity classes see the respective pump Doppler-shifted to resonance. Generation occurs in the velocity class for which the three-photon resonance condition (i.e. energy conservation) is satisfied. (b) Calculated spectra of an amplitude-modulated idler field showing sidebands (SBs); the carrier and SBs combine with the pump to produce a multichannel signal spectrum. (c) When the idler and pump are both modulated at the same frequency, multiple generation pathways interfere producing an upper and lower generated signal SBs. 
(d) Calculated curves showing correlated signal SB amplitudes, which vary coherently with the relative phase of the two modulations depicted in (c). Both AM and PM modulation indices are 0.2.

## Results

Figure 1(c) shows the energy levels of the \({}^{87}\)Rb atoms used as the nonlinear medium in this experiment. The TE\({}_{011}\) mode of our cylindrical copper microwave cavity [Fig. 1(b)] supports the idler field (\(\omega_{\rm I}\)), connecting the \(|b\rangle\equiv|5S_{1/2},F=1\rangle\) and \(|c\rangle\equiv|5S_{1/2},F=2\rangle\) ground state levels through a magnetic-dipole transition. The pump optical field (\(\omega_{\rm P}\)) induces an electric-dipole transition between \(|c\rangle\) and \(|a\rangle\equiv|5P_{3/2},F^{\prime}=1,2\rangle\). We define \(\delta\) as the detuning of the pump field from the mid-energy point between the \(|F^{\prime}=1\rangle\) and \(|F^{\prime}=2\rangle\) excited levels. First, we show that light is generated on the \(|a\rangle\rightarrow|b\rangle\) transition. The \(\chi^{(2)}\) interaction mediated by these atoms between the pump and microwave fields results in a sum-frequency process that generates a corresponding signal optical field. By combining the generated optical light with an optical local oscillator, we detect the generated light at the heterodyne beat frequency \(f_{\rm het,\ \delta}\) (see Methods). To probe the frequency-tunable nature of the conversion process, we scan the pump laser frequency across the Doppler width. At any instant during the scan, the pump field interacts with one velocity class of atoms, which see this frequency on resonance with the pump transition. The generated signal reaches a maximum amplitude between the \(F\)=\(2\to F^{\prime}=\)1 and \(F^{\prime}\)=2 transitions [Fig. 3(b)], with a full width at half maximum (FWHM) of 550(25) MHz.
This generated spectrum corresponds to the combined Doppler-dependent absorption coefficients for the \(F=1\to F^{\prime}=1,\ 2\) transitions, which has a FWHM of 540 MHz. The \(F^{\prime}=3\) state does not participate in this nonlinear process, since selection rules prohibit an electric-dipole transition to the \(F=1\) ground state. Similarly, the pump transition from \(F=2\to F^{\prime}=0\) is electric-dipole forbidden. Next, we explore the multi-channel character of the conversion process. The pump laser is first tuned halfway between the \(F=2\to F^{\prime}=2\) & 3 transitions. By modulating either the idler or the pump, multiple frequency components are generated in the signal: we amplitude-modulate (AM) the microwave idler signal at \(\omega_{\rm AM}\) and observe sidebands in the signal at \(\omega_{S}\ \pm\ \omega_{\rm AM}\). The conversion bandwidth from idler modulation is limited to a maximum of 1 MHz by the width of the cavity resonance. Next, we additionally phase-modulate (PM) the pump using a fiber EOM in the optical path and observe that the idler and pump combine together as dictated by the sum-frequency process [Eq. (1)]. The MB width and density determine the bandwidth and amplitudes of the pump-modulated generated sidebands. Both types of sideband generation are illustrated in Fig. 4(a), including simultaneous idler AM and pump PM. In the three cases depicted, \(\omega_{\rm AM}/2\pi=50\) kHz is fixed and \(\omega_{\rm PM}/2\pi\) is varied from 10 MHz to 100 MHz. When the pump and idler are modulated at the same frequency, \(\omega_{\rm PM}=\omega_{\rm AM}\), the sidebands from each input overlap and interfere. By tuning the relative phase of the modulating waves, one sideband or the other can be selectively enhanced or eliminated, with a relative visibility of 97% [Fig. 4(b-i)], in qualitative agreement with the calculated variation shown in Fig. 2(d). 
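The sideband interference described above follows the proportionality stated earlier: the powers of the \(\pm\delta\) sidebands scale as \(a_{m}^{2}+a_{p}^{2}\pm 2a_{m}a_{p}\sin(\Delta\phi)\). A short numeric sketch, using the modulation indices of 0.2 from Fig. 2(d), reproduces the complete extinction of one sideband at the appropriate relative phase (the function name and overall normalization are illustrative):

```python
import math

# Relative sideband powers for equal AM (idler) and PM (pump) modulation
# frequencies: P(+/- delta) ~ a_m**2 + a_p**2 +/- 2*a_m*a_p*sin(dphi).

def sideband_powers(a_m, a_p, dphi):
    upper = a_m**2 + a_p**2 + 2 * a_m * a_p * math.sin(dphi)
    lower = a_m**2 + a_p**2 - 2 * a_m * a_p * math.sin(dphi)
    return upper, lower

up, lo = sideband_powers(0.2, 0.2, math.pi / 2)  # indices as in Fig. 2(d)
# At dphi = pi/2 with equal indices, the lower sideband is fully suppressed
# and the upper sideband carries all the power.
assert abs(lo) < 1e-12 and abs(up - 0.16) < 1e-12
```

For unequal indices (\(a_{m}\neq a_{p}\)) the suppressed sideband never reaches zero, which is consistent with the frequency-domain beam-splitter picture where the indices and phase act as reflection and transmission coefficients.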
The efficiency of conversion from microwave to signal photons is defined as the ratio of the rate of signal photon production to the rate of microwave photons residing in the cavity mode [9; 10]: \[\eta\equiv\frac{P_{\rm signal}/\hbar\omega_{\rm signal}}{P_{\rm idler}/ \hbar\omega_{\rm idler}}. \tag{5}\] From this work, we report a maximum conversion efficiency of \(\eta_{0}=3.0(2)\times 10^{-9}\). Further details on conversion efficiency calculations are included in the Supplementary Material.

## Discussion

The frequency tunability of the generated signal field results from the existence of a large Doppler width at room temperature. While this work uses a warm vapor cell of enriched \({}^{87}\)Rb without an anti-relaxation coating or a buffer gas, a buffer gas would provide additional tunability due to collisional broadening [19; 20; 21]. This tunability is ultimately limited to the order of the ground-state hyperfine splitting of the atoms used. While other conversion schemes display degrees of tunability of the output frequency [9; 11; 13; 14], these are often restricted by natural atomic or optical cavity resonance frequencies. In the warm-atom system, continuous tunability over the entire Doppler width is possible. This frequency multiplexing capability enables multiple microwave sources to be interfaced with a single atomic transducer, and the conversion can be used to place differing sources in distinct optical frequency domains. Switching between different optical frequencies for a single source is also possible. Our scheme features a conversion bandwidth of 910(20) kHz [see Fig. 5(a)], which is primarily limited by the cavity linewidth. The high-contrast amplitude control of signal sidebands enabled by simultaneous pump and idler modulation [Fig. 4(b)] is equivalent to a tunable beam splitter operating in the frequency domain [15]. The modulation indices and the relative phase act together as reflection and transmission coefficients.
For particular values of these coefficients we can selectively generate a chosen sideband channel in the optical domain while completely suppressing the other, resulting in single-sideband conversion. Thus, our neutral atom system is capable of performing as a "quantum frequency processor" [22; 16; 23] with particularly low input-field intensities. This allows us not only to encode and manipulate information in multiplexed frequency bins but also to convert it from microwave to optical frequency domains. The conversion efficiency here is comparable with magnon-based microwave-to-optical conversion schemes [13; 14]. The warm-atom efficiency is inherently limited by the intrinsic weakness of the magnetic-dipole microwave transition, spin-exchange collisions, and transit decoherence. A larger optical beam combined with a buffer gas [24; 25] and an anti-relaxation coating on the cell walls [26; 27; 28; 29] would help reduce spin decoherence from the dark state and boost conversion efficiency. Experiments using cold atoms for microwave-to-optical conversion have employed optical depths of up to 15 [9] and 120 [11]. Efficiency in our system is limited by an optical depth of \(0.05(1)\), corresponding to a number density of about \(4\times 10^{16}\) cm\({}^{-3}\) with \(8.7(1)\times 10^{8}\) atoms in the interaction region. This atom number is orders of magnitude smaller than the \(3.2(1)\times 10^{18}\) available microwave photons in the same volume. In summary, we have shown that inhomogeneous Doppler broadening in a warm vapor cell provides significant tunability of the output optical frequency in microwave-optical transduction, resulting in frequency division multiplexing. The inhomogeneous width also enables simultaneous frequency up-conversion of multi-channel microwave inputs. Amplitude control of up-converted optical channels demonstrates analogous frequency-domain beam-splitting action across five orders in frequency.
The frequency division multiplexing capability combined with amplitude control can make neutral atoms viable as quantum processors for information encoded in frequency-bin qubits.

## Methods

### Experimental Setup

The experimental setup can be seen in Fig. 3(a). The pump optical field is derived from a continuous-wave external cavity diode laser (MOGLabs CEL) operating at 780.24 nm. This laser is frequency-stabilized to the saturated absorption spectrum of a separate Rb reference vapor cell.

Figure 3: (a) Schematic of the experimental setup: Central to the experiment is a microwave cavity supporting a TE011 mode, polarized along the \(y\)-axis. The light incident on the cavity is polarized along the \(z\)-axis and propagates along the \(x\)-axis. The vapor cell inside the cavity has enriched \({}^{87}\)Rb. Unshifted light from the diode laser (ECDL) is used as a local oscillator for heterodyne detection; it is combined with the other beam in a fiber 50:50 beam splitter (BS), producing two beat signals at \(f_{\text{het, P}}\) and \(f_{\text{het, S}}\). AOM - Acousto-optic modulator; EOM - Electro-optic modulator; SG - signal generator; SA - spectrum analyzer; PD - photodiode. A vector network analyzer used in certain measurements is not shown here. (b) The generated signal measured with the spectrum analyzer in zero-span mode. This peak has a max efficiency of \(\eta_{0}=3.0(2)\times 10^{-9}\) and a FWHM of 550(25) MHz. The dashed curve shows the calculated Doppler-broadened absorption coefficient of the pump light; the individual components due to the \(F^{\prime}=1\) and \(F^{\prime}=2\) excited hyperfine levels are shown as dotted curves. Without any fitting or free parameters, the generated signal data is proportional to the MB absorption distribution of the pump transitions. (c) The generated signal amplitude responds roughly linearly to increased pump optical power. (d) The response of the generated signal amplitude to microwave powers from SG1.
We have the option to lock the laser frequency 80 MHz below the \(F^{\prime}=2\)-\(3\) crossover transition with a locked linewidth of \(\sim 100\) kHz. Alternatively, we can unlock and move the laser frequency off-resonance. The pump beam is then shifted upwards in frequency by \(f_{\rm{AOM}}=76.60\) MHz with an acousto-optic modulator and transmitted to the cavity-vapor cell apparatus via optical fiber. After the optical fiber, the polarization is filtered with a polarizing beam splitter (PBS) and the power of the pump light is stabilized with a servo controller (NuFocus LB1005) to \(<1\%\) fluctuations. The Rabi frequency of the pump laser \(\Omega_{\rm{P}}/2\pi\) has a value of 3.5 MHz [30], and the pump is linearly polarized along the \(z\)-axis. Inside the cavity, the idler microwave field is polarized along the \(y\)-direction. Orthogonal Helmholtz coils null the external magnetic field along the \(x\) and \(y\)-axes and apply a weak DC magnetic field \(\vec{B}_{z}\) along the \(z\)-axis. The magnetic field strength has a striking impact on the conversion efficiency. As shown in Fig. 6(b), conversion is strongly suppressed at zero magnetic field and reaches maxima at \(\sim\pm 200\) mG. This behavior is well-described by a symmetric, generalized Fano lineshape [31] that has the simplified form \[\eta(B_{z})=\eta_{0}^{\prime}\left[\frac{2\gamma_{B}(B_{z}-B_{0})}{\gamma_{B}^ {2}+(B_{z}-B_{0})^{2}}\right]^{2}. \tag{6}\] This points to the participation and interaction of multiple Zeeman levels in the generation process. At Zeeman degeneracy, generation pathways may destructively interfere. The value \(\gamma_{B}\times\gamma_{0}=140\) kHz \(\sim\gamma_{I}\), the ground state decoherence rate, where \(\gamma_{0}\) is the ground state gyromagnetic ratio of \({}^{87}\)Rb [32]. Further discussion goes beyond the scope of this work and will be the topic of future investigation.
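The magnetic-field dependence of Eq. (6) is easy to explore numerically. The sketch below evaluates the symmetric Fano profile with the fit parameters reported in the Fig. 6(b) caption (\(\eta_{0}^{\prime}=0.95\), \(\gamma_{B}=200\) mG, \(B_{0}=-5\) mG), confirming full suppression at \(B_{z}=B_{0}\) and maxima offset by \(\pm\gamma_{B}\), consistent with the observed maxima near \(\pm 200\) mG.

```python
def eta_fano(Bz, eta0=0.95, gamma_B=200.0, B0=-5.0):
    """Relative conversion efficiency vs. magnetic field Bz (in mG), Eq. (6)."""
    x = Bz - B0
    return eta0 * (2.0 * gamma_B * x / (gamma_B**2 + x**2)) ** 2

# Conversion is suppressed at Bz = B0 and maximal at Bz = B0 +/- gamma_B,
# i.e. near +195 mG and -205 mG for the quoted fit values.
```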
The generated signal light emerges from the cavity polarized along the \(y\)-axis, orthogonal to the pump [33]. Before recoupling to an optical fiber, a half-waveplate is manually adjusted to optimize the signals at \(f_{\rm{het,~{}P}}\) and \(f_{\rm{het,~{}S}}\) for each respective measurement. The light is then combined in a 50:50 fiber beam splitter with local oscillator laser light that was not shifted by the acousto-optic modulator (AOM). The interference produces heterodyne beat notes at \(f_{\rm{het,~{}P}}=f_{\rm{AOM}}\) and \(f_{\rm{het,~{}S}}=\omega_{\mu}/2\pi+f_{\rm{AOM}}=6.911\,282\) GHz. A high-bandwidth amplified photodiode (PD1, EOT ET-4000AF) detects the beating signals. This electronic signal is amplified (AMP, Mini-Circuits ZJL-7G+) and its amplitude is measured by a spectrum analyzer (SA, R&S FSV3013). This heterodyne peak at \(f_{\rm{het,~{}S}}\) has a FWHM of 26(2) Hz [see Fig. 5(b)], larger than the \(<2\) Hz linewidth of our microwave source, SG1. We presume the presence of additional perturbations (e.g. 60 Hz line noise) broadens and modulates our generated signal.

Figure 4: (a-i) Multichannel generation using a fixed microwave amplitude modulation frequency of \(\omega_{\rm{AM}}/2\pi=50\) kHz and a variable pump phase modulation frequency \(\omega_{\rm{PM}}/2\pi\) of 10 (blue), 50 (red) and 100 (black) MHz. The relative sideband amplitude decreases for increased \(\omega_{\rm{PM}}\) because the pump light accesses velocity classes with decreased occupation. (a-ii) Zoomed-in data revealing AM sidebands flanking each of the central and SB EOM peaks. (b) When the two modulation frequencies are equal, i.e. \(\omega_{\rm{PM}}=\omega_{\rm{AM}}\), the EOM and AM sidebands overlap and interfere. The relative phase difference \(\Delta\phi\) between the modulating waves controls the height of the two correlated generated sidebands. (b-i) The generated spectrum at zero relative phase.
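The heterodyne frequencies follow from simple bookkeeping: the local oscillator is not AOM-shifted, so the residual pump beats at \(f_{\rm AOM}\) while the sum-frequency signal beats at \(\omega_{\mu}/2\pi+f_{\rm AOM}\). A quick check with the quoted values:

```python
# Beat-note bookkeeping for heterodyne detection against the unshifted LO.
f_AOM = 76.60e6    # Hz, AOM shift applied to the pump
f_mu = 6.834682e9  # Hz, idler frequency (87Rb ground-state hyperfine splitting)

f_het_P = f_AOM          # residual pump beat note
f_het_S = f_mu + f_AOM   # generated-signal beat note
print(f"f_het,S = {f_het_S / 1e9:.6f} GHz")
```

This reproduces the quoted 6.911 282 GHz signal beat note.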
(b-ii) The relative power amplitudes of the two peaks respond with a visibility of 97%, while the central peak is independent of the relative phase. The two SB amplitudes differ from each other by \((\pi-0.38)\) radians of phase. The dashed lines show independent fits to a cosine.

### Broad-tunability

To access all the frequencies within the Doppler profile, the pump laser frequency is swept across the Doppler profile. We trigger the SA off the sweep ramp and measure the heterodyne signal in zero-span mode with a resolution bandwidth of 2 kHz. Simultaneously with the sweep, we also trigger and record the transmission of pump light through a reference vapor cell of rubidium (natural abundance) placed near the microwave cavity at an ambient 21.6(3) \({}^{\circ}\)C. We find good agreement between the measured spectrum and a theoretical transmission model [34]. Using the \({}^{87}\)Rb and \({}^{85}\)Rb transmission dip extrema, we calibrate our sweep rate to 17.2 MHz/ms and convert the time axis of the series to frequency [Fig. 6(a)].

### Multichannel Conversion

We generate a multichannel optical signal field in two different ways: first, the signal generator (SG1) supplying the idler field is amplitude-modulated at a variable \(\omega_{\rm AM}\). We note that due to the finite width of the microwave cavity resonance, \(\omega_{\rm AM}/2\pi\) can be tuned up to 455(10) kHz before the sidebands become significantly suppressed. Second, we modulate the pump light by using a fiber phase-modulator EOM (EOSPACE PM-0S5-20-PFA-PFA-780) inserted into the pump light optical path. We drive the EOM with a second signal generator (SG2, SRS SG384) at a variable \(\omega_{\rm PM}\). This produces sidebands in the pump optical spectrum, and corresponding sidebands are visible in the generated spectrum around the heterodyne frequency \(f_{\rm het,~{}S}\). The SA and SG2 are both phase-locked to the internal 10 MHz clock reference of SG1.
In this second case, \(\omega_{\rm PM}\) can be tuned over the Doppler FWHM before the sidebands become significantly suppressed.

Figure 5: (a) The raw generated signal during a scan of \(\delta_{I}\), as measured with a vector network analyzer in S21 mode. We measure a conversion bandwidth (FWHM) of 910(20) kHz when the laser is locked on-resonance and with a \(z\)-magnetic field of 209 mG. The dashed line is a smoothing spline used to determine the bandwidth. No generation is visible when the laser is off-resonance. The cavity resonance has a FWHM of 417 kHz, as measured with the VNA in S11 mode. (b) A close-up of the unmodulated generated line revealing a FWHM of 26(2) Hz.

Figure 6: (a) The Doppler-broadened transmission spectrum of a reference rubidium vapor cell (outside the microwave cavity) at room temperature, measured while sweeping the pump laser frequency. The dotted line shows a transmission model curve with good agreement to the measured trace. This trace was taken simultaneously with the data shown in Fig. 3(b); the positions of the two Doppler peaks are used to calibrate the horizontal frequency scale. (b) The generated signal’s response to a variable magnetic field strength \(\vec{B}_{z}\) along the \(z\)-axis. We operate at 209 mG for maximum generation. We fit a symmetric Fano profile [Eq. (6)] to these data, with fit parameters \(\eta_{0}^{\prime}=0.95(2)\), \(\gamma_{B}=200(4)\) mG and \(B_{0}=-5(3)\) mG. This behavior might indicate the interaction and interference of multiple Zeeman level resonances within the decoherence width of our atoms.

### Conversion Bandwidth

We characterize the conversion bandwidth of our microwave-to-optical conversion process. To do this, we replace SG1 and SA with Ports 1 and 2, respectively, of a vector network analyzer (VNA, Keysight E5063A). The generated signal is measured on the VNA in S21 mode and is shown in Fig. 5(a).
When the laser is locked on resonance and at the optimal magnetic field of 209 mG, a clear peak in the S21 signal is visible with FWHM = 910(20) kHz. This width is compared to the 417 kHz width of the cavity resonance. When the laser is off-resonance, no generation is apparent. We also characterize the effects of various pump and idler powers, as shown in Figs. 3(c-d). For a good signal-to-noise ratio, we typically operated with a pump power near 76 \(\mu\)W and an idler power of 3.2 mW.

###### Acknowledgements.

We gratefully acknowledge that this work was performed on Treaty 6 territory, and as researchers at the University of Alberta, we respect the histories, languages, and cultures of First Nations, Métis, Inuit, and all First Peoples of Canada, whose presence continues to enrich our vibrant community. We are indebted to Clinton Potts for machining the microwave cavity used in this work. We are also grateful to the John Davis, Mark Freeman, Robert Wolkow, and Vadim Kravchinsky groups for generously lending equipment used in these experiments, and to Ray DeCorby for helpful feedback on this work.

## Data Availability

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

## Supplementary Information

**Microwave Cavity & Vapor Cell**: The cylindrical microwave cavity is machined out of low-oxygen copper, with an internal diameter of 58 mm and length of 53(2) mm. The resonance frequency of the cavity TE011 mode is precisely tuned to the rubidium ground state transition, \(\omega_{0}=2\pi\times 6.834\,682\) GHz, by a micrometer attached to the cavity endcap. Microwave power to the cavity is supplied by a low-noise signal generator (SG1, AnaPico APSIN12G). As the ground state magnetic-dipole transition is weak, the cavity mode enhances this transition probability.
Inside the cavity is a rectangular quartz cell with external dimensions of 30 × 12.5 × 12.5 mm (1 mm thick walls) containing enriched \({}^{87}\)Rb. It has a cylindrical stem that protrudes from the other face of the cavity and is gently clamped in place by a support structure water-cooled to 16 \({}^{\circ}\)C. In addition, the cavity is wrapped with silicone thermal heating pads and aluminum foil, and is temperature-stabilized at 37 \({}^{\circ}\)C. We discovered that excess rubidium inside the main volume deposits and forms a thin metallic film on the inner cell walls. This metallization scatters the internal microwave mode, thereby lowering the internal cavity quality factor. The cooled stem pulls the excess rubidium out of the cavity, and the heated cavity vaporizes the deposited metal. With both heating and cooling, we measure the loaded quality factor (Q) of our cavity with a vector network analyzer in S11 mode to be 9000 ± 100. The rubidium vapor density is ultimately set by the temperature of the cold stem through a "reservoir" effect [35]. From our measured peak pump optical depth of 0.03(1) we calculate a number density of \(1.7(6)\times 10^{16}\) m\({}^{-3}\).

**Cavity Mode Volume**: We performed COMSOL simulations of our microwave cavity and the TE\({}_{011}\) mode of interest. With the simulation results, we calculated the mode volume from the equation [4]: \[V_{\mathrm{m}}=\frac{\int_{V_{\mathrm{ex}}}\left[|\mathbf{B}(\mathbf{r})|^{2}/\mu(\mathbf{r})\right]d^{3}\mathbf{r}}{\left(|\mathbf{B}(\mathbf{r})|^{2}/\mu(\mathbf{r})\right)_{\mathrm{max}}}\] (S1) to be 9.19 cm\({}^{3}\), where \(V_{\mathrm{ex}}\) is the internal volume of the microwave cavity.

**Interaction Volume**: We determine the conversion interaction volume to be the region inside the vapor cell illuminated by the pump laser beam. We make the assumption that the microwave cavity mode is uniform over this volume. The pump beam incident to the microwave cavity is collimated with a \(1/e^{2}\) radius of \(r_{0}=0.87\) mm and a power of 75 \(\mu\)W.
Due to the hole aperture of \(r_{\mathrm{h}}=1.5\) mm radius, about 95% of the incident light passes through. The effective uniform beam has an area of: \[A_{eff}=\int_{0}^{r_{\mathrm{h}}}2\pi re^{-(r/r_{0})^{2}}dr =\pi r_{0}^{2}\left(1-e^{-(r_{\mathrm{h}}/r_{0})^{2}}\right)\] (S2) \[\simeq 0.95\times\pi r_{0}^{2}\] (S3) \[=2.21(4)\ \mathrm{mm}^{2}\] (S4) and the volume overlapping with the atoms is \(V_{\mathrm{int}}=19(1)\ \mathrm{mm}^{3}\).

**Efficiency**: The generated optical power is weaker than the residual pump by a factor determined by the relative amplitudes of the two heterodyne peaks \(P_{\mathrm{SA,\ pump}}\) and \(P_{\mathrm{SA,\ signal}}\) measured (in dBm) on the spectrum analyzer: \[P_{\mathrm{signal}}/P_{\mathrm{pump}}=10^{(P_{\mathrm{SA,\ signal}}-P_{\mathrm{SA,\ pump}}+\zeta)/10}.\] (S5) The correction term \(\zeta\) is the combined frequency response and insertion loss difference between the two heterodyne frequencies. It is calibrated from the response of a beating signal between the local oscillator light and a weak beam from a separate tunable laser. To estimate the available microwave photon rate, we first determine the fraction of the source power \(P_{\rm cav}/P_{0}\) which is coupled into the cavity. From an S21 measurement of the cable, its insertion loss \(IL\) was determined to be -2.928(3) dB, and the cavity absorbs \(\Delta RL=-23\) dB beyond this, meaning that all but 0.5% of the incident power is transmitted to the cavity. \[P_{\rm cav}/P_{0}=10^{(IL-\Delta RL)/10}\] (S6) We now assume that \(P_{\rm cav}/2\) is distributed uniformly over the magnetic TE\({}_{011}\) mode volume. The factor of 2 arises because power is equally shared between the electric and magnetic components of the mode, but only the magnetic component participates. Only the portion of \(V_{\rm m}\) that overlaps with the laser interaction volume \(V_{\rm int}\) participates in the conversion process.
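A quick numerical check of Eqs. (S2)-(S4), using the beam radius and aperture quoted in the text; the closed-form Gaussian clipping expression is evaluated directly:

```python
import math

# Effective uniform beam area for a collimated Gaussian beam (1/e^2 radius r0)
# clipped by the cavity hole aperture of radius r_h, Eqs. (S2)-(S4).
r0 = 0.87    # mm, 1/e^2 beam radius
r_h = 1.50   # mm, aperture radius

transmitted_fraction = 1.0 - math.exp(-((r_h / r0) ** 2))
A_eff = math.pi * r0**2 * transmitted_fraction   # mm^2
print(f"transmitted fraction ~ {transmitted_fraction:.3f}, A_eff ~ {A_eff:.2f} mm^2")
```

This reproduces the ~95% transmitted fraction, and the area comes out within a few percent of the quoted 2.21(4) mm\({}^{2}\).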
Therefore, the microwave power involved in the conversion is: \[P_{\rm idler}=\frac{P_{0}}{2}\frac{V_{\rm int}}{V_{\rm m}}10^{(IL-\Delta RL)/10}\] (S7) A critical difference between this efficiency calculation and the previous work of Adwaith et al. [10] is that they divided by \(V_{\rm cav}\) instead of the more accurate \(V_{\rm m}\)[4]. Finally, we calculate the efficiency from Eq. (5) to be: \[\eta=2\frac{P_{\rm pump}}{P_{0}}\frac{V_{\rm m}}{V_{\rm int}}\frac{\omega_{I}}{\omega_{S}}\frac{10^{(P_{\rm SA,\ signal}-P_{\rm SA,\ pump}+\zeta)/10}}{10^{(IL-\Delta RL)/10}}.\] (S8)

**Modulated Sideband Interference**: A derivation of the sideband amplitudes of the generated signal field with amplitude modulation of the idler field and phase modulation of the pump field is given below. The derivation is for equal modulation frequencies \(\delta\), with the relative phase of the modulations taken as \(\Delta\phi\). The pump field phase modulation is represented as \[\Omega_{p}(t)=\Omega_{p0}\sin[\omega_{\rm P}t+a_{p}\sin(\delta t)].\] (S9) Here \(\Omega_{p0}\) is the initial value of the pump field amplitude and \(a_{p}\) is the phase modulation index. The idler field is amplitude modulated as \[\Omega_{I}(t)=\Omega_{I0}[1+a_{m}\sin(\delta t+\Delta\phi)]\sin(\omega_{\rm I}t),\] (S10) with amplitude modulation index \(a_{m}\). The generated field is obtained by multiplying the idler and pump fields. This is equivalent to summing their frequencies in the frequency domain. The resultant expression for the generated field under the approximation of small phase modulation index values is \[\Omega_{\rm s}(t) = \Omega_{p0}\Omega_{I0}\left[1+a_{m}\sin(\delta t+\Delta\phi)\right]\sin(\omega_{\rm I}t)\times\left\{\sin(\omega_{\rm P}t)+a_{p}\cos(\omega_{\rm P}t)\sin(\delta t)\right\}\] (S11) Using trigonometric identities, Eq. (S11) can be grouped based on the frequencies generated in the signal field through the sum-frequency process.
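As a sanity check of the bookkeeping in Eqs. (S5)-(S7) and Eq. (5), the sketch below chains them together in code. The spectrum-analyzer readings `P_SA_signal`, `P_SA_pump` and the calibration `zeta` are not quoted in the text, so the values used here are hypothetical placeholders, chosen only so that the chain lands at the reported order of magnitude \(\eta_{0}\approx 3\times 10^{-9}\).

```python
# Efficiency chain of Eqs. (S5)-(S7) and (5). Powers in W, frequencies in Hz.
P_pump = 76e-6              # optical pump power
P0 = 3.2e-3                 # microwave source power
IL, dRL = -2.928, -23.0     # dB: cable insertion loss; extra cavity absorption
V_m, V_int = 9.19e3, 19.0   # mm^3: mode volume (9.19 cm^3) and interaction volume
f_I, f_S = 6.834682e9, 3.8423e14   # idler and signal frequencies

# Hypothetical SA readings (dBm) and calibration (dB) -- placeholders only.
P_SA_signal, P_SA_pump, zeta = -51.3, -20.0, 0.0

P_signal = P_pump * 10 ** ((P_SA_signal - P_SA_pump + zeta) / 10)  # Eq. (S5)
P_idler = 0.5 * P0 * (V_int / V_m) * 10 ** ((IL - dRL) / 10)       # Eq. (S7)
eta = (P_signal / f_S) / (P_idler / f_I)                           # Eq. (5)
print(f"eta ~ {eta:.1e}")
```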
The relevant terms are the central frequency at \(\omega_{s}=\omega_{I}+\omega_{P}\), the first positive sideband at \(\omega_{s}+\delta\) and the first negative sideband at \(\omega_{s}-\delta\): \[\Omega_{p0}\Omega_{I0}\cos(\omega_{s}t)\] (S12) \[\Omega_{p0}\Omega_{I0}A_{+}\sin\left([\omega_{s}+\delta]t+\theta _{+}\right)\] (S13) \[\Omega_{p0}\Omega_{I0}A_{-}\sin\left([\omega_{s}-\delta]t+\theta _{-}\right)\] (S14) The amplitudes of the first positive and negative sidebands (\(A_{\pm}\)) are: \[A_{+} =\sqrt{a_{m}^{2}+a_{p}^{2}+2a_{p}a_{m}\sin(\Delta\phi)}\] (S15) \[A_{-} =\sqrt{a_{m}^{2}+a_{p}^{2}-2a_{p}a_{m}\sin(\Delta\phi)}\] (S16) The phases of the first sideband oscillations (\(\theta_{\pm}\)) are: \[\tan(\theta_{+})=\frac{a_{m}\cos(\Delta\phi)}{a_{m}\sin(\Delta \phi)+a_{p}}\] (S17) \[\tan(\theta_{-})=\frac{a_{m}\sin(\Delta\phi)-a_{p}}{a_{m}\cos( \Delta\phi)}\] (S18)
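Equations (S15)-(S16) make the single-sideband behavior explicit: with equal modulation indices, the cross term can cancel one sideband entirely. A minimal sketch, using indices of 0.2 as in Fig. 2:

```python
import math

# Sideband amplitudes of Eqs. (S15)-(S16):
#   A_pm = sqrt(a_m^2 + a_p^2 +/- 2 a_p a_m sin(dphi))
def sideband_amplitudes(a_m, a_p, dphi):
    cross = 2.0 * a_p * a_m * math.sin(dphi)
    a_plus = math.sqrt(a_m**2 + a_p**2 + cross)
    a_minus = math.sqrt(a_m**2 + a_p**2 - cross)
    return a_plus, a_minus

# Equal indices and dphi = pi/2: one sideband is extinguished, the other doubled.
A_plus, A_minus = sideband_amplitudes(0.2, 0.2, math.pi / 2)
```

The ideal visibility in this limit is 100%; the measured 97% reflects residual experimental imperfections.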
2304.01800
Quantum Public-Key Encryption with Tamper-Resilient Public Keys from One-Way Functions
We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) are also proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Barooti-Grilo-Malavolta-Sattath-Vu-Walter, eprint:2023/877]. However, they have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, to establish secure communication over insecure channels, based only on one-way functions. Moreover, we show a generic compiler to upgrade security against chosen plaintext attacks (CPA security) into security against chosen ciphertext attacks (CCA security) only using one-way functions. As a result, we obtain CCA secure quantum public-key encryption based only on one-way functions.
Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Takashi Yamakawa
2023-04-04T13:57:17Z
http://arxiv.org/abs/2304.01800v4
# Quantum Public-Key Encryption with Tamper-Resilient Public Keys from One-Way Functions ###### Abstract We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) are also proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Grilo-Sattath-Vu, eprint:2023/345; Barooti-Malavolta-Walter, eprint:2023/306]. However, they have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction based only on one-way functions is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, to establish secure communication over insecure channels. Introduction ### Background Quantum physics provides several advantages in cryptography. For instance, statistically-secure key exchange, which is impossible in classical cryptography, becomes possible if quantum states are transmitted [1]. Additionally, oblivious transfers and multiparty computations are possible only from one-way functions (OWFs) in the quantum world [1, 1]. Those cryptographic primitives are believed to require stronger structured assumptions in classical cryptography. 
Furthermore, it has been shown that several cryptographic tasks, such as (non-interactive) commitments, digital signatures, secret-key encryption, quantum money, and multiparty computations, are possible based on new primitives such as pseudorandom states generators, pseudorandom function-like states generators, one-way states generators, and EFI, which seem to be weaker than OWFs [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25].

**Quantum public key encryption from OWFs.** Despite these developments, it is still an open problem whether public-key encryption (PKE) is possible with only OWFs (or the above weaker primitives) in the quantum world. PKE from OWFs is impossible (in a black-box way) in classical cryptography [13]. However, it could be possible if quantum states are transmitted or local operations are quantum. In fact, some recent works [13, 14, 15, 16] independently constructed quantum PKE (QPKE) with quantum public keys based on OWFs or pseudorandom function-like states generators. However, the constructions proposed in those works have a huge drawback as explained below, and thus we still do not have a satisfactory solution to the problem of "QPKE from OWFs".

**How to certify quantum public keys?** When we study public key cryptographic primitives, we have to care about how to certify the public keys, that is, how a sender can check if a given public key is a valid public key under which the secrecy of the encrypted messages is guaranteed. When the public keys are classical strings, we can easily certify them using digital signature schemes. However, in the case where the public keys are quantum states, we cannot use digital signature schemes to achieve this goal in general, and it is unclear how to certify them.

Footnote 1: There is a general impossibility result for signing quantum states [1].

As stated above, some recent works [13, 14, 15, 16] realized QPKE with quantum public keys from OWFs or even weaker assumptions.
However, those works did not seriously address this quantum public key certification problem. In fact, as far as we understand, to use the primitives proposed in those works meaningfully, we need to use secure quantum channels to transmit the quantum public keys so that a sender can use an intact quantum public key. This is a huge drawback since the goal of PKE is to transmit a message _without assuming secure channels_. If the sender can establish a secure channel to obtain the quantum public key, the sender could use it to transmit the message in the first place, and there is no advantage to using the PKE scheme.

**QPKE with tamper-resilient quantum public keys.** One of our goals in this work is to solve this issue and develop a more reasonable notion of QPKE with quantum public keys. In particular, we consider the setting with the following two natural conditions. First, we assume that every quantum state (that is, every quantum public key in this work) is sent via an unauthenticated channel, and thus it can be tampered with by an adversary. If we do not assume secure quantum channels, we have to take such a tampering attack into account, since authentication generally requires secrecy for quantum channels [1]. Second, we assume that every classical string is sent via an authenticated channel. This is the same assumption as in classical PKE and can be achieved using digital signatures. Note that the security of the constructions proposed in the above works [13, 14, 15, 16] is broken in this natural setting. In this work, we tackle the question of whether we can realize QPKE with quantum public keys that provides a security guarantee in this natural setting, especially from OWFs.

### Our Results

We affirmatively answer the above question. We realize the first QPKE scheme based only on OWFs that achieves the goal of classical PKE, which is to establish secure communication over insecure channels.
We define the notions of QPKE that can be used in the above setting, where unauthenticated quantum channels and classical authenticated channels are available. Then, we propose constructions satisfying the definitions from OWFs. Below, we state each result in detail.

**Definitional work.** We redefine the syntax of QPKE. The difference from the previous definitions is that the key generation algorithm outputs a classical verification key together with the secret key. Also, this verification key is given to the encryption algorithm with a quantum public key and a message so that the encryption algorithm can check the validity of the given quantum public key. We then define a security notion for QPKE that we call _indistinguishability against public key tampering attacks (IND-pkTA security)_. Roughly speaking, it guarantees that indistinguishability holds even if messages are encrypted by a public key tampered with by an adversary. More specifically, it guarantees that no efficient adversary can guess the challenge bit \(b\) with a probability significantly better than random guessing given \(\mathsf{Enc}(\mathsf{vk},\mathsf{pk}^{\prime},\mathsf{msg}_{b})\), where \(\mathsf{vk}\) is the correct verification key and \((\mathsf{pk}^{\prime},\mathsf{msg}_{0},\mathsf{msg}_{1})\) are generated by the adversary, who is given the verification key \(\mathsf{vk}\) and multiple copies of the correctly generated quantum public keys. IND-pkTA security captures the setting where the classical verification key is sent via a classical authenticated channel. Thus, everyone can obtain the correct verification key. However, a quantum public key is sent via an unauthenticated quantum channel and thus can be tampered with by an adversary. Moreover, we define a security notion related to the correctness notion. In our setting, a decryption error could frequently occur as a result of tampering attacks on the quantum public key.
To address this issue, we introduce a security notion that we call _decryption error detectability_. It roughly guarantees that a legitimate receiver of a ciphertext can notice if the decrypted message is different from the message intended by the sender.

**Constructions from OWFs.** We propose a QPKE scheme satisfying IND-pkTA security from a digital signature scheme, which can be constructed from OWFs. Our construction is inspired by the duality between distinguishing and swapping shown by Aaronson, Atia, and Susskind [1] and its cryptographic applications by Hhan, Morimae, and Yamakawa [17]. Our construction has quantum public keys and classical ciphertexts. We also propose a general transformation that adds decryption error detectability. The transformation uses only a digital signature scheme.

**Recyclable variant.** Our above definitions for QPKE assume each quantum public key is used to encrypt only a single message and might be consumed. We also introduce a notion of _recyclable QPKE_, where the encryption algorithm, given a quantum public key, outputs a ciphertext together with a classical state that can be used to encrypt a message many times. Then, we show that any standard QPKE scheme with classical ciphertexts can be transformed into a recyclable one. The transformation uses only a CPA-secure classical symmetric key encryption scheme, which is implied by OWFs. Thus, by combining the transformation with the above results, we obtain a recyclable QPKE scheme from OWFs.

### Discussion: Pure State Public Keys vs. Mixed State Public Keys

The quantum public keys of our QPKE schemes are mixed states. Some recent works [1, 13] that studied QPKE explicitly require that a quantum public key of QPKE be a pure quantum state. The reason is related to the quantum public key certification problem, which is this work's main focus.
Those works claimed that a sender can check the validity of given quantum public keys by using the SWAP test if they are pure states, but not if they are mixed states. However, as far as we understand, this claim implicitly requires that at least one intact quantum public key be transmitted via secure quantum channels where an adversary cannot touch it at all2, which is an unsatisfactory assumption that makes QPKE less valuable. It is unclear how a sender can check the validity of a given quantum public key in the constructions proposed in [1, 13] without assuming such secure transmission of intact quantum public keys. Footnote 2: More precisely, their model seems to require a physical setup assumption that enables a sender to obtain at least one intact quantum public key, such as secure quantum channels or tamper-proof quantum hardware. We believe that it is not important whether the quantum public keys are pure states or mixed states; what is really important is whether a sender can check the validity of given quantum public keys without assuming unsatisfactory setups such as secure quantum channels. Although our QPKE schemes have mixed state quantum public keys, they provide such a validity check of quantum public keys by the sender without assuming any unsatisfactory setups. In addition, we can easily extend our construction into one with pure state quantum public keys. We provide the variant in Appendix A.

### Technical Overview

In this subsection, we give an overview of our technical contributions. **Duality between distinguishing and swapping.** Our construction is inspired by the duality between distinguishing and swapping shown by Aaronson, Atia, and Susskind [1] and its cryptographic applications by Hhan et al. [12].3 We first review their idea. Let \(\ket{\psi}\) and \(\ket{\phi}\) be orthogonal states.
[1] showed that \(\ket{\psi}+\ket{\phi}\) and \(\ket{\psi}-\ket{\phi}\) are computationally indistinguishable4 if and only if one cannot efficiently "swap" \(\ket{\psi}\) and \(\ket{\phi}\) with a non-negligible advantage, i.e., for any efficiently computable unitary \(U\), \(|\bra{\phi}U\ket{\psi}+\bra{\psi}U\ket{\phi}|\) is negligible. Based on the above result, [12] suggested using \(\ket{\psi}+\left(-1\right)^{b}\ket{\phi}\) as an encryption of a plaintext \(b\in\{0,1\}\). By the result of [1], its security is reduced to the hardness of swapping \(\ket{\psi}\) and \(\ket{\phi}\). Footnote 3: In the main body, we do not explicitly use any result of [1, 12], though our analysis is similar to theirs. Footnote 4: We often omit normalization factors. **Basic scheme.** Based on the above idea, we first construct a QPKE scheme that is secure in a very restricted setting where an adversary is not given any quantum public key at all and is given only one ciphertext. In particular, the adversary cannot tamper with the quantum public key. Of course, constructing a secure QPKE scheme in such a setting is trivial, since one could simply send a secret key for the one-time pad as a "public key". But we construct it in an alternative way for convenience of the later extension. A quantum public key is \[\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}} \tag{1}\] for uniformly random bit strings \(x_{0},x_{1}\in\{0,1\}^{\lambda}\), a secret key is \((x_{0},x_{1})\), and an encryption of a plaintext \(b\in\{0,1\}\) is \[\left|0\right\rangle\left|x_{0}\right\rangle+\left(-1\right)^{b}\left|1\right\rangle \left|x_{1}\right\rangle. \tag{2}\] Security of the above scheme in the restricted setting is somewhat obvious because the adversary has no information about \(x_{0}\) or \(x_{1}\) besides the ciphertext, but let us analyze it using the idea of [1] to get more insights.
Suppose that the above scheme is insecure, i.e., \(\left|0\right\rangle\left|x_{0}\right\rangle+\left|1\right\rangle\left|x_{1}\right\rangle\) and \(\left|0\right\rangle\left|x_{0}\right\rangle-\left|1\right\rangle\left|x_{1}\right\rangle\) are computationally distinguishable with a non-negligible advantage. Then, by the result of [1], there is an efficient unitary \(U\) that swaps \(\left|0\right\rangle\left|x_{0}\right\rangle\) and \(\left|1\right\rangle\left|x_{1}\right\rangle\) with a non-negligible advantage. By using this unitary, let us consider the following procedure:

1. Given a state \(\left|0\right\rangle\left|x_{0}\right\rangle\pm\left|1\right\rangle\left|x_{1}\right\rangle\), measure it in the computational basis to get \(\left|\alpha\right\rangle\left|x_{\alpha}\right\rangle\) for random \(\alpha\in\{0,1\}\).
2. Apply the unitary \(U\) to \(\left|\alpha\right\rangle\left|x_{\alpha}\right\rangle\) and measure it in the computational basis.

Since \(U\) swaps \(\left|0\right\rangle\left|x_{0}\right\rangle\) and \(\left|1\right\rangle\left|x_{1}\right\rangle\) with a non-negligible advantage, the probability that the outcome of the second measurement is \(\left|\alpha\oplus 1\right\rangle\left|x_{\alpha\oplus 1}\right\rangle\) is non-negligible. This yields the following observation: if one can efficiently distinguish \(\left|0\right\rangle\left|x_{0}\right\rangle+\left|1\right\rangle\left|x_{1}\right\rangle\) and \(\left|0\right\rangle\left|x_{0}\right\rangle-\left|1\right\rangle\left|x_{1}\right\rangle\), then one can efficiently compute both \(x_{0}\) and \(x_{1}\) from \(\left|0\right\rangle\left|x_{0}\right\rangle\pm\left|1\right\rangle\left|x_{1}\right\rangle\). On the other hand, it is easy to show by a simple information-theoretic argument that one cannot compute both \(x_{0}\) and \(x_{1}\) from \(\left|0\right\rangle\left|x_{0}\right\rangle\pm\left|1\right\rangle\left|x_{1}\right\rangle\) with a non-negligible probability.
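To make the two-step procedure concrete, here is a toy pure-Python simulation (our own illustrative sketch, not part of the scheme): because the simulator knows \(x_{0}\) and \(x_{1}\), it can hand the procedure a perfect swap unitary, whereas the argument only needs a swapper that succeeds with non-negligible advantage.

```python
import random

LAM = 8  # toy string length; the scheme uses lambda-bit strings

def measure(state):
    # Computational-basis measurement of a state given as {basis_label: amplitude}.
    labels = list(state)
    return random.choices(labels, weights=[abs(a) ** 2 for a in state.values()])[0]

def perfect_swapper(y0, y1, label):
    # Action on basis states of a unitary U with U|y0> = |y1>, U|y1> = |y0>,
    # and identity elsewhere. A perfect swapper keeps the demo simple.
    return y1 if label == y0 else y0 if label == y1 else label

x0, x1 = random.getrandbits(LAM), random.getrandbits(LAM)
y0 = (0 << LAM) | x0  # basis label of |0>|x0>
y1 = (1 << LAM) | x1  # basis label of |1>|x1>
ciphertext = {y0: 2 ** -0.5, y1: 2 ** -0.5}  # |0>|x0> + |1>|x1> (the b = 0 case)

first = measure(ciphertext)              # step 1: collapse to |alpha>|x_alpha>
second = perfect_swapper(y0, y1, first)  # step 2: apply U and measure again
assert {first, second} == {y0, y1}       # both x0 and x1 recovered
```

With a perfect swapper the second measurement recovers the other branch every time; with a swapper of merely non-negligible advantage, it would do so only with non-negligible probability, which is all the argument above requires.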
Thus, the above argument implies security of the above scheme in the restricted setting where no public key and only one ciphertext are given to the adversary. **Dealing with public key tampering.** We extend the above basic scheme to resist public key tampering attacks. Specifically, we consider the setting where the adversary receives a single copy of the quantum public key from the receiver, sends a tampered quantum public key to the sender, and receives a ciphertext generated under the tampered quantum public key. First, we remark that the basic scheme is insecure if the public key is tampered with, since the adversary can replace the public key with \(\left|0\right\rangle\left|x_{0}^{\prime}\right\rangle+\left|1\right\rangle \left|x_{1}^{\prime}\right\rangle\) for \(x_{0}^{\prime},x_{1}^{\prime}\) of its choice. Indeed, it is impossible to prevent such a man-in-the-middle attack if only untrusted quantum channels are available. Thus, we assume that authenticated classical channels are available, similarly to classical PKE. Our idea is to partially authenticate a quantum public key by using classical digital signatures, where the verification key is sent through the authenticated classical channel. Namely, we set \(x_{\alpha}:=\sigma(\alpha)\) for \(\alpha\in\{0,1\}\) in the basic construction, where \(\sigma(\alpha)\) is a signature for \(\alpha\in\{0,1\}\), i.e., a quantum public key is \[\left|0\right\rangle\left|\sigma(0)\right\rangle+\left|1\right\rangle\left| \sigma(1)\right\rangle. \tag{3}\] A secret key is the signing key of the digital signature scheme.
Here, we assume that the signature scheme has a deterministic signing algorithm and satisfies strong unforgeability, i.e., given message-signature pairs \((\mathsf{msg}_{1},\sigma_{1}),...,(\mathsf{msg}_{n},\sigma_{n})\), no efficient adversary can output \((\mathsf{msg}^{*},\sigma^{*})\) such that \((\mathsf{msg}^{*},\sigma^{*})\neq(\mathsf{msg}_{i},\sigma_{i})\) for all \(i\in[n]\).5 Before encryption, the sender coherently verifies the validity of the signature in the second register of the public key and aborts if the verification rejects. By strong unforgeability of the signature scheme, no matter how the adversary tampers with the public key, the state after passing the verification is negligibly close to a state of the form \[c_{0}\left|0\right\rangle\left|\sigma(0)\right\rangle\left|\Psi_{0}\right\rangle+ c_{1}\left|1\right\rangle\left|\sigma(1)\right\rangle\left|\Psi_{1}\right\rangle \tag{4}\] with some complex coefficients \(c_{0}\) and \(c_{1}\), and some states \(\left|\Psi_{0}\right\rangle\) and \(\left|\Psi_{1}\right\rangle\) over the adversary's register (except for a negligible probability). Footnote 5: At this point, two-time security (where \(n=2\)) suffices, but we finally need to allow \(n\) to be an arbitrary polynomial. The encryption of a plaintext \(b\in\{0,1\}\) is to apply \(Z^{b}\) on the first qubit of Equation (4). The ciphertext generated under the tampered public key is therefore \[c_{0}\left|0\right\rangle\left|\sigma(0)\right\rangle\left|\Psi_{0}\right\rangle +(-1)^{b}c_{1}\left|1\right\rangle\left|\sigma(1)\right\rangle\left|\Psi_{1} \right\rangle.
\tag{5}\] By a slight extension of the analysis of the basic scheme, we show that if one can efficiently distinguish \(c_{0}\left|0\right\rangle\left|\sigma(0)\right\rangle\left|\Psi_{0}\right\rangle +c_{1}\left|1\right\rangle\left|\sigma(1)\right\rangle\left|\Psi_{1}\right\rangle\) and \(c_{0}\left|0\right\rangle\left|\sigma(0)\right\rangle\left|\Psi_{0}\right\rangle -c_{1}\left|1\right\rangle\left|\sigma(1)\right\rangle\left|\Psi_{1}\right\rangle\), then one can efficiently compute both \(\sigma(0)\) and \(\sigma(1)\). On the other hand, recall that the adversary is given only one copy of the public key \(\left|0\right\rangle\left|\sigma(0)\right\rangle+\left|1\right\rangle\left| \sigma(1)\right\rangle\). We can show that it is impossible to compute both \(\sigma(0)\) and \(\sigma(1)\) from this state by strong unforgeability, as follows. By [1, Lemma 2.1], the probability of outputting both \(\sigma(0)\) and \(\sigma(1)\) is only halved even if \(\left|0\right\rangle\left|\sigma(0)\right\rangle+\left|1\right\rangle\left| \sigma(1)\right\rangle\) is measured in the computational basis before being given to the adversary. After the measurement, the adversary's input collapses to a classical state \(\left|\alpha\right\rangle\left|\sigma(\alpha)\right\rangle\) for random \(\alpha\in\{0,1\}\), in which case the adversary can output \(\sigma(\alpha\oplus 1)\) only with a negligible probability by strong unforgeability. Combining the above, security of the above scheme under tampered public keys is proven. **Multiple public keys.** In the previous paragraph, we gave only one copy of the quantum public key to the adversary. It is clear that security is broken if two copies of the public key are available to the adversary: by measuring each public key, the adversary can learn both \(\sigma(0)\) and \(\sigma(1)\) with probability 1/2.
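The two-copy attack is easy to simulate classically, since each computational-basis measurement of \(\left|0\right\rangle\left|\sigma(0)\right\rangle+\left|1\right\rangle\left|\sigma(1)\right\rangle\) just returns a uniformly random branch (a toy sketch; the strings below are placeholders for the actual signatures):

```python
import random

def measure_copy(sigma0, sigma1):
    # Computational-basis measurement of |0>|sigma(0)> + |1>|sigma(1)>:
    # yields (alpha, sigma(alpha)) for a uniformly random alpha.
    alpha = random.getrandbits(1)
    return alpha, (sigma0, sigma1)[alpha]

sigma = ("sigma(0)", "sigma(1)")  # placeholders for the two signatures
trials = 100_000
wins = 0
for _ in range(trials):
    a1, _s1 = measure_copy(*sigma)
    a2, _s2 = measure_copy(*sigma)
    wins += a1 != a2  # distinct outcomes reveal both signatures
print(wins / trials)  # close to 1/2
```

The two measured bits differ, and hence both signatures leak, in about half the trials, matching the probability 1/2 stated above.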
The construction in the previous paragraph therefore cannot achieve an important advantage of PKE, namely, that many public keys corresponding to a single secret key can be distributed. In order to extend the above scheme to ensure security even if the adversary can obtain multiple public keys, we introduce classical randomness into each public key generation. Specifically, a public key is \[(r,\left|0\right\rangle\left|\sigma(0,r)\right\rangle+\left|1\right\rangle \left|\sigma(1,r)\right\rangle) \tag{6}\] where \(r\in\{0,1\}^{\lambda}\) is chosen uniformly at random for every execution of the public key generation algorithm, and \(\sigma(\alpha,r)\) is a signature for \(\alpha\|r\).6 A secret key is the signing key of the digital signature scheme, and an encryption of a plaintext \(b\in\{0,1\}\) is Footnote 6: \(\alpha\|r\) is the concatenation of two bit strings \(\alpha\) and \(r\). \[(r,\left|0\right\rangle\left|\sigma(0,r)\right\rangle+(-1)^{b}\left|1\right\rangle \left|\sigma(1,r)\right\rangle). \tag{7}\] Since each quantum public key uses a different \(r\), security of this scheme holds even if the adversary obtains arbitrarily many public keys. Note that the receiver does not need to remember each \(r\). What the receiver has to record is only a single secret key that is the same for all public keys. **Making ciphertext classical and recycling public key.** Finally, we extend the scheme so that one quantum public key can be used to encrypt many plaintexts. Toward this goal, we first observe that the ciphertext of the above scheme can be made classical. In the above construction, the ciphertext contains a quantum state \(\left|0\right\rangle\left|\sigma(0,r)\right\rangle+(-1)^{b}\left|1\right\rangle \left|\sigma(1,r)\right\rangle\). Suppose that we measure this state in the Hadamard basis, and let \(d\) be the measurement outcome. Then an easy calculation shows that we have \[b=d\cdot(0\|\sigma(0,r)\oplus 1\|\sigma(1,r)).
\tag{8}\] Thus, sending \((r,d)\) as a ciphertext is sufficient for the receiver to recover the plaintext \(b\). Moreover, this variant is at least as secure as the original one with quantum ciphertexts, since the Hadamard-basis measurement only loses information about the ciphertext. Given the above observation, it is straightforward to use one quantum public key to encrypt many plaintexts by standard hybrid encryption. When a sender encrypts a message msg for the first time, it generates a key \(K\) of a symmetric key encryption scheme, encrypts each bit of \(K\) by the above scheme in a bit-by-bit manner, and encrypts msg by the symmetric key encryption scheme under the key \(K\). Since the encryption of \(K\) is classical, the sender can reuse it when it encrypts another message later. We formalize this construction as a _recyclable_ QPKE.7 Footnote 7: Such a recyclable property is a default requirement in [11], and our definition is similar to theirs with the important difference that we consider public key tampering. The idea of achieving recyclability by the hybrid encryption technique was also used in one of their constructions. **Adding decryption error detectability.** So far, we have only been concerned with security under tampered public keys (IND-pkTA security). On the other hand, the schemes presented in the previous paragraphs do not satisfy decryption error detectability. (See Section 1.2 for its informal definition and Definition 3.3 for the formal definition.) Fortunately, there is a simple generic conversion that adds decryption error detectability while preserving IND-pkTA security by using digital signatures. The idea is that the encryption algorithm first generates a signature for the message under a signing key generated by itself, encrypts both the original message and the signature under the building-block scheme, and outputs the ciphertexts along with the verification key for the signature scheme in the clear.
Then, the decryption algorithm can verify that the decryption result is correct as long as it is a valid message-signature pair (except for a negligible probability).

### Related Works

The possibility that QPKE can be achieved from weaker assumptions was first pointed out by Gottesman [12], though he did not give any concrete construction. The first concrete construction of QPKE was proposed by Kawachi, Koshiba, Nishimura, and Yamakami [13]. They formally defined the notion of QPKE with quantum public keys, and provided a construction satisfying it from a distinguishing problem of two quantum states. Recently, Morimae and Yamakawa [14] pointed out that QPKE as defined by [13] can be achieved almost trivially from any classical or quantum symmetric key encryption. The constructions proposed in these two works have mixed state quantum public keys. Then, subsequent works [15, 16] independently studied the question of whether QPKE with pure state quantum public keys can be constructed from OWFs or even weaker assumptions. The definition of QPKE studied in the above works essentially assumes that a sender can obtain intact quantum public keys. As far as we understand, this requires unsatisfactory physical setup assumptions such as secure quantum channels or tamper-proof quantum hardware, regardless of whether the quantum public keys are pure states or mixed states. In our natural setting where an adversary can touch the quantum channel over which quantum public keys are sent, the adversary can easily attack the previous constructions by simply replacing the quantum public key on the channel with one generated by itself, for which it knows the corresponding secret key. We need to take such adversarial behavior into consideration, unless we assume physical setup assumptions that deliver intact quantum public keys to the sender.
Our work is the first to propose a QPKE scheme secure in this natural setting, assuming only classical authenticated channels; this is the same assumption as for classical PKE and can be implemented by digital signature schemes. It is unclear whether the problem in the previous constructions could be solved by using classical authenticated channels as in our work. Below, we review the constructions of QPKE from OWFs proposed in the recent works. The construction by Morimae and Yamakawa [14] is highly simple. A (mixed state) public key of their construction is of the form \((\mathsf{ct}_{0},\mathsf{ct}_{1})\), where \(\mathsf{ct}_{b}\) is an encryption of \(b\) by a symmetric key encryption scheme. The encryption algorithm with input message \(b\) simply outputs \(\mathsf{ct}_{b}\). Coladangelo [13] constructed a QPKE scheme with quantum public keys and quantum ciphertexts from pseudorandom functions (PRFs), which are constructed from OWFs. The public key is \[\ket{\mathsf{pk}}\coloneqq\sum_{y}(-1)^{\mathsf{PRF}_{k}(y)}\ket{y}, \tag{9}\] and the secret key is \(k\). The ciphertext for the plaintext \(m\) is \[(Z^{x}\ket{\mathsf{pk}}=\sum_{y}(-1)^{x\cdot y+\mathsf{PRF}_{k}(y)}\ket{y},r,r \cdot x\oplus m), \tag{10}\] where \(r\) is chosen uniformly at random. Barooti, Malavolta, and Walter [1] constructed a QPKE scheme with quantum public keys and classical ciphertexts from PRFs. The public key is \(\ket{\mathsf{pk}}\coloneqq(\ket{\mathsf{pk}_{0}},\ket{\mathsf{pk}_{1}})\), where \[\ket{\mathsf{pk}_{b}}\coloneqq\sum_{x}\ket{x}\ket{\mathsf{PRF}_{k_{b}}(x)}, \tag{11}\] and the secret key is \((k_{0},k_{1})\). The encryption algorithm measures \(\ket{\mathsf{pk}_{b}}\) in the computational basis and sends the measurement result as the ciphertext when the plaintext is \(b\in\{0,1\}\). Barooti et al.
showed that this construction is a QPKE scheme for the plaintext space \(\{0,1\}\) that satisfies chosen ciphertext security (as long as intact quantum public keys are securely transmitted to the sender). Grilo, Sattath, and Vu [15] constructed three QPKE schemes with quantum public keys from OWFs or pseudorandom function-like states generators. We review their construction based on OWFs. The public key is \[\ket{\mathsf{pk}}\coloneqq\sum_{x}\ket{x}\ket{\mathsf{PRF}_{k}(x)}, \tag{12}\] and the secret key is \(k\). The encryption algorithm first measures \(\ket{\mathsf{pk}}\) in the computational basis to get \((x,\mathsf{PRF}_{k}(x))\) and outputs \((x,\mathsf{SKE}.\mathsf{Enc}(\mathsf{PRF}_{k}(x),m))\) as the ciphertext for the plaintext \(m\), where \(\mathsf{SKE}.\mathsf{Enc}\) is the encryption algorithm of a symmetric key encryption scheme. We finally compare Quantum Key Distribution (QKD) [1] with our notion of QPKE. QKD also enables us to establish secure communication over an untrusted quantum channel assuming that an authenticated classical channel is available, similarly to our QPKE. An advantage of QKD is that it is information-theoretically secure and does not need any computational assumption. On the other hand, it has the disadvantages that it must be interactive and that the parties must record secret information for each session. Thus, it is incomparable to the notion of QPKE.

### Open Problems

In our construction, public keys are quantum states. It is an open problem whether QPKE with classical public keys is possible from OWFs. Another interesting open problem is whether we can construct QPKE as defined in this work from an even weaker assumption than OWFs, such as pseudorandom states generators. In our model of QPKE, a decryption error could occur frequently as a result of tampering attacks on the quantum public key.
To address this issue, we introduce the security notion we call decryption error detectability, which guarantees that a legitimate receiver of a ciphertext can notice if the decrypted message is different from the message intended by the sender. We could consider an even stronger variant of decryption error detectability that requires that a sender can notice if a given quantum public key does not provide decryption correctness. It is an open problem to construct a QPKE scheme satisfying such a stronger decryption error detectability.

## 2 Preliminaries

### Basic Notations

We use the standard notations of quantum computing and cryptography. We use \(\lambda\) as the security parameter. For any set \(S\), \(x\gets S\) means that an element \(x\) is sampled uniformly at random from the set \(S\). We write \(\mathsf{negl}\) to mean a negligible function. PPT stands for (classical) probabilistic polynomial-time and QPT stands for quantum polynomial-time. For an algorithm \(A\), \(y\gets A(x)\) means that the algorithm \(A\) outputs \(y\) on input \(x\). For two bit strings \(x\) and \(y\), \(x\|y\) means the concatenation of them. For simplicity, we sometimes omit the normalization factor of a quantum state. (For example, we write \(\frac{1}{\sqrt{2}}(|x_{0}\rangle+|x_{1}\rangle)\) just as \(|x_{0}\rangle+|x_{1}\rangle\).) \(I\coloneqq|0\rangle\langle 0|+|1\rangle\langle 1|\) is the two-dimensional identity operator. For notational simplicity, we sometimes write \(I^{\otimes n}\) just as \(I\) when the dimension is clear from the context.
### Digital Signatures

**Definition 2.1** (Digital signatures).: _A digital signature scheme is a set of algorithms \((\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) such that_

* \(\mathsf{Gen}(1^{\lambda})\rightarrow(k,\mathsf{vk}):\) _It is a PPT algorithm that, on input the security parameter_ \(\lambda\)_, outputs a signing key_ \(k\) _and a verification key_ \(\mathsf{vk}\)_._
* \(\mathsf{Sign}(k,\mathsf{msg})\rightarrow\sigma:\) _It is a PPT algorithm that, on input the message_ \(\mathsf{msg}\) _and_ \(k\)_, outputs a signature_ \(\sigma\)_._
* \(\mathsf{Ver}(\mathsf{vk},\mathsf{msg},\sigma)\rightarrow\top/\bot:\) _It is a deterministic classical polynomial-time algorithm that, on input_ \(\mathsf{vk}\)_,_ \(\mathsf{msg}\)_, and_ \(\sigma\)_, outputs_ \(\top/\bot\)_._

_We require the following correctness and strong EUF-CMA security._ Correctness:_For any \(\mathsf{msg}\),_ \[\Pr[\top\leftarrow\mathsf{Ver}(\mathsf{vk},\mathsf{msg},\sigma):(k,\mathsf{ vk})\leftarrow\mathsf{Gen}(1^{\lambda}),\sigma\leftarrow\mathsf{Sign}(k, \mathsf{msg})]\geq 1-\mathsf{negl}(\lambda). \tag{13}\] Strong EUF-CMA security:_For any QPT adversary \(\mathcal{A}\) with classical oracle access to the signing oracle \(\mathsf{Sign}(k,\cdot)\),_ \[\Pr[(\mathsf{msg}^{*},\sigma^{*})\notin\mathcal{Q}\wedge\top\leftarrow \mathsf{Ver}(\mathsf{vk},\mathsf{msg}^{*},\sigma^{*}):(k,\mathsf{vk}) \leftarrow\mathsf{Gen}(1^{\lambda}),(\mathsf{msg}^{*},\sigma^{*})\leftarrow \mathcal{A}^{\mathsf{Sign}(k,\cdot)}(\mathsf{vk})]\leq\mathsf{negl}(\lambda), \tag{14}\] _where \(\mathcal{Q}\) is the set of message-signature pairs returned by the signing oracle._ _Remark 2.2_.: Without loss of generality, we can assume that \(\mathsf{Sign}\) is deterministic. (The random seed used for \(\mathsf{Sign}\) can be generated by applying a PRF on the message signed, and the key of PRF is appended to the signing key.) **Theorem 2.3** ([1, Sec.
6.5.2]).: _Strong EUF-CMA secure digital signatures exist if OWFs exist._

### Symmetric Key Encryption

**Definition 2.4** (Symmetric Key Encryption (SKE)).: _A (classical) symmetric key encryption (SKE) scheme with message space \(\{0,1\}^{\ell}\) is a set of algorithms \((\mathsf{Enc},\mathsf{Dec})\) such that_

* \(\mathsf{Enc}(K,\mathsf{msg})\to\mathsf{ct}:\) _It is a PPT algorithm that, on input_ \(K\in\{0,1\}^{\lambda}\) _and the message_ \(\mathsf{msg}\in\{0,1\}^{\ell}\)_, outputs a ciphertext_ \(\mathsf{ct}\)_._
* \(\mathsf{Dec}(K,\mathsf{ct})\to\mathsf{msg}^{\prime}:\) _It is a deterministic classical polynomial-time algorithm that, on input_ \(K\) _and_ \(\mathsf{ct}\)_, outputs_ \(\mathsf{msg}^{\prime}\)_._

_We require the following correctness and IND-CPA security._ Correctness:_For any \(\mathsf{msg}\in\{0,1\}^{\ell}\),_ \[\Pr[\mathsf{msg}\leftarrow\mathsf{Dec}(K,\mathsf{ct}):K\leftarrow\{0,1\}^{ \lambda},\mathsf{ct}\leftarrow\mathsf{Enc}(K,\mathsf{msg})]=1. \tag{15}\] IND-CPA Security:_For any QPT adversary \(\mathcal{A}\) with classical oracle access to the encryption oracle \(\mathsf{Enc}(K,\cdot)\),_ \[\Pr\left[b\leftarrow\mathcal{A}(\mathsf{ct},\mathsf{st}):\begin{array}{c}K \leftarrow\{0,1\}^{\lambda}\\ (\mathsf{msg}_{0},\mathsf{msg}_{1},\mathsf{st})\leftarrow\mathcal{A}^{ \mathsf{Enc}(K,\cdot)}(1^{\lambda})\\ b\leftarrow\{0,1\}\\ \mathsf{ct}\leftarrow\mathsf{Enc}(K,\mathsf{msg}_{b})\end{array}\right]\leq \frac{1}{2}+\mathsf{negl}(\lambda). \tag{16}\] _Remark 2.5_.: IND-CPA secure SKE exists if OWFs exist [1, 1].

### Lemma by Boneh and Zhandry

In this paper, we use the following lemma by Boneh and Zhandry [1]. **Lemma 2.6** ([1, Lemma 2.1]).: _Let \(A\) be a quantum algorithm, and let \(\Pr[x]\) be the probability that \(A\) outputs \(x\).
Let \(A^{\prime}\) be another quantum algorithm obtained from \(A\) by pausing \(A\) at an arbitrary stage of execution, performing a partial measurement that obtains one of \(k\) outcomes, and then resuming \(A\). Let \(\Pr^{\prime}[x]\) be the probability that \(A^{\prime}\) outputs \(x\). Then \(\Pr^{\prime}[x]\geq\Pr[x]/k\)._

## 3 Definition of QPKE

In this section, we define QPKE that can be used in the setting where unauthenticated quantum channels and classical authenticated channels are available. The difference in syntax from the previous definitions is as follows.

* The secret key generation algorithm outputs a classical verification key together with the secret key.
* The verification key is given to the encryption algorithm together with a quantum public key and a message so that the encryption algorithm can check the validity of the given quantum public key.

We require a QPKE scheme to satisfy the security notion that we call IND-pkTA security. It roughly guarantees that indistinguishability holds even if messages are encrypted by a public key tampered with by an adversary. The formal definition is as follows.
**Definition 3.1** (Quantum Public-Key Encryption (QPKE)).: _A quantum public-key encryption scheme with message space \(\{0,1\}^{\ell}\) is a set of algorithms \((\mathsf{SKGen},\mathsf{PKGen},\mathsf{Enc},\mathsf{Dec})\) such that_ * \(\mathsf{SKGen}(1^{\lambda})\rightarrow(\mathsf{sk},\mathsf{vk}):\) _It is a PPT algorithm that, on input the security parameter \(\lambda\), outputs a classical secret key_ \(\mathsf{sk}\) _and a classical verification key_ \(\mathsf{vk}\)_._ * \(\mathsf{PKGen}(\mathsf{sk})\rightarrow\mathsf{pk}:\) _It is a QPT algorithm that, on input_ \(\mathsf{sk}\)_, outputs a quantum public key_ \(\mathsf{pk}\)_._ * \(\mathsf{Enc}(\mathsf{vk},\mathsf{pk},\mathsf{msg})\rightarrow\mathsf{ct}:\) _It is a QPT algorithm that, on input_ \(\mathsf{vk}\)_,_ \(\mathsf{pk}\)_, and a plaintext_ \(\mathsf{msg}\in\{0,1\}^{\ell}\)_, outputs a classical ciphertext_ \(\mathsf{ct}\)_._ * \(\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\rightarrow\mathsf{msg}^{\prime}:\) _It is a classical deterministic polynomial-time algorithm that, on input_ \(\mathsf{sk}\) _and_ \(\mathsf{ct}\)_, outputs_ \(\mathsf{msg}^{\prime}\in\{0,1\}^{\ell}\cup\{\bot\}\)_._ _We require the following correctness and IND-pkTA security._ Correctness:_For any \(\mathsf{msg}\in\{0,1\}^{\ell}\),_ \[\Pr[\mathsf{msg}\leftarrow\mathsf{Dec}(\mathsf{sk},\mathsf{ct}):(\mathsf{sk}, \mathsf{vk})\leftarrow\mathsf{SKGen}(1^{\lambda}),\mathsf{pk}\leftarrow\mathsf{ PKGen}(\mathsf{sk}),\mathsf{ct}\leftarrow\mathsf{Enc}(\mathsf{vk}, \mathsf{pk},\mathsf{msg})]\geq 1-\mathsf{negl}(\lambda). 
\tag{17}\] IND-pkTA Security:_For any polynomial \(m\), and any QPT adversary \(\mathcal{A}\),_ \[\Pr\left[b\leftarrow\mathcal{A}(\mathsf{ct}^{*},\mathsf{st}):\begin{array}{c} (\mathsf{sk},\mathsf{vk})\leftarrow\mathsf{SKGen}(1^{\lambda})\\ \mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{ \otimes m}\\ (\mathsf{pk}^{\prime},\mathsf{msg}_{0},\mathsf{msg}_{1},\mathsf{st})\leftarrow \mathcal{A}(\mathsf{vk},\mathsf{pk}_{1},...,\mathsf{pk}_{m})\\ b\leftarrow\{0,1\}\\ \mathsf{ct}^{*}\leftarrow\mathsf{Enc}(\mathsf{vk},\mathsf{pk}^{\prime}, \mathsf{msg}_{b})\end{array}\right]\leq\frac{1}{2}+\mathsf{negl}(\lambda). \tag{18}\] _Here, \(\mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{ \otimes m}\) means that \(\mathsf{PKGen}\) is executed \(m\) times and \(\mathsf{pk}_{i}\) is the output of the \(i\)th execution of \(\mathsf{PKGen}\). \(\mathsf{st}\) is a quantum internal state of \(\mathcal{A}\), which can be entangled with \(\mathsf{pk}^{\prime}\)._ As we discussed in Section 1.3, the above definition does not require the quantum public key \(\mathsf{pk}\) to be a pure state. _Remark 3.2_.: IND-pkTA security captures the setting where the classical verification key is sent via a classical authenticated channel, so everyone can obtain the correct verification key, whereas a quantum public key is sent via an unauthenticated quantum channel and thus can be tampered with by an adversary. In particular, it captures an adversary \(\mathcal{A}\) who steals a quantum public key \(\mathsf{pk}\) sent to a user, replaces it with a tampered one \(\mathsf{pk}^{\prime}\), and tries to break the secrecy of a message encrypted by \(\mathsf{pk}^{\prime}\). To capture a wide range of usage scenarios, we give multiple copies of the quantum public keys \(\mathsf{pk}_{1},...,\mathsf{pk}_{m}\) to \(\mathcal{A}\). We also define a security notion related to the correctness notion that we call decryption error detectability. It roughly guarantees that a legitimate receiver of a ciphertext can notice if the decrypted message is different from the message intended by the sender.
Such a decryption error could occur frequently in our setting as a result of tampering attacks on the quantum public key sent via an unauthenticated quantum channel. Note that our definition of QPKE requires that a ciphertext of QPKE be a classical string, and we assume all classical information is sent through a classical authenticated channel. Thus, similarly to the verification key, we can assume that ciphertexts can be sent without being tampered with. The formal definition is as follows. **Definition 3.3** (Decryption error detectability).: _We say that a QPKE scheme has decryption error detectability if for any polynomial \(m\), and any QPT adversary \(\mathcal{A}\),_ \[\Pr\left[\mathsf{msg}^{\prime}\neq\bot\ \wedge\ \mathsf{msg}^{\prime}\neq\mathsf{msg}:\begin{array}{c}(\mathsf{sk},\mathsf{vk})\leftarrow\mathsf{SKGen}(1^{\lambda})\\ \mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{ \otimes m}\\ (\mathsf{pk}^{\prime},\mathsf{msg})\leftarrow\mathcal{A}(\mathsf{vk},\mathsf{ pk}_{1},...,\mathsf{pk}_{m})\\ \mathsf{ct}\leftarrow\mathsf{Enc}(\mathsf{vk},\mathsf{pk}^{\prime},\mathsf{msg })\\ \mathsf{msg}^{\prime}\leftarrow\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\end{array} \right]\leq\mathsf{negl}(\lambda). \tag{19}\] It is easy to see that we can generically add decryption error detectability by letting the sender generate a signature for the message under a signing key generated by itself, encrypt the concatenation of the message and signature, and send the ciphertext along with the verification key of the signature to the receiver. The receiver can check that there is no decryption error (except for a negligible probability) if the decryption result is a valid message-signature pair. That is, we have the following theorem.
**Theorem 3.4**.: _If there exist OWFs and a QPKE scheme that satisfies correctness and IND-pkTA security, there exists a QPKE scheme that satisfies correctness, IND-pkTA security, and decryption error detectability._ We omit the proof since it is straightforward from the construction explained above. Since we have this theorem, we focus on constructing QPKE that satisfies correctness and IND-pkTA security in the rest of this paper.

## 4 Construction of QPKE

In this section, we construct a QPKE scheme that satisfies correctness and IND-pkTA security (but not decryption error detectability) from strong EUF-CMA secure digital signatures. The message space of our construction is \(\{0,1\}\), but it can be extended to arbitrarily many bits by parallel repetition. Let \((\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) be a strong EUF-CMA secure digital signature scheme with a deterministic \(\mathsf{Sign}\) algorithm and message space \(\{0,1\}^{u}\) for \(u=\omega(\log\lambda)\). Our construction of QPKE is as follows.

* \(\mathsf{SKGen}(1^{\lambda})\rightarrow(\mathsf{sk},\mathsf{vk}):\) Run \((k,\mathsf{vk})\leftarrow\mathsf{Gen}(1^{\lambda})\). Output \(\mathsf{sk}\coloneqq k\) and \(\mathsf{vk}\).
* \(\mathsf{PKGen}(\mathsf{sk})\rightarrow\mathsf{pk}:\) Parse \(\mathsf{sk}=k\). Choose \(r\leftarrow\{0,1\}^{u}\). By running \(\mathsf{Sign}\) coherently, generate the state \[\left|\psi_{r}\right\rangle\coloneqq\left|0\right\rangle_{\mathbf{A}}\otimes \left|\mathsf{Sign}(k,0\|r)\right\rangle_{\mathbf{B}}+\left|1\right\rangle_{ \mathbf{A}}\otimes\left|\mathsf{Sign}(k,1\|r)\right\rangle_{\mathbf{B}}\] (20) over registers \((\mathbf{A},\mathbf{B})\). Output \[\mathsf{pk}\coloneqq(r,\left|\psi_{r}\right\rangle).\] (21)
* \(\mathsf{Enc}(\mathsf{vk},\mathsf{pk},b)\rightarrow\mathsf{ct}:\) Parse \(\mathsf{pk}=(r,\rho)\), where \(\rho\) is a quantum state over registers \((\mathbf{A},\mathbf{B})\). The \(\mathsf{Enc}\) algorithm consists of the following three steps. 1.
It coherently checks the signature in \(\rho\). In other words, it applies the unitary \[U_{r,\mathsf{vk}}\left|\alpha\right\rangle_{\mathbf{A}}\left|\beta\right\rangle_{\mathbf{B}}\left|0...0\right\rangle_{\mathbf{D}}=\left|\alpha\right\rangle_{\mathbf{A}}\left|\beta\right\rangle_{\mathbf{B}}\left|\mathsf{Ver}(\mathsf{vk},\alpha\|r,\beta)\right\rangle_{\mathbf{D}}\] (22) on \(\rho_{\mathbf{A},\mathbf{B}}\otimes\left|0...0\right\rangle\left\langle 0...0\right|_{\mathbf{D}}\), and measures the register \(\mathbf{D}\) in the computational basis. If the result is \(\bot\), it outputs \(\mathsf{ct}\coloneqq\bot\) and halts. If the result is \(\top\), it goes to the next step. 2. It applies \(Z^{b}\) on the register \(\mathbf{A}\). 3. It measures all qubits in the registers \((\mathbf{A},\mathbf{B})\) in the Hadamard basis to get the result \(d\). It outputs \[\mathsf{ct}\coloneqq(r,d).\] (23) * \(\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\to b^{\prime}:\) Parse \(\mathsf{sk}=k\) and \(\mathsf{ct}=(r,d)\). Output \[b^{\prime}\coloneqq d\cdot(0\|\mathsf{Sign}(k,0\|r)\oplus 1\|\mathsf{Sign}(k,1\|r)).\] (24) **Theorem 4.1**.: _If \((\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) is a strong EUF-CMA secure digital signature scheme, then the QPKE scheme \((\mathsf{SKGen},\mathsf{PKGen},\mathsf{Enc},\mathsf{Dec})\) above is correct and IND-pkTA secure._ The correctness is straightforward. First, the state over the registers \((\mathbf{A},\mathbf{B})\) is \(|\psi_{r}\rangle\) if \(\mathsf{pk}\) was not tampered with and the first step of the \(\mathsf{Enc}\) algorithm returned \(\top\). Second, in that case, the state becomes \[|0\rangle\left|\mathsf{Sign}(k,0\|r)\right\rangle+(-1)^{b}\left|1\right\rangle\left|\mathsf{Sign}(k,1\|r)\right\rangle \tag{25}\] after the second step of the \(\mathsf{Enc}\) algorithm.
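The fact that every possible Hadamard-basis result \(d\) decodes to the encrypted bit can be checked by a small classical simulation (not from the paper; \(\mathsf{Sign}\) is replaced by a hypothetical 4-bit toy function). The amplitude of outcome \(d\) after measuring \(|0\rangle|s_{0}\rangle+(-1)^{b}|1\rangle|s_{1}\rangle\) in the Hadamard basis is proportional to \((-1)^{d\cdot(0\|s_{0})}+(-1)^{b+d\cdot(1\|s_{1})}\), which is nonzero exactly when \(d\cdot(0\|s_{0}\oplus 1\|s_{1})=b\):

```python
import hashlib
import itertools

# Toy stand-in for Sign (an assumption): a deterministic 4-bit "signature".
def toy_sign(k, m):
    h = hashlib.sha256((k + m).encode()).digest()
    return [(h[0] >> i) & 1 for i in range(4)]

def dot(x, y):  # mod-2 inner product
    return sum(a * b for a, b in zip(x, y)) % 2

def hadamard_outcomes(b, k="key", r="r0"):
    x0 = [0] + toy_sign(k, "0" + r)  # basis state 0 || Sign(k, 0||r)
    x1 = [1] + toy_sign(k, "1" + r)  # basis state 1 || Sign(k, 1||r)
    outcomes = []
    for d in itertools.product([0, 1], repeat=len(x0)):
        # amplitude of |d> after Hadamards on |x0> + (-1)^b |x1> (unnormalized)
        amp = (-1) ** dot(d, x0) + (-1) ** (b + dot(d, x1))
        if amp != 0:
            outcomes.append(list(d))
    return outcomes, x0, x1

for b in (0, 1):
    ds, x0, x1 = hadamard_outcomes(b)
    diff = [a ^ c for a, c in zip(x0, x1)]  # 0||s0 XOR 1||s1
    # every measurement result d that can occur decodes to the encrypted bit b
    assert all(dot(d, diff) == b for d in ds)
    assert len(ds) == 2 ** (len(x0) - 1)  # exactly half of all d can occur
```

The two assertions are exactly the correctness relation in Equation (24): whichever \(d\) the measurement returns, the receiver's inner product recovers \(b\).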
Finally, because in that case \(d\) obtained in the third step of the \(\mathsf{Enc}\) algorithm satisfies \[b=d\cdot(0\|\mathsf{Sign}(k,0\|r)\oplus 1\|\mathsf{Sign}(k,1\|r)), \tag{26}\] we have \(b^{\prime}=b\). We prove IND-pkTA security in the next section.

## 5 Proof of IND-pkTA Security

In this section, we show IND-pkTA security of our construction to complete the proof of Theorem 4.1. The outline of the proof is as follows. The security game for the IND-pkTA security of our QPKE (Hybrid 0) is given in Figure 1. We introduce two more hybrids, Hybrid 1 (Figure 2) and Hybrid 2 (Figure 3). Hybrid 1 is the same as Hybrid 0 except that the challenger does not do the Hadamard-basis measurement in the third step of the \(\mathsf{Enc}\) algorithm, and the challenger sends the adversary \(r\) and the state over the registers \((\mathbf{A},\mathbf{B})\). Hybrid 2 is the same as Hybrid 1 except that the adversary outputs two bit strings \(\mu_{0},\mu_{1}\) and wins if \(\mu_{0}=\mathsf{Sign}(k,0\|r)\) and \(\mu_{1}=\mathsf{Sign}(k,1\|r)\). The formal proof is as follows. Assume that the IND-pkTA security of our construction is broken by a QPT adversary \(\mathcal{A}\). This means that \(\mathcal{A}\) wins Hybrid 0 with a non-negligible advantage. Then, it is clear that there is another QPT adversary \(\mathcal{A}^{\prime}\) that wins Hybrid 1 with a non-negligible advantage. (\(\mathcal{A}^{\prime}\) only needs to perform the Hadamard-basis measurement itself.) From \(\mathcal{A}^{\prime}\), we can construct a QPT adversary \(\mathcal{A}^{\prime\prime}\) that wins Hybrid 2 with a non-negligible probability by using the idea of [1]. (For details, see Section 5.1.) Finally, we show in Section 5.2 that no QPT adversary can win Hybrid 2 except for a negligible probability. We thus have a contradiction, and therefore our QPKE is IND-pkTA secure.
### From Distinguishing to Outputting Two Signatures

We present the construction of \(\mathcal{A}^{\prime\prime}\). Assume that there exists a QPT adversary \(\mathcal{A}^{\prime}\) and a polynomial \(p\) such that \[|\Pr[1\leftarrow\mathcal{A}^{\prime}\mid b=0]-\Pr[1\leftarrow\mathcal{A}^{\prime}\mid b=1]|\geq\frac{1}{p(\lambda)} \tag{28}\]

**Hybrid 0**

1. \(\mathcal{C}\) runs \((k,\mathsf{vk})\leftarrow\mathsf{Gen}(1^{\lambda})\). \(\mathcal{C}\) sends \(\mathsf{vk}\) to \(\mathcal{A}\).
2. \(\mathcal{C}\) chooses \(r_{1},...,r_{m}\leftarrow\{0,1\}^{u}\).
3. \(\mathcal{C}\) sends \(\{(r_{i},|\psi_{r_{i}}\rangle)\}_{i=1}^{m}\) to the adversary \(\mathcal{A}\), where \[|\psi_{r_{i}}\rangle\coloneqq|0\rangle\otimes|\mathsf{Sign}(k,0\|r_{i})\rangle+|1\rangle\otimes|\mathsf{Sign}(k,1\|r_{i})\rangle\;.\] (27)
4. \(\mathcal{A}\) generates a quantum state over registers \((\mathbf{A},\mathbf{B},\mathbf{C})\). (\((\mathbf{A},\mathbf{B})\) corresponds to the quantum part of \(\mathsf{pk}^{\prime}\), and \(\mathbf{C}\) corresponds to st.) \(\mathcal{A}\) sends a bit string \(r\) and the registers \((\mathbf{A},\mathbf{B})\) to \(\mathcal{C}\). \(\mathcal{A}\) keeps the register \(\mathbf{C}\).
5. \(\mathcal{C}\) coherently checks the signature in the state sent from \(\mathcal{A}\). If the result is \(\bot\), it sends \(\bot\) to \(\mathcal{A}\) and halts. If the result is \(\top\), it goes to the next step.
6. \(\mathcal{C}\) chooses \(b\leftarrow\{0,1\}\). \(\mathcal{C}\) applies \(Z^{b}\) on the register \(\mathbf{A}\).
7. \(\mathcal{C}\) measures all qubits in \((\mathbf{A},\mathbf{B})\) in the Hadamard basis to get the result \(d\). \(\mathcal{C}\) sends \((r,d)\) to \(\mathcal{A}\).
8. \(\mathcal{A}\) outputs \(b^{\prime}\).

**Hybrid 1**

1.-6. The same as Figure 1.
7. \(\mathcal{C}\) does not do the Hadamard-basis measurement, and \(\mathcal{C}\) sends \(r\) and the registers \((\mathbf{A},\mathbf{B})\) to \(\mathcal{A}\).
8. The same as Figure 1.
in Hybrid 1 (Figure 2) for all \(\lambda\in I\) with an infinite set \(I\). From \(\mathcal{A}^{\prime}\), we construct a QPT adversary \(\mathcal{A}^{\prime\prime}\) such that \[\Pr[(\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathcal{A}^{\prime\prime}]\geq\frac{1}{q(\lambda)} \tag{29}\] in Hybrid 2 (Figure 3) with a polynomial \(q\) for infinitely many \(\lambda\). Let \(t\coloneqq(k,\mathsf{vk},r_{1},...,r_{m},r)\), and \(\Pr[t]\) be the probability that \(t\) is generated in Item 1, Item 2, and Item 4 in the game of Figure 2. Let \(\mathsf{Good}\) be the event that \(\mathcal{C}\) gets \(\top\) in Item 5 in the game of Figure 2. Let \(\mathsf{Bad}\) be the event that \(\mathsf{Good}\) does not occur. Then, from Equation (28), we have \[\frac{1}{p(\lambda)} \leq\Big{|}\sum_{t}\Pr[t]\Pr[\mathsf{Good}\mid t]\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=0]+\sum_{t}\Pr[t]\Pr[\mathsf{Bad}\mid t]\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Bad},b=0]\] \[-\sum_{t}\Pr[t]\Pr[\mathsf{Good}\mid t]\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=1]-\sum_{t}\Pr[t]\Pr[\mathsf{Bad}\mid t]\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Bad},b=1]\Big{|} \tag{30}\] \[\leq\sum_{t}\Pr[t]\Pr[\mathsf{Good}\mid t]\Big{|}\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=0]-\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=1]\Big{|}\] \[+\sum_{t}\Pr[t]\Pr[\mathsf{Bad}\mid t]\Big{|}\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Bad},b=0]-\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Bad},b=1]\Big{|} \tag{31}\] \[=\sum_{t}\Pr[t]\Pr[\mathsf{Good}\mid t]\Big{|}\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=0]-\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=1]\Big{|} \tag{32}\] for all \(\lambda\in I\), because if \(\mathsf{Bad}\) occurs, \(\mathcal{A}^{\prime}\) gets only \(\bot\), which contains no information about \(b\).

Figure 1: Hybrid 0

Figure 2: Hybrid 1
(Here, we often abuse notation to just write \(t\) to mean the event that \(t\) is generated in Item 1, Item 2, and Item 4.) Therefore, if we define \[T_{\lambda}\coloneqq\Big{\{}t:\Pr[\mathsf{Good}\mid t]\cdot\Big{|}\Pr[1 \leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=0]-\Pr[1\leftarrow \mathcal{A}^{\prime}\mid t,\mathsf{Good},b=1]\Big{|}\geq\frac{1}{2p(\lambda)} \Big{\}}, \tag{33}\] we have, for all \(\lambda\in I\), \[\Pr[\mathsf{Good}\mid t]\geq\frac{1}{4p(\lambda)} \tag{34}\] and \[\Big{|}\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=0]-\Pr[1 \leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=1]\Big{|}\geq\frac{1}{2p( \lambda)} \tag{35}\] for any \(t\in T_{\lambda}\) and \[\sum_{t\in T_{\lambda}}\Pr[t]\geq\frac{1}{2p(\lambda)}. \tag{36}\] Let \(|\phi_{b}^{t,good}\rangle\) be the state over the registers \((\mathbf{A},\mathbf{B},\mathbf{C})\) immediately before Item 8 of Figure 2 given that \(t\) is generated, \(\mathsf{Good}\) occurred, and \(b\) is chosen in Item 6 of Figure 2. We can show the following lemma. (Its proof is given later.) 
**Lemma 5.1**.: _If \((\mathsf{Gen},\mathsf{Sign},\mathsf{Ver})\) is strong EUF-CMA secure, there exists a subset \(T_{\lambda}^{\prime}\subseteq T_{\lambda}\) such that the following is satisfied for all \(\lambda\in I^{\prime}\), where \(I^{\prime}\coloneqq\{\lambda\in I:\lambda\geq\lambda_{0}\}\) with a certain \(\lambda_{0}\)._

* \(\sum_{t\in T_{\lambda}^{\prime}}\Pr[t]\geq\frac{1}{4p(\lambda)}\)_._
* _For any_ \(t\in T_{\lambda}^{\prime}\)_,_ \(|\phi_{b}^{t,good}\rangle\) _is close to a state_ \[|\tilde{\phi}_{b}^{t,good}\rangle\coloneqq c_{0}\left|0\right\rangle_{\mathbf{A}}\left|\mathsf{Sign}(k,0\|r)\right\rangle_{\mathbf{B}}\left|\Psi_{0}\right\rangle_{\mathbf{C}}+(-1)^{b}c_{1}\left|1\right\rangle_{\mathbf{A}}\left|\mathsf{Sign}(k,1\|r)\right\rangle_{\mathbf{B}}\left|\Psi_{1}\right\rangle_{\mathbf{C}}\] (37) _within the trace distance_ \(\frac{1}{p^{10}(\lambda)}\)_, where_ \(c_{0}\) _and_ \(c_{1}\) _are some complex coefficients such that_ \(|c_{0}|^{2}+|c_{1}|^{2}=1\)_, and_ \(|\Psi_{0}\rangle\) _and_ \(|\Psi_{1}\rangle\) _are some normalized states._

Now let us fix \(t\in T_{\lambda}^{\prime}\). Also, assume that \(\mathsf{Good}\) occurred. Because \(T_{\lambda}^{\prime}\subseteq T_{\lambda}\), it means that \(t\in T_{\lambda}\). Then, from Equation (35), \[\Big{|}\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=0]-\Pr[1\leftarrow\mathcal{A}^{\prime}\mid t,\mathsf{Good},b=1]\Big{|}=\Delta \tag{38}\] for a non-negligible \(\Delta\geq\frac{1}{2p(\lambda)}\) for all \(\lambda\in I\). Without loss of generality, we can assume that in Item 8 of Figure 2, \(\mathcal{A}^{\prime}\) applies a unitary \(V\) on the state \(\left|\phi_{b}^{t,good}\right\rangle\), and measures the register \(\mathbf{A}\) in the computational basis to get \(b^{\prime}\in\{0,1\}\).

Figure 3: Hybrid 2
By Equation (38) we have \[V\left|\phi_{0}^{t,good}\right\rangle=\sqrt{p}\left|1\right\rangle_{\mathbf{A}}\left|\nu_{1}\right\rangle_{\mathbf{B},\mathbf{C}}+\sqrt{1-p}\left|0\right\rangle_{\mathbf{A}}\left|\nu_{0}\right\rangle_{\mathbf{B},\mathbf{C}} \tag{39}\] \[V\left|\phi_{1}^{t,good}\right\rangle=\sqrt{1-p+\Delta}\left|0\right\rangle_{\mathbf{A}}\left|\xi_{0}\right\rangle_{\mathbf{B},\mathbf{C}}+\sqrt{p-\Delta}\left|1\right\rangle_{\mathbf{A}}\left|\xi_{1}\right\rangle_{\mathbf{B},\mathbf{C}} \tag{40}\] for some real number \(p\in[0,1]\) and some normalized states \(\left|\nu_{0}\right\rangle,\left|\nu_{1}\right\rangle,\left|\xi_{0}\right\rangle,\left|\xi_{1}\right\rangle\). (This is because any state can be written as \(\sqrt{p}\left|1\right\rangle\left|\nu_{1}\right\rangle+\sqrt{1-p}\left|0\right\rangle\left|\nu_{0}\right\rangle\) with some \(p\) and normalized states \(\left|\nu_{0}\right\rangle,\left|\nu_{1}\right\rangle\), and due to Equation (38), the coefficient of \(\left|1\right\rangle\left|\xi_{1}\right\rangle\) has to be \(\sqrt{p-\Delta}\).) If we define \(W\) as \[W\coloneqq V^{\dagger}(Z\otimes I)V, \tag{41}\] we have \[\left|\left\langle\tilde{\phi}_{b}^{t,good}\right|W\left|\tilde{\phi}_{b}^{t,good}\right\rangle-\left\langle\phi_{b}^{t,good}\right|W\left|\phi_{b}^{t,good}\right\rangle\right|\leq\frac{2}{p^{10}(\lambda)} \tag{42}\] for all \(\lambda\in I^{\prime}\) from Lemma 5.1.
Therefore, \[\Big{|}c_{0}^{*}c_{1}\left\langle 0\right|\left\langle\mathsf{Sign}(k,0\|r)\right|\left\langle\Psi_{0}\right|W\left|1\right\rangle\left|\mathsf{Sign}(k,1\|r)\right\rangle\left|\Psi_{1}\right\rangle \tag{43}\] \[+c_{0}c_{1}^{*}\left\langle 1\right|\left\langle\mathsf{Sign}(k,1\|r)\right|\left\langle\Psi_{1}\right|W\left|0\right\rangle\left|\mathsf{Sign}(k,0\|r)\right\rangle\left|\Psi_{0}\right\rangle\Big{|} \tag{44}\] \[=\frac{1}{4}\Big{|}(\langle\tilde{\phi}_{0}^{t,good}|+\langle\tilde{\phi}_{1}^{t,good}|)W(|\tilde{\phi}_{0}^{t,good}\rangle-|\tilde{\phi}_{1}^{t,good}\rangle) \tag{45}\] \[+(\langle\tilde{\phi}_{0}^{t,good}|-\langle\tilde{\phi}_{1}^{t,good}|)W(|\tilde{\phi}_{0}^{t,good}\rangle+|\tilde{\phi}_{1}^{t,good}\rangle)\Big{|} \tag{46}\] \[=\frac{1}{2}\Big{|}\langle\tilde{\phi}_{0}^{t,good}|W|\tilde{\phi}_{0}^{t,good}\rangle-\langle\tilde{\phi}_{1}^{t,good}|W|\tilde{\phi}_{1}^{t,good}\rangle\Big{|} \tag{47}\] \[\geq\frac{1}{2}\Big{|}\langle\phi_{0}^{t,good}|W|\phi_{0}^{t,good}\rangle-\langle\phi_{1}^{t,good}|W|\phi_{1}^{t,good}\rangle\Big{|}-\frac{2}{p^{10}(\lambda)} \tag{48}\] \[=\frac{1}{2}\Big{|}\left(\sqrt{p}\left\langle 1\right|\left\langle\nu_{1}\right|+\sqrt{1-p}\left\langle 0\right|\left\langle\nu_{0}\right|\right)\left(-\sqrt{p}\left|1\right\rangle\left|\nu_{1}\right\rangle+\sqrt{1-p}\left|0\right\rangle\left|\nu_{0}\right\rangle\right)\] \[-\left(\sqrt{1-p+\Delta}\left\langle 0\right|\left\langle\xi_{0}\right|+\sqrt{p-\Delta}\left\langle 1\right|\left\langle\xi_{1}\right|\right)\left(\sqrt{1-p+\Delta}\left|0\right\rangle\left|\xi_{0}\right\rangle-\sqrt{p-\Delta}\left|1\right\rangle\left|\xi_{1}\right\rangle\right)\Big{|}-\frac{2}{p^{10}(\lambda)} \tag{49}\] \[=\frac{1}{2}\left|-p+(1-p)-(1-p+\Delta)+(p-\Delta)\right|-\frac{2}{p^{10}(\lambda)} \tag{50}\] \[=\Delta-\frac{2}{p^{10}(\lambda)} \tag{51}\] \[\geq\frac{1}{2p(\lambda)}-\frac{2}{p^{10}(\lambda)} \tag{52}\] \[\geq\frac{1}{4p(\lambda)} \tag{53}\] for all sufficiently large \(\lambda\in I^{\prime}\). Here, Equation (48) follows from Equation (42), and Equation (49) follows from Equations (39) and (40) and the definition of \(W\). From the triangle inequality and the facts that \(\left|c_{0}\right|\leq 1\) and \(\left|c_{1}\right|\leq 1\), \[\frac{1}{4p(\lambda)}\leq\left|c_{1}\right|\cdot\left|\left\langle 0\right|\left\langle\mathsf{Sign}(k,0\|r)\right|\left\langle\Psi_{0}\right|W\left|1\right\rangle\left|\mathsf{Sign}(k,1\|r)\right\rangle\left|\Psi_{1}\right\rangle\right| \tag{54}\] \[+\left|c_{0}\right|\cdot\left|\left\langle 1\right|\left\langle\mathsf{Sign}(k,1\|r)\right|\left\langle\Psi_{1}\right|W\left|0\right\rangle\left|\mathsf{Sign}(k,0\|r)\right\rangle\left|\Psi_{0}\right\rangle\right| \tag{55}\] for all sufficiently large \(\lambda\in I^{\prime}\). Then, \[\frac{1}{8p(\lambda)}\leq\left|c_{1}\right|\cdot\left|\left\langle 0\right|\left\langle\mathsf{Sign}(k,0\|r)\right|\left\langle\Psi_{0}\right|W\left|1\right\rangle\left|\mathsf{Sign}(k,1\|r)\right\rangle\left|\Psi_{1}\right\rangle\right| \tag{56}\] or \[\frac{1}{8p(\lambda)}\leq\left|c_{0}\right|\cdot\left|\left\langle 1\right|\left\langle\mathsf{Sign}(k,1\|r)\right|\left\langle\Psi_{1}\right|W\left|0\right\rangle\left|\mathsf{Sign}(k,0\|r)\right\rangle\left|\Psi_{0}\right\rangle\right| \tag{57}\] holds for all sufficiently large \(\lambda\in I^{\prime}\). Assume that the latter holds. (The following proof can be easily modified even if the former holds.) Then \[\frac{1}{64p^{2}(\lambda)}\leq\left|c_{0}\right|^{2}\cdot\left|\left\langle 1\right|\left\langle\mathsf{Sign}(k,1\|r)\right|\left\langle\Psi_{1}\right|W\left|0\right\rangle\left|\mathsf{Sign}(k,0\|r)\right\rangle\left|\Psi_{0}\right\rangle\right|^{2} \tag{58}\] \[\leq\left|c_{0}\right|^{2}\cdot\left\|(I\otimes\left\langle\mathsf{Sign}(k,1\|r)\right|\otimes I)W\left|0\right\rangle\left|\mathsf{Sign}(k,0\|r)\right\rangle\left|\Psi_{0}\right\rangle\right\|^{2} \tag{59}\] for all sufficiently large \(\lambda\in I^{\prime}\). With this \(W\), we construct the QPT adversary \(\mathcal{A}^{\prime\prime}\) as is shown in Figure 4. We show that \(\mathcal{A}^{\prime\prime}\) wins the game of Figure 3 with a non-negligible probability for infinitely many \(\lambda\). The probability that \(t\in T^{\prime}_{\lambda}\) and \(\mathsf{Good}\) occur in Item 1 of Figure 4 is at least \(\frac{1}{16p^{2}(\lambda)}\) for all \(\lambda\in I^{\prime}\), because of the following reasons. First, \(\sum_{t\in T^{\prime}_{\lambda}}\Pr[t]\geq\frac{1}{4p(\lambda)}\) for all \(\lambda\in I^{\prime}\) from Lemma 5.1. Second, because \(t\in T^{\prime}_{\lambda}\) means \(t\in T_{\lambda}\), \(\Pr[\mathsf{Good}\mid t]\geq\frac{1}{4p(\lambda)}\) for all \(\lambda\in I\) from Equation (34). Assume that \(t\in T^{\prime}_{\lambda}\) and \(\mathsf{Good}\) occur. If \(\mathcal{A}^{\prime\prime}\) does the operations in Item 2 and Item 3 on \(|\tilde{\phi}^{t,good}_{b}\rangle\), the probability that \((\mu_{0},\mu_{1})=(\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\) is at least \(\frac{1}{64p^{2}(\lambda)}\) for all sufficiently large \(\lambda\in I^{\prime}\) from Equation (59). From Lemma 5.1, the trace distance between \(|\phi^{t,good}_{b}\rangle\) and \(|\tilde{\phi}^{t,good}_{b}\rangle\) is at most \(\frac{1}{p^{10}(\lambda)}\) for all \(\lambda\in I^{\prime}\). Therefore, if \(\mathcal{A}^{\prime\prime}\) does the operations in Item 2 and Item 3 on \(|\phi^{t,good}_{b}\rangle\), the probability that \((\mu_{0},\mu_{1})=(\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\) is at least \(\frac{1}{64p^{2}(\lambda)}-\frac{1}{p^{10}(\lambda)}\) for all sufficiently large \(\lambda\in I^{\prime}\). Hence, the overall probability that \(\mathcal{A}^{\prime\prime}\) outputs \((\mu_{0},\mu_{1})=(\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\) is non-negligible for infinitely many \(\lambda\). We prove Lemma 5.1 to complete this subsection. Proof of Lemma 5.1.: Fix \(t\in T_{\lambda}\).
Immediately before the coherent signature test in Item 5 of Figure 2, the entire state over the registers \((\mathbf{A},\mathbf{B},\mathbf{C})\) is generally written as \[\sum_{\alpha,\beta}d_{\alpha,\beta}\left|\alpha\right\rangle_{\mathbf{A}} \left|\beta\right\rangle_{\mathbf{B}}\left|\Lambda_{\alpha,\beta}\right\rangle_ {\mathbf{C}}, \tag{60}\] where \(d_{\alpha,\beta}\) are some complex coefficients such that \(\sum_{\alpha,\beta}|d_{\alpha,\beta}|^{2}=1\), and \(|\Lambda_{\alpha,\beta}\rangle\) are some normalized states. Define the set \[S\coloneqq\{(\alpha,\beta):\mathsf{Ver}(\mathsf{vk},\alpha\|r,\beta)=\top\wedge \beta\neq\mathsf{Sign}(k,\alpha\|r)\}. \tag{61}\] The (unnormalized) state after obtaining \(\top\) in the coherent signature test in Item 5 of Figure 2 is \[d_{0,\mathsf{Sign}(k,0\|r)}\left|0\right\rangle_{\mathbf{A}} \left|\mathsf{Sign}(k,0\|r)\right\rangle_{\mathbf{B}}\left|\Lambda_{0,\mathsf{ Sign}(k,0\|r)}\right\rangle_{\mathbf{C}}\] \[+d_{1,\mathsf{Sign}(k,1\|r)}\left|1\right\rangle_{\mathbf{A}} \left|\mathsf{Sign}(k,1\|r)\right\rangle_{\mathbf{B}}\left|\Lambda_{1,\mathsf{ Sign}(k,1\|r)}\right\rangle_{\mathbf{C}}\] \[+\sum_{(\alpha,\beta)\in S}d_{\alpha,\beta}\left|\alpha\right\rangle _{\mathbf{A}}\left|\beta\right\rangle_{\mathbf{B}}\left|\Lambda_{\alpha,\beta }\right\rangle_{\mathbf{C}}. \tag{62}\] Define \[T_{\lambda}^{\prime}\coloneqq\Big{\{}t\in T_{\lambda}:\sum_{(\alpha,\beta)\in S }|d_{\alpha,\beta}|^{2}\leq\frac{1}{4p^{21}(\lambda)}\Big{\}}. \tag{63}\] If \[\sum_{t\in T_{\lambda}\setminus T_{\lambda}^{\prime}}\Pr[t]\geq\frac{1}{4p( \lambda)} \tag{64}\] for infinitely many \(\lambda\in I\), it contradicts the strong EUF-CMA security of the digital signature scheme. Therefore, \[\sum_{t\in T_{\lambda}\setminus T_{\lambda}^{\prime}}\Pr[t]\leq\frac{1}{4p( \lambda)} \tag{65}\] for all \(\lambda\in I^{\prime}\), where \(I^{\prime}\coloneqq\{\lambda\in I:\lambda\geq\lambda_{0}\}\) with a certain \(\lambda_{0}\). 
This means that \[\sum_{t\in T_{\lambda}^{\prime}}\Pr[t]\geq\sum_{t\in T_{\lambda}}\Pr[t]-\frac{1}{4p(\lambda)} \tag{66}\] \[\geq\frac{1}{2p(\lambda)}-\frac{1}{4p(\lambda)} \tag{67}\] \[=\frac{1}{4p(\lambda)} \tag{68}\] for all \(\lambda\in I^{\prime}\). Moreover, because \(t\in T_{\lambda}^{\prime}\) means \(t\in T_{\lambda}\), \(\Pr[\mathsf{Good}\mid t]\geq\frac{1}{4p(\lambda)}\) for all \(\lambda\in I\) from Equation (34). Therefore, for any \(t\in T_{\lambda}^{\prime}\), \[|d_{0,\mathsf{Sign}(k,0\|r)}|^{2}+|d_{1,\mathsf{Sign}(k,1\|r)}|^{2}+\sum_{(\alpha,\beta)\in S}|d_{\alpha,\beta}|^{2}\geq\frac{1}{4p(\lambda)} \tag{69}\] for all \(\lambda\in I\). If we renormalize the state of Equation (62) and apply \(Z^{b}\), we have \[\left|\phi_{b}^{t,good}\right\rangle=\frac{d_{0,\mathsf{Sign}(k,0\|r)}}{\sqrt{\left|d_{0,\mathsf{Sign}(k,0\|r)}\right|^{2}+\left|d_{1,\mathsf{Sign}(k,1\|r)}\right|^{2}+\sum_{(\alpha,\beta)\in S}\left|d_{\alpha,\beta}\right|^{2}}}\left|0\right\rangle_{\mathbf{A}}\left|\mathsf{Sign}(k,0\|r)\right\rangle_{\mathbf{B}}\left|\Lambda_{0,\mathsf{Sign}(k,0\|r)}\right\rangle_{\mathbf{C}} \tag{70}\] \[+(-1)^{b}\frac{d_{1,\mathsf{Sign}(k,1\|r)}}{\sqrt{\left|d_{0,\mathsf{Sign}(k,0\|r)}\right|^{2}+\left|d_{1,\mathsf{Sign}(k,1\|r)}\right|^{2}+\sum_{(\alpha,\beta)\in S}\left|d_{\alpha,\beta}\right|^{2}}}\left|1\right\rangle_{\mathbf{A}}\left|\mathsf{Sign}(k,1\|r)\right\rangle_{\mathbf{B}}\left|\Lambda_{1,\mathsf{Sign}(k,1\|r)}\right\rangle_{\mathbf{C}} \tag{71}\] \[+Z^{b}\sum_{(\alpha,\beta)\in S}\frac{d_{\alpha,\beta}}{\sqrt{\left|d_{0,\mathsf{Sign}(k,0\|r)}\right|^{2}+\left|d_{1,\mathsf{Sign}(k,1\|r)}\right|^{2}+\sum_{(\alpha,\beta)\in S}\left|d_{\alpha,\beta}\right|^{2}}}\left|\alpha\right\rangle_{\mathbf{A}}\left|\beta\right\rangle_{\mathbf{B}}\left|\Lambda_{\alpha,\beta}\right\rangle_{\mathbf{C}}. \tag{72}\] For any \(t\in T_{\lambda}^{\prime}\), its trace distance to the state \[\frac{d_{0,\mathsf{Sign}(k,0\|r)}}{\sqrt{\left|d_{0,\mathsf{Sign}(k,0\|r)}\right|^{2}+\left|d_{1,\mathsf{Sign}(k,1\|r)}\right|^{2}}}\left|0\right\rangle_{\mathbf{A}}\left|\mathsf{Sign}(k,0\|r)\right\rangle_{\mathbf{B}}\left|\Lambda_{0,\mathsf{Sign}(k,0\|r)}\right\rangle_{\mathbf{C}} \tag{73}\] \[+(-1)^{b}\frac{d_{1,\mathsf{Sign}(k,1\|r)}}{\sqrt{\left|d_{0,\mathsf{Sign}(k,0\|r)}\right|^{2}+\left|d_{1,\mathsf{Sign}(k,1\|r)}\right|^{2}}}\left|1\right\rangle_{\mathbf{A}}\left|\mathsf{Sign}(k,1\|r)\right\rangle_{\mathbf{B}}\left|\Lambda_{1,\mathsf{Sign}(k,1\|r)}\right\rangle_{\mathbf{C}} \tag{74}\] is less than \(\frac{1}{p^{10}(\lambda)}\) for all \(\lambda\in I\).

### No QPT Adversary Can Output Two Signatures

Here we show that no QPT adversary can win Hybrid 2 (Figure 3) with a non-negligible probability. We first give an intuitive argument for the proof, and then give a precise proof. The intuitive argument is as follows. First, note that the probability that all \(\{r_{i}\}_{i=1}^{m}\) are distinct in Item 2 in Figure 3 is at least \(1-\mathsf{negl}(\lambda)\). Therefore, we can assume that all \(\{r_{i}\}_{i=1}^{m}\) are distinct with only a negligible loss in the adversary's winning probability. If \(r\notin\{r_{i}\}_{i=1}^{m}\), it is clear that \(\mathcal{A}\) cannot win the game of Figure 3 except for a negligible probability. The reason is that \(\mathcal{A}\) cannot find \(\mathsf{Sign}(k,0\|r)\) or \(\mathsf{Sign}(k,1\|r)\) except for a negligible probability due to the security of the digital signature scheme. Therefore, we assume that \(r\) is equal to one of the \(\{r_{i}\}_{i=1}^{m}\). Assume that, in the game of Figure 3, \(\mathcal{C}\) is replaced with \(\mathcal{C}^{\prime}\), which is the same as \(\mathcal{C}\) except that it measures the first qubit of \(\left|\psi_{r}\right\rangle\) in the computational basis before sending the states in Item 3.
Let \(s\in\{0,1\}\) be the measurement result. Then, for any QPT adversary \(\mathcal{A}\), the probability that \(\mathcal{A}\) wins the game of Figure 3 is negligible. The reason is that \(\mathcal{A}\) cannot find \(\mathsf{Sign}(k,(s\oplus 1)\|r)\) except for a negligible probability due to the strong EUF-CMA security of the digital signature scheme. From Lemma 2.6, we therefore have \[\Pr[(\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathcal{A}\mid\mathcal{C}]\leq 2\Pr[(\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathcal{A}\mid\mathcal{C}^{\prime}] \tag{75}\] \[\leq\mathsf{negl}(\lambda), \tag{76}\] where the left-hand side of Equation (75) is the probability that \(\mathcal{A}\) outputs \((\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\) with the challenger \(\mathcal{C}\), and the right-hand side is that with the challenger \(\mathcal{C}^{\prime}\). We give a precise proof below. Let \(\mathsf{Alg}\) be an algorithm that, on input \((r_{1},...,r_{m})\), simulates \(\mathcal{C}\) and \(\mathcal{A}\) in Figure 3 and outputs \((r,\mu_{0},\mu_{1})\).
The probability that \(\mathcal{A}\) wins in the game of Figure 3 is \[\frac{1}{2^{um}}\sum_{r_{1},...,r_{m}}\sum_{r}\Pr[(r,\mathsf{Sign}( k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathsf{Alg}(r_{1},...,r_{m})] \tag{77}\] \[=\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r}\Pr[(r, \mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathsf{Alg}(r_{1},..., r_{m})]\] (78) \[+\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\notin R}\sum_{r}\Pr[(r, \mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathsf{Alg}(r_{1},..., r_{m})]\] (79) \[\leq\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r}\Pr[(r, \mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathsf{Alg}(r_{1},..., r_{m})]+\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\notin R}\] (80) \[\leq\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r}\Pr[(r, \mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow\mathsf{Alg}(r_{1},...,r_{m})]+\frac{(m-1)m}{2^{u}}\] (81) \[=\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r\in\{r_{i} \}_{i=1}^{m}}\Pr[(r,\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow \mathsf{Alg}(r_{1},...,r_{m})]\] (82) \[+\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r\notin\{r_{i }\}_{i=1}^{m}}\Pr[(r,\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow \mathsf{Alg}(r_{1},...,r_{m})]+\frac{(m-1)m}{2^{u}}\] (83) \[\leq\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r\in\{r_{i }\}_{i=1}^{m}}\Pr[(r,\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow \mathsf{Alg}(r_{1},...,r_{m})]\] (84) \[+\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\mathsf{negl}( \lambda)+\frac{(m-1)m}{2^{u}}\] (85) \[\leq\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r\in\{r_{i }\}_{i=1}^{m}}\Pr[(r,\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1\|r))\leftarrow \mathsf{Alg}(r_{1},...,r_{m})]\] (86) \[+\mathsf{negl}(\lambda)+\frac{(m-1)m}{2^{u}}\] (87) \[\leq\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r\in\{r_{i }\}_{i=1}^{m}}2\mathrm{Pr}^{\prime}[(r,\mathsf{Sign}(k,0\|r),\mathsf{Sign}(k,1 
\|r))\leftarrow\mathsf{Alg}(r_{1},...,r_{m})]\] (88) \[+\mathsf{negl}(\lambda)+\frac{(m-1)m}{2^{u}}\] (89) \[\leq\frac{1}{2^{um}}\sum_{(r_{1},...,r_{m})\in R}\sum_{r\in\{r_{i}\}_{i=1}^{m}}\mathsf{negl}(\lambda)+\mathsf{negl}(\lambda)+\frac{(m-1)m}{2^{u}}\] (90) \[\leq\mathsf{negl}(\lambda)+\mathsf{negl}(\lambda)+\frac{(m-1)m}{2^{u}}\] (91) \[=\mathsf{negl}(\lambda). \tag{92}\] Here, \(R\coloneqq\{(r_{1},...,r_{m}):\) all of them are distinct\(\}\). In Equation (85), we have used the strong EUF-CMA security of the digital signature scheme. \(\Pr^{\prime}\) is the probability that, in \(\mathsf{Alg}\), \(\mathcal{C}\) is replaced with \(\mathcal{C}^{\prime}\), which is the same as \(\mathcal{C}\) except that it measures the first qubit of \(|\psi_{r}\rangle\) in the computational basis before sending the states in Item 3. Equation (88) comes from Lemma 2.6. Equation (90) is from the strong EUF-CMA security of the digital signature scheme.

## 6 Recyclable Variant

In the construction given in Section 4, a quantum public key can be used to encrypt only one message, and a sender needs to obtain a new quantum public key whenever it encrypts a message. This is not desirable from a practical perspective. In this section, we define recyclable QPKE, in which a sender only needs to receive one quantum public key to send arbitrarily many messages. The definition is similar to QPKE as defined in Definition 3.1 except that the encryption algorithm outputs a classical _recycled_ key that can be reused to encrypt messages many times.
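The intended flow — one quantum-public-key encryption of a fresh SKE key \(K\), after which the classical recycled key alone suffices — can be sketched in Python. This is a toy illustration, not the paper's scheme: the QPKE component is mocked by functions that simply hand \(K\) over (so it is not secure), and the SKE is a hypothetical SHA-256 counter-mode stream cipher.

```python
import hashlib
import os

# Toy SKE (assumption): SHA-256 counter-mode keystream XORed into the message.
def ske_enc(K, msg):
    blocks = (len(msg) + 31) // 32
    stream = b"".join(hashlib.sha256(K + i.to_bytes(4, "big")).digest()
                      for i in range(blocks))
    return bytes(m ^ s for m, s in zip(msg, stream))

ske_dec = ske_enc  # an XOR stream cipher is its own inverse

def qpke_enc_mock(vk, pk, K):   # stand-in for QPKE.Enc (quantum part omitted)
    return ("qpke-ct", K)

def qpke_dec_mock(sk, ct):      # stand-in for QPKE.Dec
    return ct[1]

def enc(vk, pk, msg):           # QPKE'.Enc: fresh key K, then hybrid payload
    K = os.urandom(32)
    ct = qpke_enc_mock(vk, pk, K)
    return (ct, ske_enc(K, msg)), (K, ct)   # returns (ct', rk')

def r_enc(rk, msg):             # QPKE'.rEnc: purely classical, reuses (K, ct)
    K, ct = rk
    return (ct, ske_enc(K, msg))

def dec(sk, ct_prime):          # QPKE'.Dec: recover K, then the message
    ct, ct_ske = ct_prime
    return ske_dec(qpke_dec_mock(sk, ct), ct_ske)
```

The point of the design is visible in `r_enc`: after one `enc` call, every further message is encrypted with classical operations only, so no new quantum public key is needed.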
**Definition 6.1** (Recyclable QPKE).: _A recyclable QPKE scheme with message space \(\{0,1\}^{\ell}\) is a set of algorithms \((\mathsf{SKGen},\mathsf{PKGen},\mathsf{Enc},\mathsf{rEnc},\mathsf{Dec})\) such that_

* \(\mathsf{SKGen}(1^{\lambda})\rightarrow(\mathsf{sk},\mathsf{vk}):\) _It is a PPT algorithm that, on input the security parameter_ \(\lambda\)_, outputs a classical secret key_ \(\mathsf{sk}\) _and a classical verification key_ \(\mathsf{vk}\)_._
* \(\mathsf{PKGen}(\mathsf{sk})\rightarrow\mathsf{pk}:\) _It is a QPT algorithm that, on input_ \(\mathsf{sk}\)_, outputs a quantum public key_ \(\mathsf{pk}\)_._
* \(\mathsf{Enc}(\mathsf{vk},\mathsf{pk},\mathsf{msg})\rightarrow(\mathsf{ct},\mathsf{rk}):\) _It is a QPT algorithm that, on input_ \(\mathsf{vk}\)_,_ \(\mathsf{pk}\)_, and a plaintext_ \(\mathsf{msg}\in\{0,1\}^{\ell}\)_, outputs a classical ciphertext_ \(\mathsf{ct}\) _and a classical recycled key_ \(\mathsf{rk}\)_._
* \(\mathsf{rEnc}(\mathsf{rk},\mathsf{msg})\rightarrow\mathsf{ct}:\) _It is a PPT algorithm that, on input_ \(\mathsf{rk}\) _and a plaintext_ \(\mathsf{msg}\in\{0,1\}^{\ell}\)_, outputs a classical ciphertext_ \(\mathsf{ct}\)_._
* \(\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\rightarrow\mathsf{msg}^{\prime}:\) _It is a classical deterministic polynomial-time algorithm that, on input_ \(\mathsf{sk}\) _and_ \(\mathsf{ct}\)_, outputs_ \(\mathsf{msg}^{\prime}\in\{0,1\}^{\ell}\cup\{\bot\}\)_._

_We require the following correctness and IND-pkTA security._ Correctness:_For any \(\mathsf{msg},\mathsf{msg}^{\prime}\in\{0,1\}^{\ell}\),_ \[\Pr\left[\mathsf{msg}\leftarrow\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\ \wedge\ \mathsf{msg}^{\prime}\leftarrow\mathsf{Dec}(\mathsf{sk},\mathsf{ct}^{\prime}):\begin{array}{c}(\mathsf{sk},\mathsf{vk})\leftarrow\mathsf{SKGen}(1^{\lambda})\\ \mathsf{pk}\leftarrow\mathsf{PKGen}(\mathsf{sk})\\ (\mathsf{ct},\mathsf{rk})\leftarrow\mathsf{Enc}(\mathsf{vk},\mathsf{pk},\mathsf{msg})\\ \mathsf{ct}^{\prime}\leftarrow\mathsf{rEnc}(\mathsf{rk},\mathsf{msg}^{\prime})\end{array}\right]\geq 1-\mathsf{negl}(\lambda). \tag{93}\] IND-pkTA security under quantum public keys:_For any polynomial \(m\), and any QPT adversary \(\mathcal{A}\),_ \[\Pr\left[b\leftarrow\mathcal{A}^{\mathsf{rEnc}(\mathsf{rk},\cdot)}(\mathsf{ct}^{*},\mathsf{st}):\begin{array}{c}(\mathsf{sk},\mathsf{vk})\leftarrow\mathsf{SKGen}(1^{\lambda})\\ \mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{\otimes m}\\ (\mathsf{pk}^{\prime},\mathsf{msg}_{0},\mathsf{msg}_{1},\mathsf{st})\leftarrow\mathcal{A}(\mathsf{vk},\mathsf{pk}_{1},...,\mathsf{pk}_{m})\\ b\leftarrow\{0,1\}\\ (\mathsf{ct}^{*},\mathsf{rk})\leftarrow\mathsf{Enc}(\mathsf{vk},\mathsf{pk}^{\prime},\mathsf{msg}_{b})\end{array}\right]\leq\frac{1}{2}+\mathsf{negl}(\lambda). \tag{94}\] _Here, \(\mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{\otimes m}\) means that \(\mathsf{PKGen}\) is executed \(m\) times and \(\mathsf{pk}_{i}\) is the output of the \(i\)th execution of \(\mathsf{PKGen}\), \(\mathsf{rEnc}(\mathsf{rk},\cdot)\) means a classically-accessible encryption oracle, and \(\mathsf{st}\) is a quantum internal state of \(\mathcal{A}\), which can be entangled with \(\mathsf{pk}^{\prime}\)._ IND-pkTA security under recycled keys:_For any polynomial \(m\), and any QPT adversary \(\mathcal{A}\),_ \[\Pr\left[b\leftarrow\mathcal{A}^{\mathsf{rEnc}(\mathsf{rk},\cdot)}(\mathsf{ct}^{*},\mathsf{st}^{\prime}):\begin{array}{c}(\mathsf{sk},\mathsf{vk})\leftarrow\mathsf{SKGen}(1^{\lambda})\\ \mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{\otimes m}\\ (\mathsf{pk}^{\prime},\mathsf{msg},\mathsf{st})\leftarrow\mathcal{A}(\mathsf{vk},\mathsf{pk}_{1},...,\mathsf{pk}_{m})\\ \end{array}\right]\leq\frac{1}{2}+\mathsf{negl}(\lambda).
\tag{95}\] _Here, \(\mathsf{pk}_{1},...,\mathsf{pk}_{m}\leftarrow\mathsf{PKGen}(\mathsf{sk})^{\otimes m}\) means that \(\mathsf{PKGen}\) is executed \(m\) times and \(\mathsf{pk}_{i}\) is the output of the \(i\)th execution of \(\mathsf{PKGen}\), \(\mathsf{rEnc}(\mathsf{rk},\cdot)\) means a classically-accessible encryption oracle, and \(\mathsf{st}\) and \(\mathsf{st}^{\prime}\) are quantum internal states of \(\mathcal{A}\), which can be entangled with \(\mathsf{pk}^{\prime}\)._ Construction. We show a generic construction of recyclable QPKE from (non-recyclable) QPKE with classical ciphertexts and IND-CPA secure SKE via standard hybrid encryption. Let \(\mathsf{QPKE}=(\mathsf{QPKE}.\mathsf{SKGen},\mathsf{QPKE}.\mathsf{PKGen},\mathsf{QPKE}.\mathsf{Enc},\mathsf{QPKE}.\mathsf{Dec})\) be a (non-recyclable) QPKE scheme with message space \(\{0,1\}^{\lambda}\) and \(\mathsf{SKE}=(\mathsf{SKE}.\mathsf{Enc},\mathsf{SKE}.\mathsf{Dec})\) be an SKE scheme with message space \(\{0,1\}^{\ell}\). Then we construct a recyclable QPKE scheme \(\mathsf{QPKE}^{\prime}=(\mathsf{QPKE}^{\prime}.\mathsf{SKGen},\mathsf{QPKE}^{\prime}.\mathsf{PKGen},\mathsf{QPKE}^{\prime}.\mathsf{Enc},\mathsf{QPKE}^{\prime}.\mathsf{rEnc},\mathsf{QPKE}^{\prime}.\mathsf{Dec})\) with message space \(\{0,1\}^{\ell}\) as follows:

* \(\mathsf{QPKE}^{\prime}.\mathsf{SKGen}(1^{\lambda})\rightarrow(\mathsf{sk}^{\prime},\mathsf{vk}^{\prime}):\) Run \((\mathsf{sk},\mathsf{vk})\leftarrow\mathsf{QPKE}.\mathsf{SKGen}(1^{\lambda})\) and output a secret key \(\mathsf{sk}^{\prime}:=\mathsf{sk}\) and verification key \(\mathsf{vk}^{\prime}:=\mathsf{vk}\).
* \(\mathsf{QPKE}^{\prime}.\mathsf{PKGen}(\mathsf{sk}^{\prime})\rightarrow\mathsf{pk}^{\prime}:\) Run \(\mathsf{pk}\leftarrow\mathsf{QPKE}.\mathsf{PKGen}(\mathsf{sk})\) and output \(\mathsf{pk}^{\prime}:=\mathsf{pk}\).
* \(\mathsf{QPKE}^{\prime}.\mathsf{Enc}(\mathsf{vk}^{\prime},\mathsf{pk}^{\prime},\mathsf{msg})\rightarrow(\mathsf{ct}^{\prime},\mathsf{rk}^{\prime})\): Parse \(\mathsf{pk}^{\prime}=\mathsf{pk}\) and \(\mathsf{vk}^{\prime}=\mathsf{vk}\), sample \(K\leftarrow\{0,1\}^{\lambda}\), run \(\mathsf{ct}\leftarrow\mathsf{QPKE}.\mathsf{Enc}(\mathsf{vk},\mathsf{pk},K)\) and \(\mathsf{ct}_{\mathsf{ske}}\leftarrow\mathsf{SKE}.\mathsf{Enc}(K,\mathsf{msg})\), and output a ciphertext \(\mathsf{ct}^{\prime}:=(\mathsf{ct},\mathsf{ct}_{\mathsf{ske}})\) and recycled key \(\mathsf{rk}^{\prime}:=(K,\mathsf{ct})\).
* \(\mathsf{QPKE}^{\prime}.\mathsf{rEnc}(\mathsf{rk}^{\prime},\mathsf{msg})\rightarrow\mathsf{ct}^{\prime}\): Parse \(\mathsf{rk}^{\prime}=(K,\mathsf{ct})\), run \(\mathsf{ct}_{\mathsf{ske}}\leftarrow\mathsf{SKE}.\mathsf{Enc}(K,\mathsf{msg})\), and output a ciphertext \(\mathsf{ct}^{\prime}:=(\mathsf{ct},\mathsf{ct}_{\mathsf{ske}})\).
* \(\mathsf{QPKE}^{\prime}.\mathsf{Dec}(\mathsf{sk}^{\prime},\mathsf{ct}^{\prime})\rightarrow\mathsf{msg}^{\prime}\): Parse \(\mathsf{ct}^{\prime}=(\mathsf{ct},\mathsf{ct}_{\mathsf{ske}})\) and \(\mathsf{sk}^{\prime}=\mathsf{sk}\), run \(K^{\prime}\leftarrow\mathsf{QPKE}.\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\) and \(\mathsf{msg}^{\prime}\leftarrow\mathsf{SKE}.\mathsf{Dec}(K^{\prime},\mathsf{ct}_{\mathsf{ske}})\), and output \(\mathsf{msg}^{\prime}\). Correctness. Correctness of \(\mathsf{QPKE}^{\prime}\) immediately follows from correctness of \(\mathsf{QPKE}\) and \(\mathsf{SKE}\). IND-pkTA security. IND-pkTA security of \(\mathsf{QPKE}\) implies that the SKE key \(K\) chosen by \(\mathsf{QPKE}^{\prime}.\mathsf{Enc}\) is computationally indistinguishable from uniformly random for an adversary.
Then IND-CPA security of \(\mathsf{SKE}\) directly implies IND-pkTA security under quantum public keys and recycled keys of \(\mathsf{QPKE}^{\prime}\), since \(\mathsf{msg}\) is encrypted by \(\mathsf{SKE}\) under the key \(K\) in both \(\mathsf{QPKE}^{\prime}.\mathsf{Enc}\) and \(\mathsf{QPKE}^{\prime}.\mathsf{rEnc}\). _Remark 6.2_.: We can generically add decryption error detectability by using digital signatures similarly to (non-recyclable) QPKE. Acknowledgments. TM is supported by JST Moonshot R&D JPMJMS2061-5-1-1, JST FOREST, MEXT QLEAP, the Grant-in-Aid for Scientific Research (B) No.JP19H04066, the Grant-in-Aid for Transformative Research Areas (A) 21H05183, and the Grant-in-Aid for Scientific Research (A) No.22H00522.
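Returning to the construction above: the hybrid wrapper is mechanical enough to sketch in code. The following toy Python sketch shows only the data flow of \(\mathsf{QPKE}^{\prime}.\mathsf{Enc}\), \(\mathsf{QPKE}^{\prime}.\mathsf{rEnc}\) and \(\mathsf{QPKE}^{\prime}.\mathsf{Dec}\); the base QPKE and the SKE are replaced by insecure classical placeholders (the "public key" is literally the secret key, and the SKE is a hash-based stream cipher), so nothing here is cryptographically meaningful — the point is how the recycled key \(\mathsf{rk}=(K,\mathsf{ct})\) lets later encryptions reuse the already-wrapped key.

```python
import hashlib
import os

def _stream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a toy keystream (stand-in for a real IND-CPA SKE)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def ske_enc(k: bytes, msg: bytes) -> bytes:
    # SKE.Enc: XOR with a keystream derived from the key and a fresh nonce
    nonce = os.urandom(16)
    pad = _stream(k + nonce, len(msg))
    return nonce + bytes(a ^ b for a, b in zip(msg, pad))

def ske_dec(k: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    return bytes(a ^ b for a, b in zip(body, _stream(k + nonce, len(body))))

# --- trivially insecure placeholder for the base (non-recyclable) QPKE ---
def skgen():
    sk = os.urandom(32)
    vk = hashlib.sha256(sk).digest()  # stand-in verification key
    return sk, vk

def pkgen(sk):
    return sk                         # toy only: pk == sk, no security intended

def enc(vk, pk, msg):                 # QPKE'.Enc
    K = os.urandom(32)                # fresh SKE key
    ct = ske_enc(pk, K)               # wrap K (stand-in for QPKE.Enc)
    ct_ske = ske_enc(K, msg)          # encrypt the message under K
    return (ct, ct_ske), (K, ct)      # ciphertext ct' and recycled key rk' = (K, ct)

def renc(rk, msg):                    # QPKE'.rEnc: reuse K and the wrapped-key part
    K, ct = rk
    return (ct, ske_enc(K, msg))

def dec(sk, ctp):                     # QPKE'.Dec
    ct, ct_ske = ctp
    K = ske_dec(sk, ct)               # unwrap K (stand-in for QPKE.Dec)
    return ske_dec(K, ct_ske)
```

Note that `renc` never touches the public key again: all post-first-encryption work is symmetric, which is exactly what makes the scheme "recyclable".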
2307.01388
Improving ATLAS Hadronic Object Performance with ML/AI Algorithms
Hadronic object reconstruction is one of the most promising settings for cutting-edge machine learning and artificial intelligence algorithms at the LHC. In this contribution, selected highlights of ML/AI applications by ATLAS to particle and boosted-object identification, MET reconstruction and other tasks are presented.
Benjamin Hodkinson
2023-07-03T22:54:40Z
http://arxiv.org/abs/2307.01388v1
# Improving ATLAS Hadronic Object Performance with ML/AI Algorithms ###### Abstract Hadronic object reconstruction is one of the most promising settings for cutting-edge machine learning and artificial intelligence algorithms at the LHC. In this contribution, selected highlights of ML/AI applications by ATLAS to particle and boosted-object identification, MET reconstruction and other tasks are presented. Presented at _DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects_, Michigan State University, USA, 27-31 March 2023 ## 1 Introduction Hadronic objects are ubiquitous in the proton-proton collision events recorded by the ATLAS detector [1] at the LHC. This includes hadronic jets, which are reconstructed from a large number of low-level calorimeter/track-based constituent objects, and missing transverse momentum (\(p_{\mathrm{T}}^{\mathrm{miss}}\)), which involves every detector component and final-state object. The complexity and abundance of jets and \(p_{\mathrm{T}}^{\mathrm{miss}}\) in the ATLAS dataset make their reconstruction a promising setting for machine learning (ML) applications. This contribution introduces several recent developments in ATLAS which use ML to improve the performance of \(p_{\mathrm{T}}^{\mathrm{miss}}\) reconstruction [2], pion reconstruction [3] and jet tagging [4; 5; 6]. These applications can broadly be separated into two categories: 1. Regression of truth-level quantities from detector-level information. 2. Classification of hadronic objects. ## 2 METNet: A combined \(p_{\mathrm{T}}^{\mathrm{miss}}\) working point ATLAS employs several working points for \(p_{\mathrm{T}}^{\mathrm{miss}}\) reconstruction [7], each of which is optimal for different event topologies and pile-up conditions. METNet [2] is a neural network (NN) designed to pick and combine the reconstructed \(p_{\mathrm{T}}^{\mathrm{miss}}\) for each working point into a single \(p_{\mathrm{T}}^{\mathrm{miss}}\) estimate.
This is achieved by regressing particle-level ('true') \(p_{\mathrm{T}}^{\mathrm{miss}}\) given the detector-level \(p_{\mathrm{T}}^{\mathrm{miss}}\) predictions for each working point and information characterising pile-up and event topology. The NN is trained on a mixture of \(t\bar{t}\) and di-boson Monte Carlo (MC) events. The performance of two iterations of METNet is presented here: one trained using the Huber loss [8] function, and another including an additional Sinkhorn [9] contribution to the loss to reduce an observed negative bias, denoted METNet (Sk). Figure 1 shows the root-mean-squared error (a metric for resolution) of METNet and several current working points for (a) \(t\bar{t}\) and (b) \(Z\to\mu\mu\) events, in bins of \(p_{\mathrm{T}}^{\mathrm{miss,\ True}}\) and number of primary vertices respectively. METNet has improved resolution for both topologies and shows an ability to generalise to topologies such as \(Z\to\mu\mu\) which were not seen during training. The variable \(p_{\mathrm{T}}^{\mathrm{miss}}\) significance [10] is also used in ATLAS to separate processes with 'real' \(p_{\mathrm{T}}^{\mathrm{miss}}\) (from genuine invisible particles, such as neutrinos) and 'fake' \(p_{\mathrm{T}}^{\mathrm{miss}}\) (from detector mismeasurement). METNet is extended to produce a 'confidence' \(\sigma\) as well as a central \(p_{\rm T}^{\rm miss}\) prediction by using the Gaussian negative log-likelihood (GNLL) loss. A machine learning-based \(p_{\rm T}^{\rm miss}\) significance variable is then defined as METNetSig\(=p_{\rm T}^{\rm miss,\;NN}/\sigma\). Figure 2: Distributions of (a) METNetSig and (b) object-based \(p_{\mathrm{T}}^{\mathrm{miss}}\) significance for a supersymmetric signal sample and \(WW\) and \(Z\to\mu\mu\) backgrounds. The lower panel shows the signal significance for a lower-bound selection at each \(x\)-axis bin value. Plots reproduced from [2].
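The GNLL objective and the resulting significance variable are simple to write down explicitly. A minimal NumPy sketch with toy numbers follows (the loss matches the standard Gaussian negative log-likelihood up to an additive constant; the network itself, its inputs and the 15 GeV toy resolution are illustrative assumptions, not values from the paper):

```python
import numpy as np

def gaussian_nll(mu, sigma, target):
    # Gaussian negative log-likelihood up to an additive constant; the network
    # regresses both a central prediction mu and a per-event uncertainty sigma
    return np.mean(0.5 * np.log(sigma**2) + (target - mu) ** 2 / (2.0 * sigma**2))

def metnet_sig(mu, sigma):
    # METNetSig = central MET prediction divided by its predicted uncertainty
    return mu / sigma

# toy example: predictions with an assumed 15 GeV Gaussian resolution
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 200.0, size=1000)     # toy "true" MET values [GeV]
sigma = np.full_like(truth, 15.0)              # toy per-event uncertainties
pred = truth + rng.normal(0.0, 15.0, size=truth.shape)

loss = gaussian_nll(pred, sigma, truth)
sig = metnet_sig(pred, sigma)
```

Minimising `gaussian_nll` pushes `mu` towards the truth while forcing `sigma` to track the actual spread of the residuals, which is what makes the ratio usable as a significance.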
Figure 2 shows (a) METNetSig and (b) object-based \(p_{\rm T}^{\rm miss}\) significance [10] (the current ATLAS state-of-the-art) for a supersymmetric signal process plus two Standard Model backgrounds. METNetSig shows the ability to separate real and fake \(p_{\rm T}^{\rm miss}\) and has similar behaviour to object-based \(p_{\rm T}^{\rm miss}\) significance. ## 3 Pion reconstruction The ATLAS detector has non-compensating calorimetry, so being able to distinguish charged and neutral pions allows the corresponding hadronic energy depositions to be restored to the correct scale. Figure 3(a) indicates the \(\pi^{0}\) vs. \(\pi^{\pm}\) classification performance of several ML models trained in Reference [3] compared to a non-ML baseline classifier (labelled \(\mathcal{P}_{\rm clus}^{\rm EM}\)). All methods outperform the baseline and the Graph Neural Network (GNN) shows the best performance overall. Additionally, several ML models are trained to calibrate pion energy. Given that pions are produced in abundance in nearly all hadronic showers, understanding and improving pion reconstruction is central to improving jet reconstruction. The models in Reference [3] show significant improvement on current (non-ML) calibration baselines, particularly when combining tracking and calorimeter information. Figure 3(b) shows energy resolution for several ML methods along with the track resolution. The resolution is quantified as one-half the interquantile range (IQR) divided by the median predicted energy, where the IQR represents the width of the response data (where response is the ratio of the predicted pion energy to particle-level pion energy) from \(1\sigma\) to \(-1\sigma\) of the median. This captures a measure of the spread of energy predictions. The ML methods approximate the tracker energy resolution up to around 50 GeV before the calorimeter energy resolution dominates, indicating that ML is providing the best of both tracking and calorimetry performance.
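The resolution metric just described can be made concrete. A NumPy sketch, interpreting both the IQR and the median as quantities of the response distribution (response = predicted energy / particle-level energy) and using the Gaussian \(\pm 1\sigma\) quantile levels; the 50 GeV sample and its 10% width are toy inputs, not data from the study:

```python
import numpy as np

def energy_resolution(pred, truth):
    # half the width of the response distribution between its Gaussian +-1 sigma
    # quantiles (15.87% and 84.13%), divided by the median response
    response = np.asarray(pred) / np.asarray(truth)
    lo, med, hi = np.quantile(response, [0.1587, 0.5, 0.8413])
    return 0.5 * (hi - lo) / med

rng = np.random.default_rng(1)
truth = np.full(100_000, 50.0)                         # toy pions at 50 GeV
pred = truth * rng.normal(1.0, 0.1, size=truth.size)   # 10% wide response
res = energy_resolution(pred, truth)                   # close to 0.10 here
```

For a Gaussian response the quantile-based width reduces to the usual standard deviation, but unlike an RMS it is robust against the long non-Gaussian tails typical of calorimeter showers.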
Figure 3: (a) Comparison of topo-cluster classification performance for three ML methods compared to the non-ML \(\mathcal{P}_{\rm clus}^{\rm EM}\) baseline. (b) Resolution as a function of truth particle energy for several ML architectures alongside the track resolution. Plots reproduced from Reference [3]. ## 4 Boosted jet tagging Large-radius jets from massive particles (such as \(W\)-bosons, \(Z\)-bosons and top quarks) can be distinguished from light quark/gluon-initiated jets using jet substructure information. ATLAS employs a variety of taggers for this purpose, and improving their classification accuracy enhances the performance of both searches and precision measurements. The latest taggers developed in ATLAS use jets reconstructed from Unified Flow Objects [11] which combine topocluster and tracking information to provide improved pile-up resilience and jet mass resolution. In Reference [4], several top-quark taggers which were developed outside ATLAS using simplified Delphes simulated data-sets [12] are evaluated using realistic GEANT4-simulated samples. Figure 4(a) compares the efficiency of the taggers, which use constituent-based input information, to baseline deep neural networks trained on high-level (hDNN) and constituent-level (DNN) inputs. ParticleNet and the Particle Flow Network (PFN) outperform the baselines, while the Energy Flow Network (EFN) and ResNet50 underperform relative to the previous Delphes-based studies in Reference [12]. This highlights the need to develop taggers in a realistic context. As a result, the simulated data set used in this study has been made publicly available in Reference [13]. In Figure 4(b), ParticleNet shows a dependence on the QCD modelling. This is also seen for PFN, but not for EFN due to its requirement of infra-red collinear safe inputs. References [5] and [6] include the latest developments in \(W/Z\) tagging.
A DNN shows improved performance compared to the current cut-based taggers, which can be seen by comparing \(z_{\text{NN}}\) with \(D_{2}\) in Figure 5(a). However, Figure 5(b) shows that the \(z_{\text{NN}}\) tagger sculpts the QCD background jet-mass distribution to match the \(W\) jets signal topology, which would complicate background estimation strategies which use side-band regions. To address this, an adversarial NN (ANN) is trained to de-correlate the jet mass. This corresponds to a decrease in performance for the ANN, labelled \(z_{\text{ANN}}^{(\lambda=10)}\) in Figure 5(a). The performance could partially be recovered with analysis-specific mass-window requirements. Figure 4: (a) Classification efficiency for several top quark taggers. (b) The signal efficiency for ParticleNet for several \(t\bar{t}\) samples. Plots reproduced from Reference [4]. ## 5 Conclusion Hadronic object reconstruction at the LHC is ripe for ML applications. This contribution has presented some recent highlights in ATLAS, including regressing truth-level \(p_{\mathrm{T}}^{\mathrm{miss}}\) and pion energy and classifying pions and boosted jets. Development of all of these applications is ongoing and promises to enhance the performance of precision Standard Model measurements and beyond-the-Standard Model searches.
2303.05419
A catalogue of precessing black-hole-binary numerical-relativity simulations
We present a public catalogue of numerical-relativity binary-black-hole simulations. The catalogue contains datasets from 80 distinct configurations of precessing binary-black-hole systems, with mass ratios up to $m_2/m_1 = 8$, dimensionless spin magnitudes on the larger black hole up to $|\vec{S}_2|/m_2^2 = 0.8$ (the small black hole is non-spinning), and a range of five values of spin misalignment for each mass-ratio/spin combination. We discuss the physical properties of the configurations in our catalogue, and assess the accuracy of the initial configuration of each simulation and of the gravitational waveforms. We perform a careful analysis of the errors due to the finite resolution of our simulations and the finite distance from the source at which we extract the waveform data and provide a conservative estimate of the mismatch accuracy. We find that the upper limit on the mismatch uncertainty of our waveforms is $0.4\%$. In doing this we present a consistent approach to combining mismatch uncertainties from multiple error sources. We compare this release to previous catalogues and discuss how these new simulations complement the existing public datasets. In particular, this is the first catalogue to uniformly cover this parameter space of single-spin binaries and there was previously only sparse coverage of the precessing-binary parameter space for mass ratios $\gtrsim 5$. We discuss applications of these new data, and the most urgent directions for future simulation work. The public dataset can be accessed online at https://data.cardiffgravity.org/bam-catalogue/.
Eleanor Hamilton, Edward Fauchon-Jones, Mark Hannam, Charlie Hoy, Chinmay Kalaghatgi, Lionel London, Jonathan E. Thompson, Dave Yeeles, Shrobana Ghosh, Sebastian Khan, Panagiota Kolitsidou, Alex Vano-Vinuales
2023-03-09T17:09:43Z
http://arxiv.org/abs/2303.05419v1
# A catalogue of precessing black-hole-binary numerical-relativity simulations ###### Abstract We present a public catalogue of numerical-relativity binary-black-hole simulations. The catalogue contains datasets from 80 distinct configurations of precessing binary-black-hole systems, with mass ratios up to \(m_{2}/m_{1}=8\), dimensionless spin magnitudes on the larger black hole up to \(|\vec{S_{2}}|/m_{2}^{2}=0.8\) (the small black hole is non-spinning), and a range of five values of spin misalignment for each mass-ratio/spin combination. We discuss the physical properties of the configurations in our catalogue, and assess the accuracy of the initial configuration of each simulation and of the gravitational waveforms. We perform a careful analysis of the errors due to the finite resolution of our simulations and the finite distance from the source at which we extract the waveform data and provide a conservative estimate of the mismatch accuracy. We find that the upper limit on the mismatch uncertainty of our waveforms is \(0.4\%\). In doing this we present a consistent approach to combining mismatch uncertainties from multiple error sources. We compare this release to previous catalogues and discuss how these new simulations complement the existing public datasets. In particular, this is the first catalogue to uniformly cover this parameter space of single-spin binaries and there was previously only sparse coverage of the precessing-binary parameter space for mass ratios \(\gtrsim 5\). We discuss applications of these new data, and the most urgent directions for future simulation work. The public dataset can be accessed online at [https://data.cardiffgravity.org/bam-catalogue/](https://data.cardiffgravity.org/bam-catalogue/). ## I Introduction After several decades of research to solve the binary-black-hole (BBH) problem, the first numerical relativity (NR) BBH simulations through one orbit, merger and ringdown were produced in 2005 [1; 2; 3]. 
Since then many independent numerical relativity codes [4; 5; 6; 7; 8; 9; 10; 11] have been developed to simulate BBH systems for many orbits and added support for more complex configurations such as extremely high mass ratios and highly spinning black holes [12; 13; 14]. The data products from NR have been crucial for the field of gravitational-wave astronomy, including as input to develop approximate gravitational wave models [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29], to calculate remnant properties of binary mergers [30; 31; 32; 33; 34; 35; 36; 37; 38; 39], and used directly for gravitational wave injection studies [40; 41]. These NR-dependent tools have played a central role in the direct detection of gravitational waves and the measurement of their source properties over the last six years [42; 43; 44]. Gravitational wave observations during the first three LIGO-Virgo-Kagra (LVK) observing runs [42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55] have relied on theoretical models from three families; Phenom, SEOBNR and NRSurrogate. The Phenom and SEOBNR families use NR waveforms to inform the merger-ringdown part of the model by calibrating a theoretically-motivated ansatz to the numerical data; the NRSurrogate models are constructed entirely from NR input. The simplest gravitational wave models calibrated to NR data are aligned-spin models [15; 16; 22], which capture the most important features of the waveform but require NR waveforms that cover only a three-dimensional parameter space. More recent aligned spin models have benefitted from further calibration to expanded data sets [17; 18; 19; 24; 26]. Many other subsequent models have been based on these aligned-spin models, thus indirectly benefitting from calibration to NR data [56; 57; 58; 59; 60; 61; 62; 63]. Recent expansions of NR catalogues to cover the precessing parameter space have enabled the construction of the first generic-spin models calibrated entirely to NR data [29]. 
NR has been useful beyond modelling. NR waveform injections have been used in several studies, including to assess the presence of systematic bias in waveform models [64; 65], and to estimate intermediate mass black hole binary merger rates [66]. NR waveforms have also been used for direct comparisons and parameter estimation of gravitational wave observations [67; 68]. Further, NR data can be used in the construction of fits to predict the remnant properties of a BBH merger, namely the final mass and spin as well as the gravitational recoil [69; 70; 71; 72; 73; 74; 75; 76; 77]. These fits have a number of applications, such as tests of general relativity [78; 79; 80; 81]. Several large catalogues of BBH NR simulations exist [82; 83; 84; 85; 86; 87; 88]. A quasi-circular BBH is described by 8 intrinsic parameters; the masses of each of the black holes \(m_{1}\) and \(m_{2}\) and their respective spins \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\). The total mass \(M=m_{1}+m_{2}\) sets the overall frequency scale and can be factored out. We therefore choose to set \(M=1\). The dimensionless spin is defined as \(\vec{\chi_{i}}=\mathbf{S}_{i}/m_{i}^{2}\). The majority of simulations contained within these catalogues cover the precessing parameter space up to mass ratio \(q=m_{2}/m_{1}=8\) and dimensionless spin magnitudes \(\chi\leqslant 0.8\). However, the existence of simulations beyond \(q=4\) of sufficient length and accuracy to be useful in the construction of gravitational wave models is fairly sparse. There therefore exists no broad systematic covering of the precessing parameter space up to \(q=8\) with NR simulations. One purpose of the current catalogue is to provide a systematic covering of that parameter space. The primary objective of this catalogue was to support the development of a new precessing phenomenological model that is calibrated to numerical relativity waveforms [20]. 
Experience with producing previous phenomenological models suggests that we do not require an extremely dense sampling of the parameter space to produce a reasonably accurate model [15, 16]. For the first catalogue used to inform the first precessing Phenom model we therefore chose no more than five points in each parameter direction. This choice was found to be sufficient: the PhenomPNR model of the dominant contribution to the signal (the \((2,2)\) multipoles in the co-precessing frame) [20], constructed from 19 aligned-spin waveforms and 40 precessing-binary waveforms, is of comparable accuracy to the equivalent contributions to the NRSurrogate model [29], which was constructed from more than 1000 simulations over a smaller volume of parameter space. (It remains to be seen how many NR simulations are required to accurately include two-spin effects, higher multipoles, and mode asymmetries.) Secondary objectives were to contribute data that is useful to the waveform modelling community and to provide processed datasets that are appropriate for parameter estimation studies [64, 66]. Our catalogue contains datasets from 80 different configurations of precessing BBH systems. These configurations cover four mass ratios \(q=m_{2}/m_{1}\in\{1,2,4,8\}\) at four different spin magnitudes \(\chi_{2}=|\vec{S_{2}}|/m_{2}^{2}\in\{0.2,0.4,0.6,0.8\}\) each at five different spin vectors such that the angle between the orbital angular momentum and spin vector of the larger black hole is one of \(\{30^{\circ},60^{\circ},90^{\circ},120^{\circ},150^{\circ}\}\). The configurations are specified at a reference orbital frequency. The catalogue can be accessed online at [https://data.cardiffgravity.org/bam-catalogue/](https://data.cardiffgravity.org/bam-catalogue/). In the following section we briefly summarise the methods used by the BAM code [4, 5] to perform numerical simulations of BBH systems and describe the workflow we use to produce low eccentricity initial data. In Sec. 
III we provide a description of the properties of the simulations contained within the catalogue. In Sec. IV we perform a waveform accuracy analysis to validate the catalogue. Finally we conclude with Sec. V where we discuss what regions of parameter space remain uncovered and how the catalogue can be used to contribute to the continuing advance of gravitational wave data analysis. ## II Summary of Methods ### Simulation method The simulations in this catalogue were produced using BAM [4, 5], a moving-box-based mesh-refinement numerical-relativity code that solves the \(3+1\) decomposed Einstein equations. Specifically, for the simulations in this catalogue we evolve Bowen-York wormhole data [90, 91, 92] via the \(\chi\) variant of the moving-puncture treatment [2, 3] of the BSSN formulation [93, 92]. Spatial derivatives are approximated algebraically through sixth-order finite differencing in the bulk, which in turn are evolved in time through fourth-order Runge-Kutta time stepping; see Refs. [4, 5] for full details on the treatment of boundaries, buffer zones, advection derivatives, and numerical dissipation. Finally, the gravitational wave content of the system is extracted at some finite distance using the Newman-Penrose scalar \(\Psi_{4}\)[94] following the procedure outlined in Ref. [95]. Following these references, \(\Psi_{4}\) is decomposed via the spherical multipolar decomposition \[\Psi_{4}\left(\Theta,\Phi\right)=\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{\ell} \psi_{4,\ell m}\,^{-2}Y_{\ell m}\left(\Theta,\Phi\right), \tag{1}\] where \(\psi_{4,\ell m}\) are the individual multipole moments, \({}^{-2}Y_{\ell m}\) are the spin-weighted spherical harmonics over the unit-sphere defined by \(\Theta\) and \(\Phi\). The individual multipole moments can be written \[\psi_{4,\ell m}=A_{\ell m}e^{-im\phi_{\ell m}}, \tag{2}\] where \(A_{\ell m}\) and \(\phi_{\ell m}\) are the amplitude and phase of the multipoles respectively.
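As a small illustration of how these quantities are handled in practice (a NumPy sketch, not code from the BAM pipeline): the amplitude/phase split follows Eq. (2), and because \(\psi_4\) is the second time derivative of the strain, the strain follows in the frequency domain by dividing the Fourier-transformed data by \(\omega^2\) (with a sign from \((i\omega)^2\)), dropping the zero-frequency bin.

```python
import numpy as np

def amp_phase(psi4_lm):
    # amplitude and (unwrapped) phase of a complex multipole time series,
    # following psi4_lm = A_lm * exp(-i * phi_lm)  (Eq. 2)
    return np.abs(psi4_lm), np.unwrap(-np.angle(psi4_lm))

def psi4_to_strain_fd(psi4_lm, dt):
    # frequency-domain double time integration: since psi4 = d^2 h / dt^2,
    # h~(f) = -psi4~(f) / omega^2 with omega = 2*pi*f; the zero-frequency
    # bin is dropped to avoid dividing by zero
    f = np.fft.fftfreq(len(psi4_lm), d=dt)
    psi4_f = np.fft.fft(psi4_lm)
    omega2 = (2.0 * np.pi * f) ** 2
    h_f = np.zeros_like(psi4_f)
    nonzero = omega2 > 0.0
    h_f[nonzero] = -psi4_f[nonzero] / omega2[nonzero]
    return np.fft.ifft(h_f)
```

Real pipelines add windowing and low-frequency cutoffs to tame spectral leakage; this sketch only shows the core fixed-frequency integration step.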
The quantity \(\Psi_{4}\) can be converted into the gravitational wave observable known as the strain \(h\) via a double time integral. This is generally the more useful quantity to consider for applications of the numerical data, such as waveform modelling, since it is the quantity measured by gravitational wave detectors and so forms the starting point for all gravitational wave astronomy. Where the strain is required, we obtain it in the frequency domain by dividing the frequency domain \(\Psi_{4}\) data by \(\omega^{2}\), where \(\omega\) is the angular Fourier frequency [96]. The numerical domain consists of nested Cartesian grids of successively finer spacing, nested in the sense that the grid at each level \(n\) is encompassed by that of level \(n-1\). The grid spacing \(d_{n}\) for each refinement level \(n\) follows the scaling, \[d_{n}=\tfrac{d_{0}}{2^{n}}\qquad\{n\in\mathbb{Z}^{+}\}, \tag{3}\] where \(d_{0}\) is the spacing on the coarsest level. The coarsest levels (largest boxes) encompass both black holes and are fixed, while for the finer levels (smaller boxes) there is a box around each black hole, and these boxes move with the punctures. The boxes are initially specified as cubes, where the user provides the number of points along one side for each level, i.e., if \(N_{n}\) is the number of points in each direction on level \(n\), then the user specifies a list of \(N_{i}=\{N_{0},N_{1},...,N_{l_{\max}}\}\), where \(l_{\max}\) is the finest level. During evolution the code will dynamically adjust the number of points, in particular near merger when individual boxes around each puncture will be merged when they are about to overlap. In addition, \(l_{\max}\) differs for each puncture so that \(d_{\max,i}/m_{i}\) is approximately the same for each puncture. See Ref. [4] for more details on the BAM grid structure, and Ref. [97] for typical choices for numbers of levels and relative box sizes. Sec. 
II.2.2 provides further details of the choices we made for the grid configurations. The values \(N_{i}\), \(l_{\max}\) for each puncture, and the coarsest grid spacing \(d_{0}\) are all provided in the public data release. The temporal resolution is subject to a Berger-Oliger refinement scheme in which the spacing between successive time steps halves with each successive level. The finest level consists of two grids, one centred on each puncture, an arrangement that is maintained as we move up the levels so long as the grids are not so large that they would overlap. These nested grids around each individual puncture move with the punctures as they orbit. Beyond this level the two grids are replaced by a single grid that encompasses all of the moving boxes and is centered on the origin. It was found in Ref. [4] that the Berger-Oliger timestepping becomes unstable on the coarsest non-moving grids, and for these we revert to a single time step specified by a Courant factor of 0.25 applied to the finest such grid. The details are the same as those used in Refs. [4, 5], except for the use of 0.25 rather than 0.5 for the Courant factor, which is necessary to sufficiently reduce the timestepping error in long simulations. A pseudo-spectral elliptic solver is used to calculate binary wormhole initial data [91], with eccentricity reduced to \(<2\times 10^{-3}\) through a series of manual iterations of the linear momenta of the punctures in the initial parameters. This process is described in more detail in Sec. II.2.1. ### Simulation workflow #### ii.2.1 Initial data construction We wish our simulations to begin at a user specified reference orbital frequency \(M\Omega_{\rm orb}\) with spin vector \(\mathbf{S}\equiv\mathbf{S}_{2}\) on the larger black hole (which we designate the secondary). 
The orientation of \(\mathbf{S}_{2}\) can be defined by the angle \(\theta_{\rm LS}=\arccos(\hat{\mathbf{L}}_{\rm N}\cdot\hat{\mathbf{S}}_{2})\) between the spin vector \(\mathbf{S}_{2}\) and Newtonian orbital angular momentum vector \(\mathbf{L}_{\rm N}\), and the angle \(\phi_{\rm LS}=\arccos(\hat{\mathbf{r}}\cdot\hat{\mathbf{S}}_{2\perp})\) between the projection of the spin vector on to the orbital plane \(\mathbf{S}_{2\perp}\) and the separation vector \(\mathbf{r}\) from the larger black hole to the smaller. The positions and momenta of the black holes consistent with these constraints must then be determined at \(M\Omega_{\rm orb}\), approximately chosen to minimise eccentricity. Bowen-York wormhole data can then be generated from these parameters. The main task of initial data construction is therefore reduced to identifying the appropriate black-hole positions and momenta at \(M\Omega_{\rm orb}\). Two methods were used for the simulations in this catalogue. For simulations with \(\chi_{2}\in\{0.4,0.8\}\) the initial data parameters were determined by adapting the method used in previous work [98, 15, 99]. For this method the physical parameters of the system \((q,\mathbf{S}_{1},\mathbf{S}_{2})\) are specified at a much larger separation \(D_{\rm start}\) than the NR simulations will start at. The effective-one-body (EOB) equations of motion are then evolved up to \(M\Omega_{\rm orb}\) and the parameters at this frequency are used as input to a Bowen-York initial data solver. However for precessing systems this method does not allow the user to specify the exact system configuration \((q,\mathbf{S}_{1},\mathbf{S}_{2})\) at \(M\Omega_{\rm orb}\). During the course of inspiral from \(D_{\rm start}\) to \(M\Omega_{\rm orb}\) for the single-spin precessing systems in this catalogue, the angle \(\theta_{\rm LS}\) can be seen to vary no more than \(\sim 1^{\circ}\), while \(\phi_{\rm LS}\) increases continuously (see Fig. 3 and Fig. 4 in [100]).
To achieve a specific choice of \((\theta_{\rm LS},\phi_{\rm LS})\) at a prescribed value of \(M\Omega_{\rm orb}\), the method was extended with an iterative refinement of the angle \(\phi_{\rm LS}\) at \(D_{\rm start}\) until the parameters at \(M\Omega_{\rm orb}\) are within a suitable tolerance of our desired values. Full details of this adapted method are given in Appendix A. The simulations \(\chi_{2}\in\{0.2,0.6\}\) were performed later, and were able to make use of a more recent method to produce low-eccentricity initial parameters, as described in Ref. [101]. This method provides a post-Newtonian estimate of low-eccentricity parameters at a prescribed orbital frequency, making it possible to specify \((\theta_{\rm LS},\phi_{\rm LS})\) without the need for any iterative steps. This method also supports additional iteration steps to further reduce the eccentricity based on NR dynamics, however this additional iteration was not used for the simulations in this catalogue. We instead relied upon the manual perturbation approach outlined in Sec. II.2.1 to reduce eccentricity when using either approach to obtain the initial data. For all the NR configurations described in this work the azimuthal angle for the spin vector \(\mathbf{S}_{2}\) placed on the larger secondary component black hole was chosen to be \(\phi=0^{\circ}\) at \(M\Omega_{\rm orb}\). While the initial data parameters generated in Sec. II.2.1 will lead to low eccentricity simulations, in general this will not be low enough to satisfy our definition of a quasi-circular binary. We placed an upper limit on the eccentricity at \(2\times 10^{-3}\), based on the observation in Ref. [97] that the puncture dynamics do not give reliable eccentricity estimates below this value, due to gauge effects.
A standard iterative method to further reduce eccentricity is to perform a low resolution simulation for \(\sim\)1000\(M\), estimate the eccentricity, and make iterative small perturbations to the momenta of the component black holes [98, 97]. The method employed to estimate the eccentricity is described below. For most of the simulations in this work a perturbation of 0.1-0.8% is applied to the magnitude of the momenta. This is normally sufficient to reduce the eccentricity below the desired threshold. However in cases where this is not sufficient the radial component of the momenta is also reduced by 25-75%. The eccentricity reduction procedure is performed using low resolution simulations in order to reduce both the computing resources and wall time required. Once initial data parameters are found that yield a sufficiently low eccentricity then a high resolution production simulation is performed using the same parameters. The higher resolution simulations tend to have higher eccentricity than the associated low resolution simulations. Consequently, a number of the simulations presented in this paper have eccentricities marginally above the \(2\times 10^{-3}\) threshold. There are two different ways that eccentricity is estimated for the simulations in this work. For the shorter iterative eccentricity reduction simulations where the merger time is not known, the puncture separation \(D\) is fit using a quadratic function with data typically in the range \([200,700]M\) similar to the method described in Ref. [102]. The eccentricity is then estimated by the maximum absolute relative difference between the fit and the data. For production simulations the eccentricity is estimated using a fit that also incorporates the merger time [102]. In our production simulations we in general find that true eccentricity differs from that calculated in our lower-resolution eccentricity-reduction simulations. 
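The quadratic-fit estimator used for the shorter eccentricity-reduction runs is only a few lines of NumPy. A sketch with a toy inspiral trajectory (the secular slope, oscillation frequency and \(1.5\times 10^{-3}\) amplitude are illustrative assumptions, not catalogue values; only the \([200,700]M\) window comes from the text):

```python
import numpy as np

def eccentricity_estimate(t, D, window=(200.0, 700.0)):
    # fit the puncture separation D(t) with a quadratic over the fitting window
    # and take the maximum absolute relative residual as the eccentricity estimate
    mask = (t >= window[0]) & (t <= window[1])
    coeffs = np.polyfit(t[mask], D[mask], deg=2)
    fit = np.polyval(coeffs, t[mask])
    return np.max(np.abs((D[mask] - fit) / fit))

# toy inspiral: slowly shrinking separation with a small residual oscillation
t = np.linspace(0.0, 1000.0, 4001)
D = (10.0 - 1.0e-3 * t) * (1.0 + 1.5e-3 * np.cos(0.05 * t))
```

The quadratic absorbs the secular shrinking of the orbit, so what survives in the residual is the oscillatory modulation that eccentricity imprints on the separation.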
For a few of the cases in this catalogue the eccentricity of the lower-resolution simulation was below our \(2\times 10^{-3}\) threshold, but the eccentricity of the production simulation exceeded it, as can be seen in Tab. 1. Nonetheless, only a handful of cases have eccentricities above \(3\times 10^{-3}\), and only one is close to \(4\times 10^{-3}\) (CF_8).

#### Grid configurations

The simulations performed for this catalogue are all computationally expensive, requiring \(O(10^{5})\) CPU hours for each production run, and we do not have the luxury of exhaustive experiments to identify a choice of numerical grids that provides a good balance between computational efficiency and physical accuracy. In 3D simulations of this scale it is impractical to perform standard convergence tests where the grid spacing \(d_{0}\) is halved between successive runs; indeed, clean convergence has rarely been observed in binary simulations with any code, and even given promising convergence results for one binary configuration, there is no robust algorithm to determine the resolution requirements that guarantee clean convergence for a second configuration. Sec. IV presents a convergence study of several of our configurations. In this section we discuss the heuristic requirements we place on our grid configurations, based on past experience with BAM binary simulations. Our first requirement is that the width of the smallest moving box following each component black hole should be between 1.2 and 1.5 times the maximum effective coordinate diameter of the apparent horizon of its respective black hole before merger. This requirement is achieved by changing the values of the grid spacing \(d_{0}\) on the coarsest level, and the finest level that exists for the larger black hole.
The number of grid points \(N_{L}\) on the finest level can also be used to adjust the size of the finest box around the black hole, if necessary, but we find in most cases that adjusting \(d_{0}\) is sufficient. The second requirement is to have at least ten grid points per wavelength of the \((4,4)\) multipole moment on the level where gravitational waves are extracted. The maximum frequency is estimated by doubling the \((2,2)\) ringdown frequency calculated by the aligned-spin gravitational-wave model PhenomD [15; 16], using the parameters \((q,0,\chi_{2})\). In precessing configurations the ringdown frequency will always be lower than this estimate, and therefore this provides a conservative estimate of the resolution requirements. One could use a more accurate estimate of the ringdown frequency for each precessing configuration using, for example, the method described in Refs. [100; 103; 104], but for this work we found no need to do this. The required grid spacing on the level \(n\) where gravitational waves are extracted is then approximated as \(d_{n}\leq 1/(20f_{\rm RD})\). This requirement is achieved by changing the values of the finest grid spacing \(d_{0}\). If this requirement cannot be satisfied on level \(n\), and level \(n\) is not the last fixed box level, then the number of grid points \(N_{n+1}\) on level \(n+1\) is increased until the box size \(N_{n+1}\times d_{n+1}\) is large enough to support gravitational-wave extraction at the radius required. The use of much larger numbers of points on the wave-extraction level means that the wave-extraction resolution requirements are also a strong determinant of the overall computational cost, along with the resolution requirements local to the black hole. For most configurations both of the requirements outlined in the preceding two paragraphs can be satisfied.
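The two heuristic requirements above can be expressed as simple checks. This is a sketch with our own (hypothetical) helper names; the factor \(1/(20f_{\rm RD})\) follows from requiring ten points per wavelength of a signal at twice the \((2,2)\) ringdown frequency:

```python
def smallest_box_ok(box_width, horizon_diameter):
    """Requirement 1: the smallest moving box should be 1.2-1.5 times the
    maximum effective coordinate diameter of the apparent horizon."""
    return 1.2 <= box_width / horizon_diameter <= 1.5

def max_extraction_spacing(f_rd22):
    """Requirement 2: at least ten points per wavelength of the (4,4)
    multipole, whose frequency is estimated as twice the (2,2) ringdown
    frequency f_rd22, gives d <= 1 / (20 * f_rd22) (in units of M)."""
    return 1.0 / (20.0 * f_rd22)
```

For an illustrative ringdown frequency \(Mf_{\rm RD}=0.08\) this gives a maximum wave-zone grid spacing of \(0.625M\).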
However, it is not always possible to satisfy both requirements, and for such cases the smallest box sizes and the extraction-level grid spacing are balanced to achieve the best possible result.

## III Simulation properties

In this section we discuss the properties of the simulations in our catalogue. We first motivate our coverage of the single-spin parameter space, our choice of starting frequency for each binary, and our procedure to estimate initial black-hole momenta and spins to achieve quasi-circular inspiral with a prescribed spin orientation. We then discuss in detail the accuracy with which our desired configurations are achieved, in particular the accuracy of our specification of the black-hole masses and spins, and the spin orientations. Finally, we summarise the properties of the remnant black holes.

### Simulation configurations

Our catalogue consists of dynamics and waveform data from NR simulations of 80 binary-black-hole configurations. We chose configurations with four mass ratios \(q=m_{2}/m_{1}\in\{1,2,4,8\}\), four values of the dimensionless spin on the larger black hole, \(\chi_{2}=|\mathbf{S}_{2}|/m_{2}^{2}\in\{0.2,0.4,0.6,0.8\}\) (the smaller black hole has zero spin), and five values of the misalignment of the black-hole spin with the direction of the Newtonian orbital angular momentum, \(\theta_{\rm LS}=\arccos(\hat{\mathbf{L}}_{\rm N}\cdot\hat{\mathbf{S}}_{2})\in\{30^{\circ},60^{\circ},90^{\circ},120^{\circ},150^{\circ}\}\). The configurations with spins \(\chi_{2}\in\{0.4,0.8\}\) were simulated first and used to produce the first PhenomPNR model, and the later simulations at \(\chi_{2}\in\{0.2,0.6\}\) were used as independent verification waveforms [20]. For each simulation two mass parameters \(m_{1},m_{2}\) were chosen such that \(M=m_{1}+m_{2}=1\). The initial data are iteratively constructed from these parameters such that the ADM mass of each puncture equals its respective mass parameter to within 0.02% [4].
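The parameter grid described above can be enumerated directly; a minimal sketch (variable names are ours) confirming the count of 80 configurations:

```python
from itertools import product

# Single-spin parameter grid of the catalogue: 4 mass ratios x 4 spin
# magnitudes (on the larger black hole) x 5 misalignment angles = 80.
mass_ratios = [1, 2, 4, 8]
spin_magnitudes = [0.2, 0.4, 0.6, 0.8]
theta_ls_deg = [30, 60, 90, 120, 150]

configurations = list(product(mass_ratios, spin_magnitudes, theta_ls_deg))
```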
At subsequent times the masses of each black hole are recorded as the apparent-horizon masses \(M_{\rm AH,1},M_{\rm AH,2}\) of each puncture, which are related to the black-hole masses \(m_{1},m_{2}\) through the Christodoulou formula [105]. This approach agrees well with the ADM mass of each puncture; the level of agreement is quantified further in Sec. III.2. In addition to the masses and spins, we must also choose the initial separation of the binary. For a binary undergoing non-eccentric inspiral there is a one-to-one correspondence between the black-hole separation and the orbital frequency, so we may alternatively specify the initial orbital frequency, \(M\Omega_{\rm orb}\). For this catalogue we prefer to choose \(M\Omega_{\rm orb}\), because our primary purpose is to construct a frequency-domain waveform model, and it is convenient to be able to start the NR tuning at the same frequency for each configuration. This also motivates the iterative procedure described in Sec. II.2.1, whose goal is to find parameters consistent with non-eccentric inspiral for a configuration defined at a specified starting frequency. For the first simulations we performed, at mass ratios \(q=1\) and \(q=2\), we chose \(M\Omega_{\rm orb}=0.0225\). This value was chosen to produce simulations of \(\sim\)2000\(M\) in length, which we expected to be sufficiently accurate for our modelling purposes, based on the experience of producing the aligned-spin simulations in Refs. [15; 16]. The duration of the simulations varies with the binary mass ratio and the magnitude of the component of the spin aligned with the orbital angular momentum. At leading post-Newtonian order the merger time from a given starting frequency scales as \(\Delta T\sim 1/\eta\), where \(\eta=m_{1}m_{2}/M^{2}\) is the symmetric mass ratio.
Therefore, if simulations at mass ratios \(q=1\) (\(\eta=0.25\)) and \(q=8\) (\(\eta\approx 0.1\)) start at the same orbital frequency, the \(q=8\) simulation will take roughly 2.5 times as long to merge. (This is a first-order approximation, and we see in the final results that the variation is not quite so extreme.) In addition, if the black-hole spin is aligned with the orbital angular momentum, the binary will inspiral more slowly, and this will also increase the time to merger. Conversely, a spin in the opposite direction to the orbital angular momentum will decrease the time to merger. This effect of spin is most easily seen in PN calculations, e.g., Refs. [106; 107]. For example, for our \(q=2\) configurations with \(\chi_{2}=0.8\), where all simulations begin at \(M\Omega_{\rm orb}=0.0225\), we see that the \(\theta_{\rm LS}=30^{\circ}\) configuration merges in \(2254M\), while the \(\theta_{\rm LS}=150^{\circ}\) configuration merges in only \(1505M\). Since we do not wish to perform expensive tests of the resolution requirements needed to achieve similar levels of accuracy for much longer simulations, for mass ratios \(q=4\) and \(q=8\) we adjust the starting frequency to limit the time to merger to approximately \(2000M\). To meet this soft requirement of merger by \(2000M\) we estimate the merger time using the LALSimulation [108] implementation of PhenomD [16]. This provides a utility function XLALSimIMRPhenomDChirpTime that calculates the time until the peak of the \((2,2)\) strain for a specific system configuration, given a starting gravitational-wave frequency, which is approximately twice the orbital frequency.
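The leading-order mass-ratio scaling quoted above is easy to verify numerically (a back-of-the-envelope sketch, not part of the simulation pipeline):

```python
def symmetric_mass_ratio(q):
    """eta = m1*m2 / M^2 for mass ratio q = m2/m1 (with M = m1 + m2)."""
    return q / (1.0 + q) ** 2

# Delta T ~ 1/eta at leading PN order, so from a fixed starting
# frequency the q = 8 binary takes ~2.5 times as long to merge as q = 1.
merger_time_ratio = symmetric_mass_ratio(1) / symmetric_mass_ratio(8)  # 2.53125
```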
The starting frequency is optimized using a simple interval bisection procedure until the peak time is \(\sim 2000M\). A lower bound on the orbital frequency is set at \(M\Omega_{\rm orb}=0.0225\). The average retarded merger time (calculated as detailed below) for the simulations that required a higher starting orbital frequency was \(1983M\), with a minimum of \(1610M\) and a maximum of \(2169M\). While XLALSimIMRPhenomDChirpTime performed sufficiently well, overall it slightly underestimated the merger time. One simulation (CF_55) was mistakenly performed with an increased starting frequency, resulting in a much shorter simulation with a retarded merger time of just \(945M\). The properties of each simulation are presented in Tabs. 1 and 2.

[Table 1: properties of each simulation. Columns: name, mass ratio \(q\), spin magnitude \(\chi\), spin angle \(\theta_{\rm LS}\) (\(^{\circ}\)), effective spins \(\chi_{\rm eff}\) and \(\chi_{\rm p}\), initial separation \(D/M\), eccentricity \(e\) (\(\times 10^{-3}\)), initial orbital frequency \(M\Omega_{\rm orb}\) (\(\times 10^{-2}\)), retarded merger time \(t_{\rm M}\), number of orbits \(N_{\rm orb}\), remnant mass \(M_{f}\), remnant spin \(\chi_{f}\), and recoil velocity \(v_{R}\) (km s\(^{-1}\)). Bracketed entries are values measured at the relaxed time.]
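For the single-spin systems in this catalogue the tabulated effective spin parameters follow directly from \((q,\chi_{2},\theta_{\rm LS})\), via \(\chi_{\rm eff}=m_{2}\chi_{2}\cos\theta_{\rm LS}/M\) and \(\chi_{\rm p}=\chi_{2}\sin\theta_{\rm LS}\). A short illustrative helper (ours, not from the paper's pipeline) to reproduce these entries:

```python
import math

def single_spin_effective_params(q, chi2, theta_ls_deg):
    """chi_eff and chi_p for spin only on the larger black hole,
    with q = m2/m1 >= 1 and total mass M = m1 + m2 = 1."""
    m2 = q / (1.0 + q)
    theta = math.radians(theta_ls_deg)
    return m2 * chi2 * math.cos(theta), chi2 * math.sin(theta)
```

For \((q,\chi_{2},\theta_{\rm LS})=(1,0.2,30^{\circ})\) this gives \((\chi_{\rm eff},\chi_{\rm p})\approx(0.087,0.100)\), and for \((4,0.2,30^{\circ})\) it gives \(\approx(0.139,0.100)\), matching the tabulated values.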
Each configuration is characterised by its mass ratio \(q\), the dimensionless spin magnitude \(\chi_{2}=|\mathbf{S}_{2}|/m_{2}^{2}\), the spin angle \(\theta_{\rm LS}=\arccos(\hat{\mathbf{L}}_{\rm N}\cdot\hat{\mathbf{S}}_{2})\), the initial orbital frequency \(M\Omega_{\rm orb}\) (or, alternatively, the initial binary separation \(D/M\)), and the binary's eccentricity, \(e\). For the final values reported in Tabs. 1 and 2, the eccentricity \(e\) is estimated over the region \([300,800]M\) using the method described in Ref. [102]. We also show the effective spin parameters \(\chi_{\rm eff}\) and \(\chi_{\rm p}\). The effective aligned spin \(\chi_{\rm eff}\) is defined in terms of the individual parallel spin components \(\chi_{i}^{\parallel}=\mathbf{\chi}_{i}\cdot\hat{\mathbf{L}}_{\rm N}\) as [109],

\[\chi_{\rm eff}=\frac{m_{1}\chi_{1}^{\parallel}+m_{2}\chi_{2}^{\parallel}}{M}, \tag{4}\]

and parameterises the dominant spin effect on the orbital phasing, as discussed in Refs. [106; 107; 110]. The effective precession spin \(\chi_{\rm p}\) is defined as [100],

\[\chi_{\rm p}=\frac{S_{\rm p}}{m_{2}^{2}}, \tag{5}\]

where \(A_{2}S_{\rm p}=\max\left(A_{1}S_{1}^{\perp},A_{2}S_{2}^{\perp}\right)\), \(A_{1}=2+3m_{2}/(2m_{1})\), and \(A_{2}=2+3m_{1}/(2m_{2})\). In a generic two-spin system the dominant precession effect can be approximated by a single-spin system where the larger black hole has an in-plane spin of \(\chi_{\rm p}\), based on the leading-order spin precession effects [111; 112]. In the single-spin configurations in this catalogue, we will always have \(\chi_{\rm eff}=m_{2}\chi_{2}\cos\theta_{\rm LS}/M\) and \(\chi_{\rm p}=\chi_{2}\sin\theta_{\rm LS}\).

[Table 2: continuation of Tab. 1 for the remaining configurations, with the same columns.]

In Tab. 1 we provide the quantities \((\chi,\theta_{\rm LS},\chi_{\rm eff},\chi_{\rm p},D/M)\) as specified in the initial data and (in brackets) at a _relaxed time_, \(t_{\rm rel}\). This is the time at which we estimate that the unphysical junk radiation in the initial data has radiated away, and the GW data can be used for analysis and modelling. We wish \(t_{\rm rel}\) to be as early as possible, to maximise the length of the usable waveform. We choose a relaxed time of

\[t_{\rm rel}=t_{\rm peak}+2t_{\rm damp}, \tag{6}\]

where \(t_{\rm peak}\) is the time of the peak amplitude of the junk radiation in the \((2,2)\) multipole moment of \(\Psi_{4}\) and \(t_{\rm damp}\) is an estimate of the exponential decay time of the junk radiation, which we estimate as \(t_{\rm damp}=76m_{2}\). The damping time of the \((2,2,0)\) quasi-normal mode for a nonspinning black hole of mass \(m\) is approximately \(71m\), and approximately \(83m\) for a black hole with dimensionless spin magnitude 0.8 [113]; we find that \(76m\) is a reasonable choice for all of the configurations in this catalogue. In Tab.
1 the initial orbital frequency \(M\Omega_{\rm orb}\) is calculated at \(t_{\rm rel}\), and the number of orbits \(N_{\rm orb}\) is calculated from \(t_{\rm rel}\) to the merger time, which we define to be the time at which the peak in the \(\ell=2\) multipole moments of \(\Psi_{4}\) occurs. The retarded merger time, which we denote as \(t_{\rm M}\), is given by the difference between the merger time and the tortoise co-ordinate

\[r^{*}=r+2\ln\left|\frac{r}{2}-1\right|, \tag{7}\]

where \(r\) is the distance from the punctures at which the \(\Psi_{4}\) data are extracted. The final black hole has a mass of \(M_{f}\), a spin of \(\chi_{f}\) and a recoil velocity \(v_{R}\). We discuss the calculation of these remnant quantities in more detail in Sec. III.4 below.

### Initial black-hole masses

We estimate the black-hole masses using the ADM mass calculated at the puncture. The black holes are represented in the initial data as wormholes, and the ADM mass calculated at the second asymptotically flat end of each wormhole provides a good estimate of that black hole's mass. In the puncture framework, this mass estimate is easy to calculate at each black hole's puncture [90]. The ADM puncture mass agrees well with the mass calculated from the area of the apparent horizon in the case of nonspinning black holes [114], but becomes less accurate for high spins [98]. Figure 1 demonstrates the effect the spin magnitude has on the initial-data ADM puncture mass as a function of time from the start of the simulation. The results in Appendix A of Ref. [98] suggest that the error in the mass estimate could be on the order of \(\sim\)0.5% for black holes with spin 0.8. However, the estimates in that paper were made on the initial data. Our results suggest that after the junk radiation has left the system (most radiates to infinity, but some falls back into the black hole), the apparent-horizon estimate of the mass is closer to the original ADM-puncture-mass estimate.
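The Christodoulou formula referred to above relates the horizon area and spin to the black-hole mass via \(m_{\rm irr}=\sqrt{A/16\pi}\) and \(m^{2}=m_{\rm irr}^{2}+S^{2}/(4m_{\rm irr}^{2})\). A minimal sketch (the helper function is ours):

```python
import math

def christodoulou_mass(horizon_area, spin_magnitude):
    """Black-hole mass from apparent-horizon area A and spin magnitude S:
        m_irr^2 = A / (16*pi),   m^2 = m_irr^2 + S^2 / (4 * m_irr^2).
    """
    m_irr_sq = horizon_area / (16.0 * math.pi)
    return math.sqrt(m_irr_sq + spin_magnitude**2 / (4.0 * m_irr_sq))
```

As a consistency check, a Kerr black hole with \(m=1\) and \(\chi=0.8\) has \(m_{\rm irr}^{2}=(1+\sqrt{1-\chi^{2}})/2=0.8\), and the formula indeed returns \(m=1\).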
We see that for black holes with spins of 0.4 the error due to using the ADM puncture mass is on the order of \(\sim\)0.01%, while for spins of 0.8 it is \(\sim\)0.04%. (The oscillations in Fig. 1 are due to uncertainties in the apparent-horizon estimate.) From this we conclude that the errors in the mass estimates are negligible.

### Initial black-hole spins

The black-hole spin \(\mathbf{S}_{2}\) is specified as part of the Bowen-York extrinsic curvature. The main source of uncertainty in the dimensionless spin \(\chi_{2}=|\mathbf{S}_{2}|/m_{2}^{2}\) is the accuracy of the mass: as some of the junk radiation falls into the black hole, the mass increases, and so \(\chi_{2}\) decreases. However, as we saw previously, the final value of the mass as estimated from the area of the apparent horizon agrees well with the nominal value for each configuration. We also see in Tab. 1 that there is only a small discrepancy in the spin magnitude after the relaxation time. Since the initial black-hole spins are prescribed analytically in the Bowen-York initial data, we can reliably estimate the uncertainty in the apparent-horizon measurement of the spin magnitude to be within \(\sim\)0.001. During the inspiral \(\theta_{\rm LS}\) is not constant; it oscillates, as illustrated for one configuration in Fig. 2. Ideally, we would set up our simulations so that the mean value of \(\theta_{\rm LS}\) was equal to our prescribed value at the start frequency. We see in Fig. 2 the two ways in which our data deviate from this ideal: (1) there is an inaccuracy in the initial value of \(\theta_{\rm LS}\), which is within the tolerance set in our initial-data construction procedure, and (2) this value is at an extremum of the oscillations in \(\theta_{\rm LS}\), and so the mean will be offset from the target value in the initial-data construction.
We also see that the mean value slowly varies over the course of the simulation, although typically by only a fraction of a degree over the entire inspiral. In Tabs. 1 and 2 we report the mean value of \(\theta_{\rm LS}\) at the start of the simulation and at the relaxed time.

[Figure 1: Comparison of the relative percentage error between the apparent-horizon mass and the initial-data ADM puncture mass for the larger black hole, as a function of simulation time \(t\). Both simulations have initial parameters \(q=2\), \(\theta_{\rm LS}=60^{\circ}\). The solid black line is for a configuration with dimensionless spin magnitude \(\chi_{2}=0.4\) and the dashed black line represents \(\chi_{2}=0.8\).]

We estimate the value at the relaxed time by fitting to \(\theta_{\rm LS}\) a sinusoidal ansatz of the form

\[\left(A_{0}+A_{1}t\right)\sin\left(2\pi\left(f_{0}t+f_{1}t^{2}\right)+\varphi\right)+C_{0}+C_{1}t, \tag{8}\]

where \(A_{0}\), \(A_{1}\), \(f_{0}\), \(f_{1}\), \(\varphi\), \(C_{0}\) and \(C_{1}\) are all free parameters, from the relaxed time up to three orbits after the relaxed time. The value of the linear part of the fit at the relaxed time is reported in Tabs. 1 and 2 instead of the pointwise value of the NR data for \(\theta_{\rm LS}\). An example of this fit can be seen in Fig. 2. The resulting value of \(\theta_{\rm LS}\) is used, along with the relaxed-time value of the spin magnitude, to calculate the relaxed-time values of \(\chi_{\rm eff}\) and \(\chi_{\rm p}\).

### Remnant properties

The final black hole that remains after the merger is characterised by its mass, spin, and recoil. We report each of these quantities in Tabs. 1 and 2. As with the relaxed-time quantities, the mass and spin of the final black hole, \(M_{f}\) and \(\chi_{f}\), are calculated from the apparent horizon [115]. As a consistency check we also estimate the mass and angular momentum of the final spacetime from the gravitational-wave signal.
The mass can be calculated by subtracting the radiated energy from the initial ADM mass of the spacetime. The radiated energy is in turn calculated from the gravitational-wave signal measured at a series of extraction radii [4], and the result is extrapolated to infinity. The final mass estimated in this way typically agrees with the horizon measure to within \(5\times 10^{-4}\). Given the mass, perturbation theory provides a relationship between the black-hole spin and the frequency of the signal multipoles during ringdown [116; 104]. We calculate the ringdown frequency of the \((\ell=2,|m|=2)\) multipoles by taking the Fourier transform of the waveform between \(10M\) and \(100M\) after merger [117], where merger is here defined as the time at which the sum of the squares of the \(\ell=2\) multipoles is maximised. The ringdown frequency is then the frequency at which the peak in this frequency-domain post-merger waveform occurs [118]. This then allows us to make an independent estimate of the final spin. We find that this estimate of the final spin typically agrees with the horizon measure to within \(5\times 10^{-3}\). We calculated the recoil (or kick) velocity of the final black hole by integrating the radiated linear momentum from the relaxed time \(t_{\rm rel}\) until the end of the simulation. We report here only the magnitude of the recoil velocity, \(v_{R}\). The linear momentum is itself calculated as described in Ref. [4]. We used the value of the linear momentum extracted at a distance of \(90M\) from the source. Since the recoil velocity is very sensitive to the in-plane spin directions, this catalogue does not comprehensively explore the range of recoil velocities that can be seen for systems with mass ratios up to \(q=8\) and dimensionless spin magnitudes up to \(\chi=0.8\). However, from the values presented in Tabs.
1 and 2, we can see that the largest-magnitude kick velocities tend to be seen for systems where the two initial black holes are equal in mass, a general trend that can be seen even in inspiral post-Newtonian estimates [112], and for full merger calculations in the numerical-relativity recoil studies cited in the Introduction. We investigated the effect of the finite resolution of the simulations, and of the radius at which the rate of change of the linear momentum was extracted, on the calculation of the final recoil velocity, and found it to be negligible. The effect of the extraction radius was found to be less than \(10\%\) of the final recoil velocity, while the effect of the resolution was even smaller. We compared the results of our calculation of the recoil velocity with the prediction given by the NR surrogate model NRSur7dq4 [29] for those cases within the catalogue that lie within the calibration region of NRSur7dq4 (\(q\leq 4\)). To obtain this prediction, we used the values of the black-hole spins, rotated into the LAL frame [119], and the orbital frequency \(100M\) prior to merger. This comparison is shown in Fig. 3. As can be seen from these results, for most of the cases contained within the catalogue, the calculation from the radiated linear momentum agrees well with the prediction by NRSur7dq4. However, in a small number of cases (most notably CF_37 and CF_38) the two values differ by around \(50\%\) of the value calculated from the radiated linear momentum. These values nevertheless remain within the bounds predicted by NRSur7dq4 for an equivalent configuration but with a different value for the in-plane spin angle. We therefore do not find these discrepancies too concerning, and we leave determining their exact cause to a future investigation.

## IV Waveform accuracy

In order to assess the accuracy of the data that comprise this catalogue we studied a subset of four of the configurations described in Tabs. 1 and 2.
These four configurations are CF_47, CF_59, CF_66, and CF_80, with physical parameters \((q,\chi,\theta_{\rm LS})=\{\left(4,0.4,60\right),\left(4,0.8,120\right),\left(8,0.4,30\right),\left(8,0.8,150\right)\}\). The set of simulations used in the accuracy analysis of the \((4,0.4,60)\) case were performed with a lower starting frequency of \(M\Omega_{\rm orb}=0.023\) to provide an assessment of the accuracy of a longer simulation.

[Figure 2: Linear trend of the angle \(\theta_{\rm LS}\) between the Newtonian orbital angular momentum and the spin of the spinning black hole as a function of simulation time \(t\). \(\theta_{\rm LS}\) as determined from the NR data is plotted as a thick black line. The initial-data value of \(\theta_{\rm LS}\) is plotted as a horizontal thick black line. The dashed black line is a sinusoidal fit using Eq. (8). The vertical black lines mark the fit bounds. The linear part of the sinusoidal fit is plotted as a solid black line. The dotted black line marks the value of the linear trend line at the lower bound. Lower-opacity lines of the NR data and extrapolated fit are plotted outside of the fit region.]

The two main sources of error in our waveforms are the finite resolution of the simulation and the finite radius at which the data are extracted. In order to assess the effect of the finite resolution, we performed a set of three simulations with low, medium and high resolution for each of the four configurations listed above. We also performed an additional simulation with very high resolution for the \((8,0.8,150)\) configuration. These resolutions correspond to a number of grid points \(N=\{80,96,120,144\}\) in the boxes surrounding the punctures. Typically the width of the smallest box around each black hole is on the order of \(\sim 2m/N\), where \(m\) is the mass of that black hole; the details of how the grid is determined for each configuration are given in Sec. II.2.2.
We extracted the waveform data at \(R_{\text{ext}}=\{50,60,70,80,90\}M\), which were all on the same refinement level. In quantifying the error in the waveforms due to these two sources, we focus on estimating the mismatch between the medium-resolution waveforms extracted at a distance of \(90M\) from the source and the "true" waveform at infinitely good resolution and infinitely far from the source. We calculated the convergence order of the BAM code with respect to the resolution and extraction radius, then used this to extrapolate the mismatch. We also used Richardson extrapolation to estimate the truncation error due to resolution and extraction radius. Mismatches are calculated from a noise-weighted inner product between waveforms, and extrapolate differently to the quantities that are usually considered in a convergence analysis, e.g., waveform amplitude and phase. In Sec. IV.3.1 we sketch out how standard numerical convergence properties translate to the waveform mismatch, and provide more detailed derivations in Appendix C.

### Data quality

Our accuracy analysis of the gravitational waveforms calculated from our simulations focuses on the mismatch uncertainty, as detailed in Sec. IV.3. This is because it is the overall mismatch uncertainty that is most relevant to most gravitational-wave astronomy applications. We also consider the error in the phase and amplitude of the dominant multipole of the co-precessing waveform in Sec. IV.2. However, we often also wish to know the accuracy of the individual signal multipoles, for example when using them to construct waveform models, or when using the NR waveforms as proxy signals to test gravitational-wave data-analysis pipelines. In this paper we do not perform a separate convergence analysis of the individual multipoles; given that clean convergence is rare in any binary-black-hole waveforms, even for the dominant multipoles, we do not expect a convergence analysis of sub-dominant multipoles to be informative.
Here we simply note that the phasing accuracy of the waveforms is dominated by the phase accuracy of the inspiral dynamics, and this can be assessed through an accuracy analysis of the dominant multipole. (An important exception is the signal near merger, as discussed in Ref. [120].) For the signal amplitude we assess the accuracy via the level of noise in the data. For example, Fig. 4 compares the relative strength of the gravitational-wave multipoles for two simulations. We see that it is not possible to conclude that a particular set of multipoles will always be reliable. In the CF_7 simulation the \((3,3)\) and \((5,5)\) multipoles cannot be trusted before merger; we would not expect these to be useful, for example, to calibrate a model of the signal amplitude. On the other hand, in the CF_79 simulation we see that, despite a low level of noise at early times, all of the multipoles in the figure could well be used to model the amplitude. Rather than choose a set of "trustworthy" multipoles, we instead suggest that for most applications one should use only the parts of a post-relaxation-time \(\psi_{4,\ell m}\) multipole with an amplitude above \(10^{-5}\). Depending on the application, of course, one may wish to apply a more or less stringent requirement.

### Amplitude and Phase accuracy

In order to estimate the numerical error in the waveform quantities due to the finite resolution of the simulation and the finite radius at which the data were extracted, we performed Richardson extrapolation; see Appendix C. This requires first estimating the convergence order of the code with respect to these quantities. We first processed the data, removing the junk radiation from the waveform in the inertial frame in which the simulation was performed.
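The amplitude threshold suggested above is straightforward to apply in post-processing. The sketch below is a minimal NumPy version with hypothetical variable and function names, assuming the \(\psi_{4,\ell m}\) data are stored as a complex time series:

```python
import numpy as np

def usable_mask(t, psi4_lm, t_relax, threshold=1e-5):
    """Boolean mask selecting the post-relaxation-time samples of a
    psi4 multipole whose amplitude exceeds the suggested 1e-5 floor."""
    return (t >= t_relax) & (np.abs(psi4_lm) > threshold)
```

A more or less stringent requirement simply corresponds to a different `threshold` value.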
Figure 3: A comparison of the calculation of the recoil velocity from the radiated linear momentum calculated by the BAM code (black crosses) with the prediction by the NRSur7dq4 model (blue dots).

We aligned the waveforms at merger, where merger is defined to be the time at which the quantity \[\mathcal{A}^{2}=\sum_{m=-2}^{2}\left|A_{2m}\left(t\right)\right|^{2}, \tag{9}\] where \(A_{2m}\) are the amplitudes of the \(\ell=2\) multipoles, is maximised. The waveforms were resampled using a constant time step of 0.1M. We then rotated the waveform into the co-precessing frame and aligned the co-precessing phases at merger. The co-precessing frame is one which precesses along with the binary and is advantageous here as it means we can focus on the error in a single multipole (the \((2,2)\) multipole, which is dominant in this frame), rather than considering the error in each of the \(\ell=2\) multipoles (which all have appreciable power in the inertial frame) independently. The quantities for which we are interested in quantifying the numerical error are the amplitude and phase of the (2,2) multipole in the co-precessing frame as well as two of the Euler angles \(\alpha\) and \(\beta\) required to rotate the waveform from the inertial frame into the co-precessing frame. The Euler angles were calculated using the method detailed in Refs. [121; 122]. The standard way to perform a convergence test with respect to the resolution is to perform a set of three simulations where the resolution improves by a factor of two between each of the simulations. This is computationally prohibitive -- the high resolution run would use \(2^{6}\) times as much memory as the low resolution run. Similarly, with BAM's box-based mesh refinement we cannot extract a waveform at three different radii on the same level a reasonable distance from the source if each of the extraction radii is twice as far away from the source as the previous one.
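The merger-time definition of Eq. (9) reduces to an argmax over the summed \(\ell=2\) amplitudes. A minimal sketch (hypothetical function name, assuming the multipoles are stored as complex NumPy arrays on a common time grid):

```python
import numpy as np

def merger_index(l2_modes):
    """Index of merger, defined as the time at which
    A^2 = sum_m |A_2m(t)|^2 over the l=2 multipoles is maximised.
    l2_modes: dict mapping m in {-2,...,2} to complex time series."""
    A2 = sum(np.abs(h) ** 2 for h in l2_modes.values())
    return int(np.argmax(A2))
```

Aligning two waveforms at merger then amounts to time-shifting each so that this index corresponds to the same time.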
We nominally expect the error due to extraction radius \(R_{\text{ext}}\) to fall off as \(1/R_{\text{ext}}\), although we will confirm that in our analysis. The numerical-resolution convergence order is less clear. The spatial finite-differencing in the bulk is sixth-order, but the time-evolution is fourth-order; either may dominate the error budget, depending on the resolution choices and length of the simulation [4; 5]. For both the extraction radius and the numerical resolution, we determine the appropriate convergence order by studying the convergence behaviour of the phase of the \((\ell=2,m=2)\) multipole in the co-precessing frame. We then identify the value of \(n\) for which the quantity, \[\delta=\left(\phi\left(\Delta_{1}\right)-\phi\left(\Delta_{2}\right)\right)- \mathcal{C}\left(\phi\left(\Delta_{2}\right)-\phi\left(\Delta_{3}\right) \right), \tag{10}\] is minimised, where \(\phi\) is the phase of the \((2,2)\) multipole in the co-precessing frame, \(\mathcal{C}=\frac{\Delta_{1}^{n}-\Delta_{2}^{n}}{\Delta_{2}^{n}-\Delta_{3}^{n}}\) as in Eq. (32), and \(\Delta_{i}\) is the variable in the error expansion, i.e., numerical resolution or the inverse extraction radius. The quantity \(\delta(t)\) was calculated over the length of the waveform up to merger and the mean value \(\bar{\delta}=\langle\delta(t)\rangle\) is shown in Fig. 5. This was done for both waveforms of varying resolution and extraction radius for the \(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\) configuration. In calculating the convergence order with respect to varying resolution we used waveforms with \(\Delta_{\{1,2,3\}}=\{1/144,1/120,1/96\}\), while when considering the convergence order with respect to extraction radius we used \(\Delta_{\{1,2,3\}}=\{1/90,1/70,1/60\}\). From the results shown in Fig. 5, we make the conservative conclusion that the code is consistent with fourth order finite-differencing, which implies that the time-stepping dominates the error budget.
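The scan over trial convergence orders can be sketched as follows (hypothetical function name; \(\mathcal{C}\) is built from successive differences under the leading-error model \(\phi(\Delta)=\phi_{\text{true}}+k\Delta^{n}\)):

```python
import numpy as np

def mean_delta(phi1, phi2, phi3, d1, d2, d3, n):
    """Mean convergence residual of Eq. (10) for a trial order n.
    phi_i: phase arrays at expansion-parameter values d_i (d1 smallest)."""
    C = (d1 ** n - d2 ** n) / (d2 ** n - d3 ** n)
    return np.mean((phi1 - phi2) - C * (phi2 - phi3))
```

Scanning `n` over, say, `np.arange(1, 9, 0.5)` and picking the value that minimises the absolute residual reproduces the procedure behind Fig. 5.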
From inspecting the time-dependent \(\delta(t)\) calculated over the length of the waveform, we also find that the spatial differencing (with sixth-order accuracy) dominates the error over the first \(\sim 1000M\) of the waveforms, but the fourth-order-accurate time stepping dominates in the last \(\sim 500M\) before merger, and dominates overall. As expected, we see that the radiation extraction errors fall off as \(1/R_{\text{ext}}\). Assuming these convergence orders, we then calculate the Richardson-extrapolated values of the amplitude, phase, \(\alpha\) and \(\beta\), as functions of time, using Eq. (31) in App. C. We used the resolutions \(N=\{120,144\}\) and the extraction radii \(R_{\text{ext}}=\{80,90\}\) in calculating these Richardson-extrapolated values, which were used to estimate the errors in these quantities. The error in the waveform quantities does not monotonically increase with decreasing resolution since not all of the resolutions lie in the convergence regime. The error in the low and medium resolution waveforms is therefore estimated from the combination of the error between these waveforms and the very high resolution waveform and between the very high resolution waveform and the "true" waveform.

Figure 4: Comparison of coprecessing frame time domain amplitudes for the \(l=m\) modes for \(l\in\{2,3,4,5\}\). The left panel shows CF_7 with initial parameters \((q,\chi_{2},\theta)=(1,0.4,60)\) and the right panel shows CF_79 with parameters \((q,\chi_{2},\theta)=(8,0.8,120)\).

\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{\% Error} \\ & Resolution & Extraction Radius & Total \\ \hline \(\phi\) & 0.08 & 0.4 & 0.4 \\ \(A\) & 2.5 & 0.9 & 2.7 \\ \(\alpha\) & 0.4 & 0.05 & 0.4 \\ \(\beta\) & 0.2 & 0.06 & 0.2 \\ \hline \hline \end{tabular} \end{table}
Table 3: Relative error in the waveform quantities compared with the Richardson extrapolated quantities for the \(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\) configuration.
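The two-point Richardson step used for these extrapolated values can be sketched as follows (a standard textbook form, assuming the leading-error model; the exact expression used is Eq. (31) of Appendix C):

```python
def richardson(q1, q2, d1, d2, n):
    """Two-point Richardson extrapolation to a zero expansion parameter,
    assuming the leading-error model q(d) = q_true + k * d**n."""
    return (d2 ** n * q1 - d1 ** n * q2) / (d2 ** n - d1 ** n)
```

With \(n=4\) and \(d=1/N\) this combines the \(N=120\) and \(N=144\) data; with \(n=1\) and \(d=1/R_{\text{ext}}\) it combines the \(R_{\text{ext}}=80\) and \(90M\) data.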
Since a time shift was performed to align the waveforms at merger, where the phases were then aligned, rather than aligning the frequencies at merger, the phase difference does not show a quadratic fall off to zero but rather tends to a constant value and then falls rapidly at merger. As can be seen from Fig. 6, the dephasing of the waveform due to the finite resolution is \(\sim 0.1\) radians for the medium resolution (\(N=96\)) simulation. Similarly, from Fig. 7, the dephasing due to the finite extraction radius is \(\sim 0.4\) radians for the waveform extracted at \(90M\). Since the simulations comprising the bulk of the catalogue were performed with medium resolution and we recommend using the waveform extracted at \(90M\), these are the key values to focus on. The total phase error in the waveform is therefore estimated to be about 0.4 radians by combining the errors in quadrature. The relative errors for each of the quantities we are interested in are given in Tab. 3. The quantities presented in this table are calculated as follows: the relative error is taken to be the maximum error, found from Richardson extrapolation as described above, divided by the maximum value of the quantity over the length of the waveform. Since we aligned the phases at merger, both the error in the phase and the phase itself are maximum at the start of the waveform. In contrast, both the amplitude and the error in the amplitude peak at merger. We therefore report here the relative error in the peak of the (2,2) multipole in the co-precessing frame. This gives an error around an order of magnitude larger than during the inspiral, where we see a total relative error of 0.1%. The order of the relative error in the precession angles is fairly consistent over the length of the waveform. From the values given in Tab.
3, we can see that the maximum relative error in the amplitude of the co-precessing waveform is of the order of a few percent, while the relative error in the phase and in the precession angles is around half a percent. This is relevant for the production of a tuned precessing model using data from these simulations, since it implies that a model of the precession angles cannot be tuned to an accuracy better than 0.5%. Similar results were seen for the other simulations for which we have multiple resolutions. The errors in the amplitude and the precession angles are affected by the dephasing in the waveform. Therefore, although these results are a good diagnostic for the reliability of the code and a good way to compare accuracy between different simulations performed with the same code, they are difficult to translate into meaningful measures of the accuracy for waveform modelling or other gravitational wave applications. In order to get a more meaningful estimate of the accuracy of the waveform we performed the mismatch analysis presented in the following section.

Figure 5: The value \(\delta\) as given by Eq. (10) as a function of convergence order. The circle markers represent waveforms of differing resolution. The square markers represent waveforms at differing extraction radii. The configuration \(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\) was used in this analysis.

Figure 6: Resolution dependence of the absolute error in the time domain co-precessing phase, relative to the Richardson-extrapolated phase. The phases have been aligned at merger.

Figure 7: Extraction radius dependence of the absolute error in the time domain co-precessing phase, relative to the Richardson-extrapolated phase. The phases have been aligned at merger.

### Matches

The waveform quantities examined in the previous section are the standard quantities used when estimating the convergence order and accuracy of a NR code. While useful when comparing the accuracy between simulations and codes, these accuracy measures are difficult to interpret in gravitational-wave astronomy applications -- the sensitivity of a search, or the accuracy of a measurement of the properties of a binary system. When assessing the accuracy of a waveform it is usually more useful to consider an estimate of the mismatch error. The match between two waveforms is defined to be the standard inner product weighted by the power spectral density of the detector \(S_{n}\left(f\right)\), optimised over various sets of parameters \(\Theta\) [106], \[M\left(h_{1},h_{2}\right)=\,\max_{\Theta}\left[4\text{Re}\int_{f_{\text{min}}}^{ f_{\text{max}}}\frac{\tilde{h}_{1}\left(f\right)\tilde{h}_{2}^{*}\left(f\right)}{S_{n} \left(f\right)}\text{d}f\right], \tag{11}\] where the individual waveforms have been normalised so that \(M(h,h)=1\). We also define the mismatch: \[\mathcal{M}=1-M\left(h_{1},h_{2}\right). \tag{12}\] Since these are precessing configurations, we calculate precessing matches as described in Appendix B of Ref. [100]. In order to see how the match varies over a range of total masses that might be observed by current ground-based detectors, we further calculate the power-weighted match as described in Appendix E, based on the work in Ref. [123], using PhenomPv3 [57] as the model for the low frequency part of the waveform. We then calculate the mismatch as given by Eq. (12). We first extrapolate the mismatch due to finite resolution and extraction radius separately, assuming a particular fall-off in the respective errors. In order to then find the overall mismatch due to _both_ finite resolution and extraction radius (i.e. the mismatch with the infinitely far away, infinitely well resolved "true" waveform) we need to correctly combine these calculations. The motivation for the correct way to combine such errors is sketched out in the following section and given in more detail in Appendix D.
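In practice the maximisation in Eq. (11) over relative time and phase shifts can be carried out with an inverse FFT and an absolute value. The sketch below is schematic (single detector, uniform frequency grid, hypothetical function names) and ignores the precessing-match details of Ref. [100]:

```python
import numpy as np

def match(h1, h2, psd, df):
    """Discrete sketch of Eq. (11): noise-weighted overlap of two
    frequency-domain waveforms on a common grid with spacing df,
    maximised over relative time shift (via the inverse FFT) and
    relative phase (via the absolute value)."""
    def norm(h):
        return np.sqrt(4.0 * df * np.sum(np.abs(h) ** 2 / psd))
    # The ifft evaluates 4*df*sum(h1 h2*/S_n e^{2 pi i f t}) at every time shift.
    overlap_t = 4.0 * df * len(h1) * np.fft.ifft(h1 * np.conj(h2) / psd)
    return np.max(np.abs(overlap_t)) / (norm(h1) * norm(h2))

def mismatch(h1, h2, psd, df):
    """Eq. (12)."""
    return 1.0 - match(h1, h2, psd, df)
```

Production analyses additionally restrict the integration to \([f_{\text{min}},f_{\text{max}}]\) and use a detector noise curve for `psd` rather than the flat array used here for illustration.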
We then perform an independent calculation to confirm the mismatches follow the same convergence relation as the waveform quantities discussed in the previous section, and finally calculate the extrapolated mismatch.

#### iv.3.1 Dependence on expansion parameter and addition of mismatch

In the following we look at how the mismatch behaves with respect to an expansion parameter, e.g., the numerical resolution or the radius at which the gravitational-wave signal is extracted. We then consider the addition of mismatch errors. The calculation below, where the mismatch is expanded in terms of either the amplitude or phase, is a standard calculation, but we discuss it in detail here to help motivate the final result, which is somewhat surprising: although contributions to the error in the amplitude or phase of the signal combine in quadrature, as one might expect (see, for example, Sec. II.A of Ref. [124]), separate mismatches should be added according to Eq. (22) below. To find how the ratio of two matches between waveforms of differing expansion parameter depends on the expansion parameter, we can examine how the match depends on the amplitude and phase of the waveform. From Eq. (11), the match goes as \[M\sim\text{Re}\left[\frac{1}{N_{1}N_{2}}\int h_{1}\left(f\right)h_{2}^{*} \left(f\right)\text{d}f\right], \tag{13}\] where \(N_{i}\) are the normalisations of the two waveforms. We take \(h_{1}\) to be the waveform containing either the amplitude or phase error and \(h_{2}\) to be the "true" waveform-i.e., \[h_{1}\left(f\right)=h_{2}\left(f\right)+\Delta h\left(f\right), \tag{14}\] where \(h_{2}\left(f\right)=A\left(f\right)e^{i\phi\left(f\right)}\), and \(A(f)\) is the real amplitude and \(\phi(f)\) is the phase. We assume the true waveform to be normalised, so \[N_{2}=\,\left[\int\left|h_{2}\left(f\right)\right|^{2}\text{d}f\right]^{ \frac{1}{2}}=\left[\int A^{2}\left(f\right)\text{d}f\right]^{\frac{1}{2}}=1.
\tag{15}\] A waveform containing some amplitude error \(\Delta A\) is given by \[h_{1}\left(f\right)=\,\left(A\left(f\right)+\Delta A\left(f\right)\right)e^{i \phi\left(f\right)}. \tag{16}\] Substituting Eq. (16) into (13) we find \[M \sim\,\left[\int\left(A+\Delta A\right)^{2}\text{d}f\right]^{- \frac{1}{2}}\text{Re}\int A\left(A+\Delta A\right)\text{d}f\] \[=\,\left(1+2b+c\right)^{-\frac{1}{2}}\left(1+b\right)\] \[\simeq 1+\frac{1}{2}\left(b^{2}-c\right), \tag{17}\] where \(b=\int A\Delta A\text{d}f\), \(c=\int\left(\Delta A\right)^{2}\text{d}f\) and we have assumed that \(\Delta A\) is small in order to make the approximation in the final step. The mismatch, as defined in Eq. (12), therefore goes as \(\frac{1}{2}\left(c-b^{2}\right)\), and so is quadratic in the amplitude error. Similarly for a normalised waveform that contains some phase error \(\Delta\phi\), \[h_{1}\left(f\right)=A\left(f\right)e^{i\left(\phi\left(f\right)+\Delta\phi \left(f\right)\right)}, \tag{18}\] where \(\phi\) is the "true" phase. Substituting this expression into Eq. (13) we find \[M \sim\text{Re}\left[\int A^{2}e^{i\Delta\phi}\text{d}f\right]\] \[\simeq\text{Re}\int A^{2}\left(1+i\Delta\phi-\frac{1}{2}\left( \Delta\phi\right)^{2}\right)\text{d}f\] \[=\,\int A^{2}\left(1-\frac{1}{2}\left(\Delta\phi\right)^{2} \right)\text{d}f\] \[=1-\frac{1}{2}\int A^{2}\left(\Delta\phi\right)^{2}\text{d}f, \tag{19}\] where again we have assumed that the error in the phase is small in order to perform the expansion in the middle step. The mismatch is therefore dominated by the square of the phase error. The waveform quantities at finite resolution or extraction radius can be expressed as a Richardson extrapolation of the appropriate expansion parameter (see Appendix C.1). The difference in the phase and amplitude between two waveforms (labelled \(A\) and \(B\)) is therefore equal to the difference of the leading-order error terms (i.e.
\(\Delta q\sim\Delta_{B}^{n}-\Delta_{A}^{n}\), see Appendix C.2 for more detail). Since the mismatch is proportional to the _square_ of the error in these waveform quantities, we find that the convergence relation for the mismatch takes the form \[\mathcal{M}\left(\Delta_{A}:\Delta_{B}\right)=\kappa\left(\Delta_{A}^{n}- \Delta_{B}^{n}\right)^{2}, \tag{20}\] where \(\Delta_{i}\) is the value of the expansion parameter for the \(i\)th waveform and \(\kappa\) is a coefficient to be found.

Figure 8: Mismatch between waveforms at varying resolution against the high resolution (\(N=120\)) waveform as a function of total mass.

Figure 9: Mismatch between waveforms at varying extraction radii and the waveform extracted at \(R_{\text{ext}}=90\)M as a function of total mass.

If one of the waveforms being considered is the "true" waveform and thus contains no numerical error, then the mismatch between any reference waveform and this true waveform will be given by \(\mathcal{M}\left(\Delta_{\text{ref}}:\Delta_{\text{true}}=0\right)=\kappa\Delta_{ \text{ref}}^{2n}\). A similar derivation to the one discussed here is also presented in Ref. [125]. From this we can see that the ratio of the mismatch \(\mathcal{M}\) between two pairs of waveforms is given by \[\frac{\mathcal{M}\left(A:B\right)}{\mathcal{M}\left(B:C\right)}=\frac{\left( \Delta_{A}^{n}-\Delta_{B}^{n}\right)^{2}}{\left(\Delta_{B}^{n}-\Delta_{C}^{n} \right)^{2}}. \tag{21}\] This result will be used in the following sections to study the convergence properties of our numerical-relativity waveforms via their mismatch error. We can also see that the correct way to combine the mismatches between two sets of waveforms \(\mathcal{M}\left(A:B\right)\) and \(\mathcal{M}\left(B:C\right)\) in order to get the mismatch between the final pair \(\mathcal{M}\left(A:C\right)\) is given by \[\mathcal{M}\left(A:C\right)=\left(\sqrt{\mathcal{M}\left(A:B\right)}+\sqrt{ \mathcal{M}\left(B:C\right)}\right)^{2}.
\tag{22}\] A more rigorous proof of this result is presented in Appendix D. As stated above, our main use for this result is to combine the mismatch due to different sources of error in our numerical waveforms. We estimate the mismatch between our waveforms at finite extraction radius and finite resolution and the true waveform using \[\mathcal{M}=\left(\sqrt{\mathcal{M}_{\text{resolution}}}+\sqrt{\mathcal{M}_{ \text{extraction radius}}}\right)^{2}, \tag{23}\] where \(\mathcal{M}_{\text{resolution}}\) is the mismatch due to the finite resolution of the numerical simulation and \(\mathcal{M}_{\text{extraction radius}}\) is the mismatch due to the finite distance from the source at which the waveforms were extracted.

#### iv.3.2 Convergence order

We performed matches between waveforms extracted at \(R_{\text{ext}}=90M\) for the high resolution simulations against all other resolutions available for a given configuration. These results are shown in Fig. 8. We also performed matches between waveforms extracted at \(R_{\text{ext}}=90\)M and all other available extraction radii for the medium resolution simulations for each configuration. These results are shown in Fig. 9. In both of these comparisons, we have calculated the match against a single resolution or extraction radius. We therefore expect that the matches will improve for cases where the values of the resolution or extraction radius are closer to each other. From Fig. 9 we can see that the matches generally follow this trend, implying that it is reasonable to assume the waveform is being extracted sufficiently far from the source that we may be in the convergence regime. This is not true for the mismatches with respect to resolution shown in Fig. 8. The matches between (i) the low and high resolutions and (ii) the medium and high resolutions clearly do not follow any trend for most of the configurations.
From this we can see that it is not reasonable to treat the low and medium resolutions as if they lie within the convergence regime. That the medium resolution does not lie quite within the convergence regime is demonstrated clearly in Fig. 10, where we show the mismatch between the medium and the high and very high resolutions using Eq. (21) for varying convergence order. From this analysis it is clear that the mismatch is closest to being fourth-order convergent. This analysis could only be done for the case (\(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\)) since this is the only case for which we have the very high resolution run. Conversely, since it seems reasonable to assume the waveforms extracted at varying extraction radii mostly lie within the convergence regime, we calculated the ratio of the mismatch between each of the pairs of waveforms from different extraction radii using Eq. (21) for varying convergence order. For each of the four configurations we investigated it was found that the results were most consistent with first order convergence. This is demonstrated in Fig. 11, where the solid lines show the calculated mismatch between two waveforms of different extraction radii and the dotted red line shows the expected value of the match for first order convergence. Not all the waveforms from the different extraction radii show perfect convergence for every configuration. The mismatch between \(R_{\text{ext}}=80\)M and \(R_{\text{ext}}=90\)M often does not follow the trend: we expect this is because the mismatch between these waveforms is so small (\(\mathcal{O}\left(10^{-6}\right)\)) that it is very sensitive to any data processing performed in the course of calculating the match. The mismatch between \(R_{\text{ext}}=50\)M and \(R_{\text{ext}}=90\)M also often does not follow the trend, and we do not expect the convergence relation to hold for small extraction radii.
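The predicted curves in these convergence tests follow from the ratio of Eq. (21): a measured mismatch for one pair of waveforms is rescaled by the ratio of squared leading-order error differences. A minimal helper (hypothetical name):

```python
def predict_mismatch(m_bc, dA, dB, dC, n):
    """Eq. (21): scale a measured mismatch M(B:C) to the predicted
    M(A:B), assuming both pairs lie in the convergence regime."""
    return m_bc * (dA ** n - dB ** n) ** 2 / (dB ** n - dC ** n) ** 2
```

For the extraction-radius test one takes \(d_{i}=1/R_{\text{ext},i}\) and \(n=1\); for resolution, \(d_{i}=1/N_{i}\) and \(n=4\).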
The convergence order calculated using this method agrees with the estimate calculated in the previous section: the code is approximately fourth order convergent with respect to resolution and first order convergent with respect to the extraction radius.

Figure 10: Mismatches demonstrating fourth order convergence of the BAM code with respect to resolution. The solid lines show the calculated mismatch while the dashed lines show the predicted mismatch for varying convergence order. The mismatch was calculated with respect to the \(N=120\) resolution run. This is for the \(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\) configuration.

Figure 11: Mismatches demonstrating first order convergence of the BAM code with respect to extraction radius. The black lines indicated in the legend show the calculated mismatch, while the dotted red line shows the predicted mismatch for the pair of waveforms indicated by the solid line based on the mismatch indicated by the dashed line, assuming first order convergence.

Figure 12: Variation of the mismatch with extraction radius for a system with a total mass of 100M. The mismatches shown are with respect to the waveform extracted at \(R_{\text{ext}}=90\)M. The line shows the fit based on Eq. (20) assuming first order convergence. It is consistent with all extraction radii except at \(R_{\text{ext}}=50\)M.

#### iv.3.3 Extrapolation

Having verified the convergence order of the code \(n\), we can calculate the convergence relation of the mismatches shown in Figs. 8 and 9. We first look at the mismatch due to the finite resolution of the simulation. We assume that the two highest resolution simulations we performed (\(N=120\) and \(N=144\)) lie in the convergence regime but we know the two lower resolution simulations do not. Assuming fourth order convergence, we use Eq.
(20) to calculate \(\kappa_{\text{res}}\) using \[\kappa_{\text{res}}=\frac{\mathcal{M}\left(\Delta_{144}:\Delta_{120}\right)}{ \left(144^{-4}-120^{-4}\right)^{2}}. \tag{24}\] From Fig. 13, which shows the convergence relation for the mismatches calculated for a system with total mass \(100\text{M}_{\odot}\), we can see that this appears to be a reasonable assumption. From \(\kappa_{\text{res}}\) we can estimate the mismatch between the high or very high resolution waveforms and an infinitely well resolved waveform. However, we want to know the mismatch for the medium resolution runs, since this is the resolution that was used to perform the simulations for the catalogue of waveforms presented in Tabs. 1 and 2. Since this resolution does not lie in the convergence regime (and the phase error does not improve monotonically from the medium resolution to the high and very high resolution waveforms) we cannot simply use the calculated convergence relation in order to estimate the mismatch between a waveform at this resolution and the "true" waveform. Instead we use the formula given in Eq. (22) to add the mismatch between the medium resolution and the very high resolution waveforms to the mismatch between the very high resolution waveform and the "true" waveform: \[\begin{split}\mathcal{M}\left(\Delta_{96}:\Delta_{\infty} \right)&=\left(\sqrt{\mathcal{M}\left(\Delta_{96}:\Delta_{144} \right)}+\right.\\ &\qquad\qquad\left.\sqrt{\mathcal{M}\left(\Delta_{144}:\Delta_{ \infty}\right)}\right)^{2}\\ &=\left(\sqrt{\mathcal{M}\left(\Delta_{96}:\Delta_{144}\right)}+ \sqrt{\frac{\kappa_{\text{res}}}{144^{8}}}\right)^{2}.\end{split} \tag{25}\] The result of this extrapolation procedure is shown in Fig. 14. We could only perform this calculation for the case (\(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\)) since this is the only case for which we have the very high resolution run. However, from Fig.
8 we can see that the mismatch between the medium and high resolution runs is the worst for this case, so this estimate should give an upper bound for the mismatch between the medium resolution run and the "true" waveform. Fig. 14 shows the projected mismatch between a medium resolution waveform and one that is infinitely well resolved for a range of total masses. The maximum mismatch between a medium resolution waveform and an infinitely well resolved one is \(6.0\times 10^{-4}\). We next examine the mismatch due to the finite distance from the source at which the waveform is extracted. To calculate the first order convergence relation with respect to the extraction radius, we performed a fit through each of the mismatches which were found to follow the convergence relation. This is demonstrated for mismatches between waveforms of different extraction radii and the waveform at \(R_{\text{ext}}=90\)M in Fig. 12, for a system with total mass \(100\text{M}_{\odot}\). This fit gives the value of \(\kappa_{\text{ext}}\) for every value of the total mass of the system. From this we can calculate the mismatch between the waveform at \(R_{\text{ext}}=90\)M and the "true" waveform from \(\mathcal{M}\left(\Delta_{90}:\Delta_{\infty}\right)=\frac{\kappa_{\text{ext}}} {90^{2}}\). The mismatch between the waveform extracted at \(R_{\text{ext}}=90\)M and the "true" waveform is shown in Fig. 15. The configuration that gives the greatest mismatch is \(q=4\), \(\chi=0.4\), \(\theta_{\text{LS}}=60^{\circ}\) since, as noted above, this simulation was much longer than the others and so has greater opportunity to accumulate phase error between the two waveforms. The maximum mismatch between a waveform at \(R_{\text{ext}}=90\)M and at \(R_{\text{ext}}\rightarrow\infty\) is taken to be \(1.4\times 10^{-3}\). We estimate the mismatch between our medium resolution waveform extracted at \(R_{\text{ext}}=90\)M and the true waveform using Eq. (23).
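As a worked sketch of this extrapolation chain, with \(\Delta=1/N\): the resolution-mismatch input below is hypothetical (the actual values are mass-dependent, cf. Fig. 13), while the two endpoint mismatches are the values quoted above.

```python
import math

# Eq. (24): coefficient of the resolution convergence relation, n = 4.
m_144_120 = 2.0e-5          # hypothetical measured mismatch, N=144 vs N=120
kappa_res = m_144_120 / ((1 / 144) ** 4 - (1 / 120) ** 4) ** 2

# Mismatch of the very high resolution run against the "true" waveform:
# M(Delta_144 : 0) = kappa_res * (1/144)**(2*n)
m_144_inf = kappa_res * (1 / 144) ** 8

# Eq. (23): combine the quoted endpoint mismatches via their square roots.
m_resolution = 6.0e-4       # medium resolution vs. "true" waveform (Fig. 14)
m_extraction = 1.4e-3       # R_ext = 90M vs. infinite radius (Fig. 15)
m_total = (math.sqrt(m_resolution) + math.sqrt(m_extraction)) ** 2
# m_total is ~3.8e-3, the conservative bound quoted for the catalogue
```

Note that the two contributions add via their square roots, per Eq. (22), rather than linearly or in quadrature.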
A conservative estimate of the mismatch between a waveform extracted at a finite distance of 90M from the source for a simulation performed with a grid spacing \(d=0.0104\) and the theoretical 'analytical' solution is therefore \(3.83\times 10^{-3}\). This provides a limit on the mismatch error of the waveforms presented in this catalogue of 0.004, or 0.4%.

Figure 13: Variation of the mismatch with resolution for a \(q=8\), \(\chi=0.8\), \(\theta_{\text{LS}}=150^{\circ}\) system with a total mass of 100M. The mismatches shown are with respect to the high resolution run and so the mismatch is zero at \(N=120\). The line shows the relation in Eq. (20) with \(\kappa_{\text{res}}\) assuming fourth order convergence calculated using Eq. (24) and is consistent with all resolutions except the lowest one at \(N=80\).

Figure 14: Projected mismatch between a waveform simulated at a resolution of \(N=96\) and one that is infinitely well resolved.

Figure 15: Projected mismatch between a waveform extracted at a radius of \(R_{\text{ext}}=90\)M and one extracted infinitely far away.

## V Catalogue comparison

A number of numerical relativity groups have started building larger and more comprehensive catalogues that span a growing region of the parameter space. At time of publication, there are a total of 4,352 publicly available BBH simulations in a combination of the Simulating eXtreme Spacetimes (SXS) Collaboration [82, 83], Rochester Institute of Technology (RIT) [84, 85, 86, 87] and Maya [88] catalogues. The SXS collaboration has produced the largest catalogue to date with 2,019 BBH simulations spanning \(1\leq q\leq 10\) and \(0\leq\chi\leq 1\). The RIT catalogue contains 1,881 BBH simulations covering \(1\leq q\leq 128\) and \(0\leq\chi\leq 0.99\) and the Maya catalogue contains 452 unique BBH waveforms from more than 600 BBH simulations ranging between \(1\leq q\leq 15\) and \(0\leq\chi\leq 0.8\).
Unlike the simulations presented here, the SXS, RIT and Maya catalogues all contain simulations where the individual black hole spins can be zero or perfectly aligned/anti-aligned with the orbital angular momentum. Considering only the precessing parameter space, the SXS collaboration has produced 1,429 simulations spanning \(1\leq q\leq 6\) and \(0\leq\chi\leq 0.99\), the RIT catalogue contains 561 simulations covering \(1\leq q\leq 15\) and \(0\leq\chi\leq 0.99\) and the Maya catalogue contains 315 waveforms ranging between \(1\leq q\leq 8\) and \(0.1\leq\chi\leq 0.8\). Fig. 16 compares the parameter space coverage of the simulations presented here with the existing non-zero-spin simulations included in the SXS, RIT and Maya catalogues, over the space of mass ratio and the spin tilt angle and spin magnitude of the larger black hole. We see that although the existing catalogues provide good coverage for high black hole spins \(\chi\geq 0.5\) and near equal mass ratios, there is a dearth of existing precessing simulations for low black hole spins \(\chi<0.25\) and unequal mass ratios \(q\gtrsim 4\). Recent gravitational wave observations [see e.g. 52, 53] have shown a need for BBH simulations in this low black hole spin and unequal mass ratio region of the parameter space in order to build reliable and accurate waveform models for use in Bayesian inference. The uniform coverage of the single-spin space up to \(q=8\) has made it possible to construct an accurate generic precessing-binary model for future observations [20]. Most astrophysical models suggest that BBH at larger mass ratios will be rare (e.g., Ref. [126]), but given that there has been one observation to date at \(q\sim 10\) [53], extension of this parameter-space coverage to yet higher mass ratios will be necessary in the future.

Figure 16: Comparison between the parameters of the new BBH simulations presented here (CF) and the existing BBH simulations in the SXS, RIT and Maya catalogues.
The _Top left_ spin disk shows simulations with spin on the larger black hole \(0<\chi<0.25\), _Top right_ \(0.25\leq\chi<0.5\), _Bottom left_ \(0.5\leq\chi<0.75\) and _Bottom right_ \(\chi\geq 0.75\). The radius of each disk shows the mass ratio of the binary and the orientation shows the spin tilt angle of the larger black hole. A spin tilt angle of \(90^{\circ}\) means that the spin vector lies in the binary's orbital plane.

## VI Discussion

We have produced a catalogue containing 80 waveforms from single-spin precessing systems with mass ratios up to \(q=8\), dimensionless spin magnitudes up to \(\chi=0.8\) and a range of spin inclination angles. In all cases the spin was placed on the larger black hole. We estimate our uncertainty in the masses of the initial black holes to be \(\mathcal{O}(0.05\%)\). We estimate the uncertainty in the initial spin magnitude to be \(\mathcal{O}(10^{-3})\) while the uncertainty in the initial spin inclination is \(\mathcal{O}(1^{\circ})\). Similarly, we obtain estimates of the uncertainty of the remnant properties reported in this paper. We find the final mass has an uncertainty of \(5\times 10^{-4}\), while the final spin magnitude is accurate to within \(5\times 10^{-3}\). The starting frequency of the simulations was chosen such that the simulations were all around a similar length (\(\sim 2000M\)) in order to limit the dephasing in the waveform and thus ensure sufficient accuracy throughout the evolution of the binary. We performed a careful analysis of the errors due to the finite resolution of the simulations and due to the finite distance from the source at which the waveforms were extracted. From this we were able to provide a conservative estimate of the mismatch uncertainty of our waveforms of 0.4%. The catalogue presented here is sufficient to capture a wide range of single-spin precession effects. 
Most notably, the systems contained within it have a non-zero opening angle of the precession cone ranging from \(\sim 1^{\circ}\) to \(\sim 115^{\circ}\). The cases with the largest opening angles display the most dominant precession effects. In particular, for initial configurations with mass ratios \(q\sim 8\), high spins and large spin inclination angles, the final spin will be in the opposite direction to the binary's angular momentum prior to merger, thus producing a "negative" final spin. The majority of precessing simulations in other catalogues do not extend beyond \(q=5\); consequently, this region of parameter space is poorly covered by NR simulations. Indeed, in this catalogue, despite having 20 precessing simulations at \(q=8\), we see only two cases with a negative final spin (CF_75 and CF_80). The phenomenology of this region has therefore not yet been thoroughly explored and a more detailed study is planned for future work. Finally, we also see a wide range of recoil velocities for the configurations included in this catalogue, with the highest values seen for equal mass systems. We also see the greatest range of values for equal mass systems, depending on the in-plane spin angle. For \(q=8\) systems we see much lower values in general across all cases. While this catalogue was sufficient to produce the first inspiral-merger-ringdown (IMR) model of precessing systems tuned to NR, PhenomPNR, it will need to be greatly expanded in order to meet modelling requirements of future gravitational wave observations. Existing catalogues (such as the SXS, Maya and RIT catalogues) provide a comprehensive coverage of the two-spin precessing parameter space up to \(q=4\). This catalogue provides a systematic coverage of the single-spin precessing parameter space up to \(q=8\). 
However, while it uses a consistent in-plane spin direction at the starting frequency (the initial configurations all have the in-plane spin component along the binary's separation vector), these will translate into quite different spin directions at merger. Consequently, for any modelling that includes effects due to the in-plane spin direction, this catalogue contains an incomplete and possibly random sampling of points. There are many directions in which this catalogue can be expanded: to include higher mass ratios, to comprehensively cover rotations of the in-plane spin component, to include two-spin systems, to produce longer and more accurate waveforms, and to include binaries on eccentric orbits. Since the production of these simulations is expensive (the catalogue presented here is estimated to have required around 25 million CPU hours in total) and generic modelling at higher mass ratios and for longer waveforms is not a completely solved problem, it is an open question as to which direction in parameter space is most urgent. Assuming the mass ratio distribution reported in [126], we estimate that only 1.3% of observed binaries will have \(q>8\). This is supported by gravitational wave detections so far since, out of the 90 binaries reported by the LVK collaborations [44], only one has been found to have a mass ratio clearly greater than 8 [53]. Similarly, from the production of PhenomPNR, we know that we will require longer waveforms for binaries with higher mass ratios and spins, particularly those with a spin inclination angle of \(\theta_{\rm LS}>90^{\circ}\), since inaccuracies in post-Newtonian (PN) estimates of precession effects become more appreciable towards lower frequencies in this region of the parameter space. For the existing catalogue, the cases CF_79 and CF_80 (at \(q=8\) and \(\chi=0.8\)) are already sufficiently short to limit model accuracy and we anticipate this will continue for decreasing spin magnitude as we go to higher mass ratios [20]. 
This is therefore a smaller fraction of binaries than the simple requirement to extend to high mass ratios. Similarly, we expect to see two-spin effects in signals with SNRs greater than 100 [127; 58]. Taking the detection threshold to be SNR 10, we expect to be able to identify two-spin effects in \(\sim 0.1\%\) of detections. We have not considered eccentricity here, but eccentric modelling and NR simulations have been discussed in other works [128; 129; 130; 131; 132; 133; 101]. From this, we conclude that the most urgent extension is required to systems with higher mass ratios. This is closely followed by a systematic sampling that will explore the most dominant physical effects of two-spin systems (such as those that will impact the opening angle of the precession cone at merger) or the in-plane direction (such as the recoil velocity).

## VII Acknowledgements

We would like to thank Steve Fairhurst, Frank Ohme, Vivien Raymond for many useful discussions; Kieran Philips for extensive optimisation of the BAM code that resulted in a greater than 30% increase in speed and a much lower memory footprint, and also for his work in testing the code on the Oracle Cloud Infrastructure (OCI); Paul Hopkins for wide-ranging cluster support and also assistance in setting up OCI runs; Phil Bates at Oracle for initiating the OCI work and to Phil and his team for extensive technical support and guidance. The authors were supported in part by Science and Technology Facilities Council (STFC) grant ST/V00154X/1 and European Research Council (ERC) Consolidator Grant 647839. E. Hamilton was supported in part by Swiss National Science Foundation (SNSF) grant IZCOZ0-189876. L. London was supported at Massachusetts Institute of Technology (MIT) by National Science Foundation Grant No. PHY-1707549 as well as by support from MIT's School of Science and Department of Physics. A. Vano-Vinuales thanks FCT for financial support through Project No. UIDB/00099/2020. 
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. Additionally, this research was undertaken using the supercomputing facilities at Cardiff University operated by Advanced Research Computing at Cardiff (ARCCA) on behalf of the Cardiff Supercomputing Facility and the HPC Wales and Supercomputing Wales (SCW) projects. We acknowledge the support of the latter, which is part-funded by the European Regional Development Fund (ERDF) via the Welsh Government. This work was also supported in part by Oracle Cloud credits and related resources provided by the Oracle for Research program.

## Appendix A Iterative initial data construction for single spin precessing systems

The initial data construction method of [15] for aligned-spin systems is extended to precessing systems using the following iterative brute-force algorithm. A sequence of input parameter sets \[\boldsymbol{\theta}_{i}\equiv\left(q_{i}^{D},\boldsymbol{S}_{1,i}^{D}, \boldsymbol{S}_{2,i}^{D}\right)=\left(q,\boldsymbol{0},S_{2}\hat{\boldsymbol {t}}_{i}\right), \tag{26}\] will be iteratively refined, defined at a user-specified separation of \(D_{\text{start}}\) and used as initial conditions for the EOB solver. The single-spin orientation unit vector \(\hat{\boldsymbol{t}}\) can be expressed in terms of angles \(\theta\) and \(\phi\) as \[\hat{\boldsymbol{t}}_{i}=\big(\cos(\phi_{i})\sin(\theta_{i}), \sin(\phi_{i})\sin(\theta_{i}),\cos(\theta_{i})\big). \tag{27}\] The EOB simulations are started with both component black holes placed on the \(x\)-axis and the orbital angular momentum parallel to the \(z\)-axis. For this work the black holes are placed \(\sim 40M\) apart. 
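As a quick numerical check of this parametrisation (assuming the standard spherical convention, with \(\theta\) the tilt from the orbital angular momentum and \(\phi\) the in-plane azimuth; the function name below is purely illustrative):

```python
import math

def spin_unit_vector(theta, phi):
    # theta: spin tilt from the orbital angular momentum (z-axis),
    # phi: azimuth in the orbital plane (standard spherical convention).
    return (math.cos(phi) * math.sin(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(theta))

# A tilt of 90 degrees places the spin entirely in the orbital plane.
t_hat = spin_unit_vector(math.radians(90.0), math.radians(30.0))
norm = math.sqrt(sum(c * c for c in t_hat))
```

By construction the vector has unit norm for any \((\theta,\phi)\), and its \(z\)-component is \(\cos\theta\), so \(\theta_{\rm LS}=90^{\circ}\) gives a purely in-plane spin.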
Each successive set of parameters is chosen based on the EOB evolution of previous parameter sets in the sequence. This is repeated until the EOB evolution results in the required parameters \((q,\boldsymbol{S}_{1},\boldsymbol{S}_{2})\) at \(M\Omega_{\text{orb}}\), using the following algorithm.

1. **Initial candidate parameters at \(D_{\text{start}}\) (\(n=0\)).** \(\phi_{0}\) is chosen to be the target azimuthal spin angle \(\phi\). The EOB solver is then run until \(M\Omega_{\text{orb}}\) is reached and the azimuthal spin angle at that time is recorded as \(\phi_{\omega,0}\). The EOB spin dynamics are explored in the region around \(M\Omega_{\text{orb}}\) to find the closest time at which the spin angle is equal to \(\phi\). The difference in frequency \(\Delta M\omega_{0}\) between this time and \(M\Omega_{\text{orb}}\) is recorded. If \(|\Delta M\omega_{0}|\leqslant M\omega_{\text{tol}}\), where \(M\omega_{\text{tol}}\) is a user-specified tolerance, then the algorithm stops. For the initial data generated in this work the percentage error tolerance of the orbital frequency is specified to be \(M\omega_{\text{tol}}=1\%\). If \(|\Delta M\omega_{0}|>M\omega_{\text{tol}}\) then proceed to the next step.
2. **Second candidate parameters at \(D_{\text{start}}\) (\(n=1\)).** \(\phi_{1}\) is chosen to be the difference between the target azimuthal spin angle and the azimuthal spin angle of the EOB spin dynamics at \(M\Omega_{\text{orb}}\), given by \(\phi_{c}\equiv\phi-\phi_{\omega,0}\). The EOB spin dynamics are explored again, recording \(\phi_{\omega,1}\) at \(M\Omega_{\text{orb}}\) and calculating \(|\Delta M\omega_{1}|\) in the same way as \(|\Delta M\omega_{0}|\) in the previous step. If \(|\Delta M\omega_{1}|\leqslant M\omega_{\text{tol}}\) the algorithm stops, otherwise proceed to the next step.
3. **Third candidate parameters at \(D_{\text{start}}\) (\(n=2\)).** Set \(\Delta\phi\) to be \(10^{\circ}\) if \(||\Delta M\omega_{1}|-M\omega_{\text{tol}}|>M\omega_{\text{tol}}/2\), otherwise set \(\Delta\phi\) to be \(5^{\circ}\). \(\phi_{2}\) is chosen to be \(\phi_{c}+\Delta\phi\). \(\phi_{\omega,2}\) and \(|\Delta M\omega_{2}|\) are calculated in the same way as in the previous steps. If \(|\Delta M\omega_{2}|\leqslant M\omega_{\text{tol}}\) the algorithm stops, otherwise proceed to the next step.
4. **Further candidate parameters at \(D_{\text{start}}\) (\(n>2\)).** Set \(\Delta\phi\) to be \(10^{\circ}\) if \(||\Delta M\omega_{n-1}|-M\omega_{\text{tol}}|>M\omega_{\text{tol}}/2\), otherwise set \(\Delta\phi\) to be \(5^{\circ}\). If \(\phi_{\omega,2}>\phi_{\omega,1}\) this indicates that the azimuthal spin angle is being rotated in the wrong direction. As such, if \(\phi_{\omega,2}>\phi_{\omega,1}\) then \(\phi_{n}=\phi_{c}+(2-n)\Delta\phi\), otherwise set \(\phi_{n}=\phi_{c}+(n-1)\Delta\phi\). \(|\Delta M\omega_{n}|\) is calculated in the same way as in the previous steps. If \(|\Delta M\omega_{n}|\leqslant M\omega_{\text{tol}}\) the algorithm stops, otherwise this step is repeated until the inequality is satisfied.

Once the required tolerance is met and the algorithm stops, the position, linear momentum and spin of each black hole are taken from the EOB dynamics at \(M\Omega_{\text{orb}}\) and used as input for the Bowen-York initial data solver.

## Appendix B NR simulations with cloud computing

For this work we have run individual BAM simulations on up to 512 processors, and these require fast inter-processor connections to ensure that inter-processor communication is not the dominant limitation on the calculation speed. 
This is typical for NR codes, and as such these are usually run on clusters that have been constructed primarily for highly parallelised high-performance computing (HPC), such as the DiRAC Cosma clusters that were used for the majority of the runs presented here. An increasingly popular source of computing resources is cloud services. These have historically been set up with large numbers of independent (high-throughput computing) applications in mind. However, recently some services have improved the speed of inter-processor communication, with the goal of making cloud computing services competitive for HPC. As part of the NR work presented here, we investigated the performance of BAM on the Oracle Cloud Infrastructure (OCI). We performed a series of experiments to determine the optimal performance we could achieve with the hardware available at the time (2018). These tests used a bespoke "bare metal" setup and ran at about 60% of the speed achieved on the DiRAC cosma5 cluster. (Note that since these tests were done, both the cosma clusters and the cores used at OCI have been superseded by newer hardware.) We also completed a set of production simulations: these were the five NR simulations at mass-ratio \(q=2\) and black-hole spin \(S_{2}/m_{2}=0.6\) (CF_31-CF_35). Each run was performed on 128 cores and required approximately 140,000 CPU hours. The production simulations used the "cluster-in-the-cloud" infrastructure [134] to create container-based cluster instances using OCI Terraform on AMD64 128-core BM.Standard.E2.64 nodes, and ran at an average of 1.8 \(M\)/hr. Similar runs on cosma6 ran at about 3.7 \(M\)/hr on double the number of cores. These results suggest that, in terms of computational cost and efficiency, cloud-based resources have the potential to be competitive with standard clusters. 
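As a rough figure of merit for the numbers quoted above, the evolution speeds can be normalised by core count (this ignores differences in hardware generation and per-core pricing, so it is only indicative):

```python
# From the text: OCI production runs reached 1.8 M/hr on 128 cores,
# while comparable cosma6 runs reached ~3.7 M/hr on 256 cores.
oci_per_core = 1.8 / 128      # M per core-hour on OCI
cosma_per_core = 3.7 / 256    # M per core-hour on cosma6
ratio = oci_per_core / cosma_per_core
```

The per-core-hour throughputs agree to within a few per cent, which is what underlies the conclusion that cloud resources can be cost-competitive with standard clusters.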
## Appendix C Convergence estimates and Richardson extrapolation

### Richardson Extrapolation

A quantity \(q\) calculated at finite resolution or extraction radius can be given by \[q^{*}=q\left(\Delta\right)+e_{i}\Delta^{i}, \tag{28}\] where \(\Delta\) is the expansion parameter (\(\frac{1}{N}\) for resolution or \(\frac{1}{R_{\text{ext}}}\) for extraction radius), \(e_{i}\) is the finite order error and \(i\) is the order at which the error contributes. In this paper the quantity \(q\) we are considering is the waveform extracted from the numerical simulation. We therefore have that \[q^{*}=q\left(\Delta\right)+e_{n}\Delta^{n}+\mathcal{O}\left(\Delta^{n^{\prime }}\right), \tag{29}\] where \(e_{n}\Delta^{n}\) is the leading order error contribution, \(n\) is the convergence order of the simulation and \(n^{\prime}>n\). Considering two waveforms computed using different expansion parameters \(\Delta_{1}\) and \(\Delta_{2}\), we can solve the two simultaneous equations that arise from Eq. (29) to give \[q^{*}=R\left(\Delta_{1},\Delta_{2}\right)+\mathcal{O}\left(\Delta_{1}^{n^{\prime}}\right), \tag{30}\] where \[R\left(\Delta_{1},\Delta_{2}\right)=\frac{\left(\frac{\Delta_{1}}{\Delta_{2}} \right)^{n}q\left(\Delta_{2}\right)-q\left(\Delta_{1}\right)}{\left(\frac{ \Delta_{1}}{\Delta_{2}}\right)^{n}-1}, \tag{31}\] is the _Richardson extrapolation_ [115, 135] of \(q\left(\Delta_{1}\right)\). \(R\left(\Delta_{1},\Delta_{2}\right)\) has a higher-order truncation error in \(\Delta\) than \(q\left(\Delta_{1}\right)\). \(R\left(\Delta_{1},\Delta_{2}\right)-q\left(\Delta_{1}\right)\) gives the truncation error of the quantity \(q\).

### Convergence

Considering now three waveforms computed with expansion parameters \(\Delta_{1}>\Delta_{2}>\Delta_{3}\), we can eliminate \(q^{*}\) from the expansions of Eq. (29). 
Neglecting higher order error terms, the ratio of the differences between successive pairs of numerical waveforms with expansion parameters \(\Delta_{1}>\Delta_{2}>\Delta_{3}\) is then given by \[\mathcal{C}=\frac{q\left(\Delta_{1}\right)-q\left(\Delta_{2}\right)}{q\left( \Delta_{2}\right)-q\left(\Delta_{3}\right)}=\frac{\Delta_{1}^{n}-\Delta_{2}^{ n}}{\Delta_{2}^{n}-\Delta_{3}^{n}}. \tag{32}\] This relation holds for features of a waveform, such as its amplitude and phase, but not for derived quantities such as the match. To understand how the match between a set of waveforms in a convergence series varies, consider a detector response derived from gravitational wave strain solutions of a finite difference approximation numerical relativity code. This can be represented by a _Richardson expansion_ [115, 135] as a power series in an expansion parameter \(\Delta\). Consider two detector responses, \(h_{1}\) and \(h_{2}\), at two resolutions with expansion parameters \(\Delta_{1}\) and \(\Delta_{2}\) respectively. As seen in Eq. (29), for an \(n^{\text{th}}\) order accurate finite difference method these can be represented by their truncated Richardson expansions \[h_{1}(f) =h(f)+e_{n}(f)\Delta_{1}^{n}, \tag{33}\] \[h_{2}(f) =h(f)+e_{n}(f)\Delta_{2}^{n},\] (34) \[=h_{1}(f)+e_{n}(f)(\Delta_{2}^{n}-\Delta_{1}^{n}), \tag{35}\] where \(h(f)\) is the detector response of the exact solution, and \(e_{n}(f)\) is the leading order error function. The match between these two detector responses can be expanded in the expansion parameter (\(\Delta_{2}^{n}-\Delta_{1}^{n}\)). 
Utilising linearity in the inner product, we have \[M(h_{1},h_{2}) =\max_{\Theta}\left[\frac{\langle h_{1}|h_{2}\rangle}{\|h_{1}\|\,\|h_{2}\|}\right],\] \[=\max_{\Theta}\left[\frac{\|h_{1}\|^{2}+(\Delta_{2}^{n}-\Delta_{1}^{n})\langle h_{1}|e_{n}\rangle}{\|h_{1}\|\left(\|h_{1}\|^{2}+2(\Delta_{2}^{n}-\Delta_{1}^{n})\langle h_{1}|e_{n}\rangle+(\Delta_{2}^{n}-\Delta_{1}^{n})^{2}\|e_{n}\|^{2}\right)^{1/2}}\right],\] \[\approx\max_{\Theta}\left[1-\frac{1}{2}\left(\frac{\|e_{n}\|^{2}}{\|h_{1}\|^{2}}-\left(\frac{\langle h_{1}|e_{n}\rangle}{\|h_{1}\|^{2}}\right)^{2}\right)(\Delta_{2}^{n}-\Delta_{1}^{n})^{2}\right],\] \[\approx 1-\kappa(\Delta_{2}^{n}-\Delta_{1}^{n})^{2}, \tag{36}\] where between the second and third lines we have performed a binomial expansion of the denominator and terms of higher order in the expansion parameter are dropped between steps. The constant coefficient \(\kappa\) is defined as \[\kappa\equiv\min_{\Theta}\left[\frac{1}{2}\left(\frac{\|e_{n}\|^{2}}{\|h\|^{2}}-\frac{\langle h|e_{n}\rangle^{2}}{\|h\|^{4}}\right)\right], \tag{37}\] where we have used Eq. (33) to re-write \(h_{1}\) in terms of \(h\) and again neglected terms of higher order in the expansion parameter. \(\kappa\) can be seen to be bounded below by zero from the Cauchy-Schwarz inequality. The mismatch may then be approximated as \[\mathcal{M}(h_{1},h_{2})=\kappa(\Delta_{2}^{n}-\Delta_{1}^{n})^{2}. \tag{38}\] It is important to note that the leading order expansion parameter terms in the approximation Eq. (38) are quadratic in the expansion parameters. In addition, while it is likely that the leading order coefficient \(\kappa\) cannot be calculated directly, it is independent of any resolution-specific expressions. As such \(\kappa\) is constant within any convergence series. 
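The working relations of this appendix — the extrapolant Eq. (31), the convergence ratio Eq. (32) and the quadratic mismatch scaling Eq. (38) — can be sanity-checked with synthetic data carrying a pure \(n^{\text{th}}\)-order error term (all numbers below are illustrative, not taken from the simulations):

```python
import math

n = 4                                    # assumed convergence order

def richardson(q1, q2, d1, d2):
    # Eq. (31): extrapolate q(d1) to zero expansion parameter using q(d2).
    r = (d1 / d2) ** n
    return (r * q2 - q1) / (r - 1.0)

# Scalar quantity with a pure n-th order error: extrapolation is exact.
q_exact, e = 0.25, 3.0
q = lambda d: q_exact + e * d ** n
d1, d2, d3 = 1 / 80, 1 / 96, 1 / 120     # spacings mimicking N = 80, 96, 120
extrapolated = richardson(q(d1), q(d2), d1, d2)

# Eq. (32): the ratio of successive differences fixes the convergence order.
measured_ratio = (q(d1) - q(d2)) / (q(d2) - q(d3))
expected_ratio = (d1 ** n - d2 ** n) / (d2 ** n - d3 ** n)

# Mismatch scaling, Eq. (38), with toy "waveform" vectors.
def mismatch(u, v):
    # 1 - normalised inner product; no optimisation over time/phase here.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

h_exact = [1.0, 0.2, -0.5]
err = [0.0, 0.1, 0.3]                    # leading-order error "function"
# Use a lower order (2) and coarse spacings so the mismatches are resolvable.
h_at = lambda d: [x + c * d ** 2 for x, c in zip(h_exact, err)]
xa = 0.2 ** 2 - 0.1 ** 2
xb = 0.3 ** 2 - 0.1 ** 2
scaling = mismatch(h_at(0.3), h_at(0.1)) / mismatch(h_at(0.2), h_at(0.1))
```

The last ratio approaches \((x_b/x_a)^2\), the prediction of Eq. (38) with a common \(\kappa\), up to the neglected higher-order terms.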
This leads to the following two results, describing ratios of mismatches in convergence series and the combination of mismatches in convergence series, \[\frac{\mathcal{M}(h_{1},h_{2})}{\mathcal{M}(h_{2},h_{3})} =\frac{(\Delta_{1}^{n}-\Delta_{2}^{n})^{2}}{(\Delta_{2}^{n}- \Delta_{3}^{n})^{2}}, \tag{39}\] \[\mathcal{M}(h_{1},h_{3}) =\left(\sqrt{\mathcal{M}(h_{1},h_{2})}+\sqrt{\mathcal{M}(h_{2},h _{3})}\right)^{2}. \tag{40}\] Eq. (40) holds generally, not just for the case of mismatches between waveforms in a convergence series. This is shown in Appendix D.

## Appendix D Addition of mismatches

Consider three waveforms, \(h_{1}\), \(h_{2}\), \(h_{3}\), which are all normalised, \(|h_{1}|=|h_{2}|=|h_{3}|=1\). If \(\langle h_{1}|h_{2}\rangle=A\), and \(\langle h_{2}|h_{3}\rangle=B\), we would like to estimate an upper bound on \(C=\langle h_{1}|h_{3}\rangle\). We write each waveform with reference to one of the others. Choose \(h_{2}\), since that is our "middle" waveform. We can write, \[h_{1} =Ah_{2}+\sqrt{1-A^{2}}h_{2\perp}, \tag{41}\] \[h_{3} =Bh_{2}+\sqrt{1-B^{2}}h_{2\perp}^{\prime}. \tag{42}\] Both \(h_{2\perp}\) and \(h_{2\perp}^{\prime}\) are orthogonal to \(h_{2}\), but are not necessarily the same waveform, and the weights are chosen to ensure that all waveforms are normalised. We can also write this as \(A=\cos(\theta_{12})\) and \(B=\cos(\theta_{23})\), and we therefore have, \[h_{1} =\cos(\theta_{12})h_{2}+\sin(\theta_{12})h_{2\perp}, \tag{43}\] \[h_{3} =\cos(\theta_{23})h_{2}+\sin(\theta_{23})h_{2\perp}^{\prime}, \tag{44}\] and so, \[\langle h_{1}|h_{3}\rangle=\cos(\theta_{12})\cos(\theta_{23})+\sin(\theta_{12})\sin(\theta_{23})\langle h_{2\perp}|h_{2\perp}^{\prime}\rangle. \tag{45}\] If the two orthogonal contributions are the same, then the combined match will be \(\cos(\theta_{12}-\theta_{23})\), which is the best match we can have; if \(A=B\) then \(C=1\), i.e., \(h_{1}=h_{3}\). 
Alternatively, if \(\langle h_{2\perp}|h_{2\perp}^{\prime}\rangle=0\), then \(C=AB\). In general we are interested in cases where all matches are close to unity, and so if \(A=1-\mathcal{M}_{12}\) and \(B=1-\mathcal{M}_{23}\), where \(\mathcal{M}_{12}\) and \(\mathcal{M}_{23}\) are the mismatches, then we will have \(C\approx 1-\mathcal{M}_{12}-\mathcal{M}_{23}\), i.e., the mismatches add linearly. The worst combined match, and therefore the upper bound on the combined mismatch, occurs when \(\langle h_{2\perp}|h_{2\perp}^{\prime}\rangle=-1\), which can be thought of geometrically as \(h_{1}\) and \(h_{3}\) differing from \(h_{2}\) in opposite directions, and so we must add their differences to calculate the combined difference. In this case, we have \(C=\cos(\theta_{12}+\theta_{23})\). This allows the extreme case where \(h_{1}=(h_{2}+h_{2\perp})/\sqrt{2}\) and \(h_{3}=(h_{2}-h_{2\perp})/\sqrt{2}\), and in this case we have \(A=B=1/\sqrt{2}\), so the two waveforms are "equally far apart", but the combined match is \(C=0\), and so \(h_{1}\) and \(h_{3}\) are orthogonal to each other. For the situations we are interested in, where the mismatches are small, we recall that \(\cos(\theta)\approx 1-\theta^{2}/2\), and so the mismatches can be approximated as \[\mathcal{M}_{12} =\frac{1}{2}\theta_{12}^{2}, \tag{46}\] \[\mathcal{M}_{23} =\frac{1}{2}\theta_{23}^{2},\] (47) \[\mathcal{M}_{13} =\frac{1}{2}(\theta_{12}+\theta_{23})^{2}, \tag{48}\] and therefore \[\mathcal{M}_{13}\approx\left(\sqrt{\mathcal{M}_{12}}+\sqrt{\mathcal{M}_{23}} \right)^{2}. \tag{49}\] This is not strictly an upper bound on the mismatch that we calculate, because we also optimise over time and phase shifts; this optimisation is not captured in our calculation and will have a different effect on each individual match. However, we have found in toy examples that Eq. (49) provides an excellent estimate of the combined mismatch. 
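The geometry above is easy to realise with explicit two-component vectors standing in for normalised waveforms; the following checks Eq. (45) in the worst-case orientation and the small-angle addition rule Eq. (49) (no time/phase optimisation is performed):

```python
import math

def mismatch(u, v):
    # 1 - normalised inner product of two real "waveforms".
    dot = u[0] * v[0] + u[1] * v[1]
    return 1.0 - dot / (math.hypot(*u) * math.hypot(*v))

t12, t23 = 0.05, 0.03                  # small angles, in radians
h2 = (1.0, 0.0)
h1 = (math.cos(t12), math.sin(t12))    # differs from h2 by angle t12
h3 = (math.cos(t23), -math.sin(t23))   # differs in the opposite direction

m12, m23, m13 = mismatch(h1, h2), mismatch(h2, h3), mismatch(h1, h3)
combined = (math.sqrt(m12) + math.sqrt(m23)) ** 2   # Eq. (49)
```

Here \(\langle h_{2\perp}|h_{2\perp}^{\prime}\rangle=-1\), so \(\langle h_{1}|h_{3}\rangle=\cos(\theta_{12}+\theta_{23})\) exactly, and Eq. (49) reproduces the combined mismatch to a fraction of a per cent at these angles.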
## Appendix E Power-weighted precessing mismatch

The _match_ between two real valued detector response waveforms \(h_{1}(t)\) and \(h_{2}(t)\) is defined to be the standard inner product weighted by the power spectral density of the detector \(S_{n}\left(f\right)\), maximised over various sets of parameters \(\Theta\) [106], as given by Eq. (11). The _mismatch_ may then be defined by Eq. (12). For precessing waveforms, the set of parameters \(\Theta\) that are maximised over are a relative time shift \(t_{0}\) between the waveforms, a relative phase shift \(\phi_{0}\), and the detector response polarisation angle \(\psi_{0}\) [100]. The precessing matches performed in this work are calculated as described in Appendix B of Ref. [100]. In order to see how the match varies over a range of total masses that might be observed by current ground based detectors, we further calculated the power-weighted match as described in Ref. [123]. To perform matches over a frequency range \(f\in[f_{\min},f_{\max}]\) that extends below the minimum frequency \(f_{\rm NR}\) of one of the waveforms in this catalogue scaled to a specified total mass, the full integral from \(f_{\min}\) to the maximum NR frequency can be approximated using a _power-weighted mismatch_, following the method described in Ref. [123]. This method takes into account the missing inspiral part of the waveform between \(f_{\min}\) and the start of the NR waveform. 
To perform a power-weighted mismatch the constituent waveforms are first split up into contributions from NR defined over the frequency range \(f\in[f_{\rm NR},f_{\max}]\) and the contributions from the inspiral below the lowest NR frequency defined over the frequency range \(f\in[f_{\min},f_{\rm NR})\), \[h(f)=\begin{cases}h_{\rm ins},&f\in[f_{\min},f_{\rm NR}),\\ h_{\rm NR},&f\in[f_{\rm NR},f_{\max}].\end{cases} \tag{50}\] The power-weighted mismatch is then the mismatch in each region weighted by the fraction of power in each region, \[\mathcal{M}_{\rm pow} \equiv\frac{\|h\|_{(f_{\rm min},f_{\rm NR})}^{2}}{\|h\|^{2}} \mathcal{M}_{\rm ins}+\frac{\|h\|_{(f_{\rm NR},f_{\rm max})}^{2}}{\|h\|^{2}} \mathcal{M}_{\rm NR}, \tag{51}\] \[\mathcal{M}_{\rm ins} \equiv\mathcal{M}_{(f_{\rm min},f_{\rm NR})}(h_{1,{\rm ins}},h_{2,{\rm ins}}),\] (52) \[\mathcal{M}_{\rm NR} \equiv\mathcal{M}_{(f_{\rm NR},f_{\rm max})}(h_{1,{\rm NR}},h_{2,{\rm NR}}), \tag{53}\] where the subscript ranges \((f_{1},f_{2})\) denote the frequency ranges over which the inner products in Eq. (12) are evaluated for that expression. The inspiral parts are assumed to agree perfectly, which means that \(\mathcal{M}_{\rm ins}\) can be set to 0. This reduces the power-weighted mismatch to \[\mathcal{M}_{\rm pow}=\frac{\|h\|_{(f_{\rm NR},f_{\rm max})}^{2}}{\|h\|^{2}} \mathcal{M}_{\rm NR}. \tag{54}\] The inspiral contribution \(\|h\|_{(f_{\rm min},f_{\rm NR})}^{2}\) to \(\|h\|^{2}\) can be calculated using any appropriate inspiral waveform. For this work the precessing waveform model PhenomPv3 [57] was used as the inspiral waveform. It is important to make clear, as described in [123], that \(\mathcal{M}_{\rm pow}\) is a lower bound on the mismatch, \(\mathcal{M}\geqslant\mathcal{M}_{\rm pow}\); however, it is a sufficiently accurate approximation for NR accuracy assessment in this context.
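As an illustration of Eq. (54), the band-limited norms can be computed for a toy inspiral-like amplitude \(|h(f)|\propto f^{-7/6}\) with a flat noise PSD (all frequencies and the 0.004 NR mismatch below are placeholder numbers, not values from the catalogue):

```python
def band_power(f_lo, f_hi, steps=20000):
    # ||h||^2 over [f_lo, f_hi] for |h(f)| ~ f**(-7/6) and a flat PSD,
    # approximated by a midpoint Riemann sum.
    df = (f_hi - f_lo) / steps
    return sum((f_lo + (i + 0.5) * df) ** (-7.0 / 3.0) * df
               for i in range(steps))

f_min, f_nr, f_max = 20.0, 40.0, 512.0   # Hz; illustrative only
weight = band_power(f_nr, f_max) / band_power(f_min, f_max)

m_nr = 0.004                 # mismatch of the NR portion (placeholder)
m_pow = weight * m_nr        # Eq. (54), with the inspiral mismatch set to zero
```

For this steeply falling amplitude most of the power lies below \(f_{\rm NR}\), so the power-weighted mismatch is substantially smaller than the raw NR mismatch, as expected from Eq. (54).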
2302.05730
Porting numerical integration codes from CUDA to oneAPI: a case study
We present our experience in porting optimized CUDA implementations to oneAPI. We focus on the use case of numerical integration, particularly the CUDA implementations of PAGANI and $m$-Cubes. We faced several challenges that caused performance degradation in the oneAPI ports. These include differences in utilized registers per thread, compiler optimizations, and mappings of CUDA library calls to oneAPI equivalents. After addressing those challenges, we tested both the PAGANI and m-Cubes integrators on numerous integrands of various characteristics. To evaluate the quality of the ports, we collected performance metrics of the CUDA and oneAPI implementations on the Nvidia V100 GPU. We found that the oneAPI ports often achieve comparable performance to the CUDA versions, and that they are at most 10% slower.
Ioannis Sakiotis, Kamesh Arumugam, Marc Paterno, Desh Ranjan, Balsa Terzic, Mohammad Zubair
2023-02-11T15:50:54Z
http://arxiv.org/abs/2302.05730v4
# Porting numerical integration codes from CUDA to oneAPI: a case study

###### Abstract

We present our experience in porting optimized CUDA implementations to oneAPI. We focus on the use case of numerical integration, particularly the CUDA implementations of PAGANI and \(m\)-Cubes. We faced several challenges that caused performance degradation in the oneAPI ports. These include differences in utilized registers per thread, compiler optimizations, and mappings of CUDA library calls to oneAPI equivalents. After addressing those challenges, we tested both the PAGANI and m-Cubes integrators on numerous integrands of various characteristics. To evaluate the quality of the ports, we collected performance metrics of the CUDA and oneAPI implementations on the Nvidia V100 GPU. We found that the oneAPI ports often achieve comparable performance to the CUDA versions, and that they are at most 10% slower.

## 1 Introduction

Historically, general-purpose GPU programming has been characterized by divergent architectures and programming models. A lack of widely adopted common standards led to the development of different ecosystems comprised of compilers and tools that were practically exclusive to specific GPU architectures. Most importantly, the emergent architectures themselves were not compatible with all ecosystems. Portability could only be achieved through the maintenance of multiple code bases. Traditionally, the proprietary CUDA programming model has been the most popular but is exclusively targeted to Nvidia GPUs. In the absence of universally adopted standards, a viable solution for achieving general portability is to rely on platform-agnostic programming models that target multiple architectures via a unifying interface. This enables the execution of a single code base across various architectures. These programming models would ideally enable the utilization of platform-specific low-level features on their native hardware. 
This would allow highly-optimized implementations in such portable programming models to remain competitive with platform-specific alternatives. Without these capabilities, use cases with extreme performance requirements would disqualify the use of such portable models. The need for performant multi-platform execution is only increasing with the emergence of exascale supercomputers such as Frontier and Aurora that do not carry Nvidia GPUs. Projects requiring computing cores at that scale must develop new software solutions compatible with non-Nvidia GPUs or port existing CUDA implementations without significant loss of performance. Portable programming models such as RAJA, Kokkos, and oneAPI have been in development and are already available for use. These portable alternatives lack maturity when compared to proprietary alternatives. As such, applications requiring portable solutions must be evaluated to quantify any necessary concessions. In this paper, we discuss the porting process of two numerical integration implementations, PAGANI and m-Cubes, from CUDA to oneAPI's Data Parallel C++ language. The oneAPI ecosystem provides a suite of compilers, libraries, and software tools, including the Data Parallel Conversion tool (DpCT) that automates the majority of the porting process. Reliance on the C++ and SYCL standards as well as the capability to quickly port large CUDA implementations, places oneAPI at the forefront of the portability initiative. We faced challenges during the porting process due to the lack of support for certain libraries utilized by the CUDA implementation. For example, the CUDA implementation of PAGANI uses the Nvidia Thrust library to perform common parallel operations on the host side, such as inner product and min-max. Even though there is a multitude of library options in oneAPI, we encountered difficulties with the DpCT mapping of Nvidia Thrust library calls, which were not fully supported on all backends. 
We also observed performance degradation for the ported oneAPI implementations. We conducted numerous experiments with integrands of various characteristics to identify the issues. Most of these issues pertained to optimization differences between the NVCC and Clang compilers, and time differences when executing mathematical functions. After addressing these challenges, the oneAPI ports were at most 10% slower than the optimized CUDA versions. We observe that the cases with the highest performance penalties for the oneAPI ports require significantly more registers than the CUDA originals. This decreases the occupancy in the oneAPI implementation and causes performance degradation. When the number of registers is similar to the CUDA version, we observe penalties lower than 5%.

The remainder of this paper is structured as follows. First, we provide background information on oneAPI and other portability solutions in section 2. Then, we discuss the two numerical integration CUDA implementations in section 3. Section 4 details the porting process and challenges we faced using DPCT and the oneAPI platform. In section 5, we present a performance comparison of the CUDA and oneAPI implementations of PAGANI and m-Cubes. We finish in section 6 with a discussion of our conclusions regarding the oneAPI platform's viability and ease of use. We demonstrate that the oneAPI implementation does not induce significant performance penalties and that it is a viable platform for attaining performance on Nvidia GPUs.

## 2 Background

There are multiple programming models targeting different architectures. Among the most prominent are OpenCL [24], [16], OpenACC [5], OpenMP [1], RAJA, Alpaka [30], and Kokkos [10]. The Khronos group was the first to address portability by developing the OpenCL standard to target various architectures. The same group later followed with the SYCL standard.
SYCL is a higher-level language that retained OpenCL features but significantly improved ease of use with the utilization of C++ and the adoption of a single-source model. There are multiple implementations of SYCL such as DPC++, ComputeCpp, HipSYCL, and triSYCL [28]. DPC++ is conformant to the latest SYCL and C++ standards and is integrated into the oneAPI ecosystem [2].

### Intel oneAPI and DPC++

The oneAPI toolkit was released in 2020, providing a programming platform with portability across multiple architectures at the core of its mission. The toolkit included various tools along with the DPC++ language, which was based on the SYCL and C++ standards [8]. The reliance on these open standards, which are intended to evolve over time, is one of the most attractive features of DPC++. Such evolution is facilitated by DPC++ extensions with various features that can be later introduced to the standards after periods of experimentation. Examples include the use of _Unified Memory_ and _filtered device selectors_, which were missing from SYCL 1.2.1 but were later included in the SYCL 2020 standard.

DPC++ achieves execution-platform portability through its use of SYCL and various backends (implemented as shared libraries) that interface with particular instruction sets, such as PTX for Nvidia GPUs and SPIR-V for Intel devices. It is worth noting that there is no reliance on OpenCL, which is instead one of several available backends. As such, DPC++ implementations can target various CPUs, GPUs, and FPGAs. This is a similar approach to Kokkos, Alpaka, and RAJA.

### CUDA-backend for SYCL

While CUDA is the native and most performant programming model for Nvidia GPUs, Nvidia provided support for the OpenCL API [25]. As a result, non-CUDA implementations could be executed on Nvidia GPUs. The ComputeCpp implementation of SYCL by CodePlay provided such functionality through OpenCL, but its performance was not comparable to native CUDA, as not all functionality was exposed [3].
As such, CodePlay developed the CUDA backend for DPC++, which is part of the LLVM compiler project. CUDA support is not enabled by default and is at an experimental stage. To enable the backend, we must build the LLVM compiler project for CUDA. This can be achieved through easy-to-follow instructions that involve CUDA-specific flags, and the use of _clang++_ instead of _dpcpp_ to compile source code. As a result, DPC++ code can generate PTX code by using CUDA directly instead of relying on the OpenCL backend. This approach not only enables the use of Nvidia libraries and profiling tools with DPC++ but also the capability to theoretically achieve the same performance as CUDA.

### Related Work

The oneAPI programming model may not be as mature as CUDA, but the literature already includes several examples of utilizing DPC++. The authors of [12] validated the correctness of a DPC++ tsunami simulator ported from CUDA. A Boris Particle Pusher port from an OpenMP version was discussed in [27], where a DPC++ implementation was \(10\%\) slower than the optimized original. In [14], CUDA and DPC++ implementations of a matrix multiplication kernel were compared on different matrix sizes; the execution time on an Nvidia GPU was slower with DPC++ code by \(7\%\) on small problem sizes but as much as \(37\%\) on larger ones. On the contrary, [15] and [13] included experiments where a DPC++ biological sequence alignment code showed no significant performance penalty compared to CUDA, and even a case of \(14\%\) speedup. Sparse matrix-vector multiplication kernels and _Krylov_ solvers in [26] reached \(90\%\) of a CUDA version's bandwidth. There were also cases with non-favorable performance for DPC++ ports. In [17] a bioinformatics-related kernel performed twice as fast in CUDA and HIP as in DPC++. In [11] DPC++ versions generally reported comparable performance to CUDA, but there were multiple cases where the penalty ranged from \(25-190\%\).
There seems to be a deviation in the attainable performance. This is reasonable due to the variety of applications and the relatively early stage of development for the oneAPI ecosystem. We also expect that the level of optimization in CUDA implementations is an important factor. In our experience, highly optimized codes typically yield performance penalties in the range \((5-10\%)\). There are multiple cases displaying approximately \(10\%\) penalty compared to native programming models. This indicates that DPC++ can achieve comparable performance to CUDA, though careful tuning and additional optimizations may be needed.

## 3 Numerical Integration Use Case

Numerical integration is necessary for many applications across various fields, especially physics. Important examples include the simulation of beam dynamics and parameter estimation in cosmological models [21][6][9]. Even ill-behaving integrands (oscillatory, sharply peaked, etc.) can be efficiently integrated with modest computational resources, as long as the integration space is low dimensional (one or two variables). On the contrary, solving medium to high-dimensional integrands is often infeasible on standard computing platforms. In such cases, we must execute on highly parallel architectures to achieve performance at scale.

There are a few GPU-compatible numerical integration algorithms [22][23][7][19][29]. Unfortunately, exploration of execution-platform portability has been limited, with CUDA being the most common choice. Since CUDA is a proprietary language, such optimized implementations cannot be executed on non-Nvidia GPUs. To our knowledge, the only mentions of potential portability in numerical integration libraries are found in [22], where a Kokkos implementation of the PAGANI integrator is briefly mentioned to be in development, and in [23], which compares the CUDA implementation of \(m\)-Cubes with an experimental Kokkos version.
### PAGANI

PAGANI is a deterministic quadrature-based algorithm designed for massively parallel architectures. The algorithm computes an integral by evaluating the quadrature rules, which are a series of weighted summations of the form \(\sum_{i=1}^{f_{eval}}w_{i}\,f(x_{i})\). The computation involves an integrand function \(f\) which we invoke at the \(d\)-dimensional points \(x_{i}\). Each point \(x_{i}\) has a corresponding weight \(w_{i}\) and there are \(f_{eval}\) such points in the summation. PAGANI computes an initial integral and error estimate, and it progressively improves its accuracy until reaching a user-specified threshold. The accuracy improvement is achieved by applying the quadrature rules in smaller regions of the integration space and accumulating those values to get the integral estimate.

PAGANI is a memory-hungry algorithm due to its rapid sub-division of the integration space; high-accuracy executions typically generate millions of regions for ill-behaving integrands. This is a design choice that is greatly suited for parallel systems such as GPUs. A significant component of the algorithm involves the careful classification of the regions, with the goal of eliminating certain classes of regions from the limited available memory. While this is a critical step for the progression of the algorithm, it is not the most computationally intensive. In fact, it is the Evaluate method (listed in Algorithm 2 of [22]) that consistently takes more than 90% of total execution time. Its function is to compute an integral/error estimate for each region and select one of the dimensional axes for splitting. As such, it can be viewed as the core of PAGANI, both from an algorithmic and a performance standpoint. We describe the Evaluate method in Listing 1, which we will refer to as pagani-kernel.
The input consists of the integrand \(f\), which will be repeatedly evaluated within the boundaries of each region, the number of regions \(n\), and the dimensionality \(d\). Then, we supply the region coordinates \(R\) in the form of two lists \(R.c\) (low boundary of a region on each axis) and \(R.l\) (length of a region on each axis). Next, we use the lists \(I\) and \(E\) to store the integral and error estimate of each region, while the list \(K\) contains an index representing the axis that should be split in the future to improve the estimates. Finally, we supply a collection of read-only lists through the parameter \(q\). Each list in \(q\) contains constant values used to generate and permute over the evaluation points and weights associated with the five quadrature rules. Each thread group processes a single region \(i\) and uses all threads in a group to parallelize the integrand function evaluations. On lines 3 to 6 each thread initializes its variables, while thread zero initializes variables shared in the group. The \(local\) list at line 3 has a single entry for each of the five quadrature rules. All threads compute their assigned function evaluations for each quadrature rule and accumulate the results in the \(local\) list. Each of the five entries in \(local\), contains the partial results computed by the thread for the respective quadrature rule summations. \(shared\) is an array in shared memory. It is used for the temporary storage of function evaluations that need to be retained and not just accumulated. At line 6, thread zero of each thread group will cache the sub-region assigned to its thread group into the \(S\) shared memory location. \(S\) contains the region boundaries, global boundaries of integration space, volume, longest dimensional axis in the boundaries of the region, integral/error-estimate, and axis to split. All of these values are associated with a single region. 
Then the for-loop at line 7 executes each function evaluation \(f_{id}\) in parallel. At line 8, each thread computes the integrand \(f\) at a \(d\)-dimensional point \(x\). This involves reading \(d\) values from one of the lists in \(q\) (generators list) and scaling those values to the region boundaries of each axis (bounds stored in \(S\)). Then the threads evaluate the integrand at line 9 and store the result at the appropriate index in the list \(shared\). Once that is done, the threads use the weights associated with each quadrature rule (stored in \(q.w\)) to multiply by the function evaluation at line 11.

Once the threads exit the for-loop of line 7, all function evaluations have been completed. At line 13, the algorithm accumulates the results from all threads in the group, which yields five integral estimates per region. At the end, thread zero will store the reduced data as the final estimates in line 18. Thread zero also computes the split axis at line 15. This requires reading two values from a list in \(q\), and the serial iteration of several function evaluations which are stored in \(shared\). Thread zero performs a series of simple operations and then stores the index of the axis to split at the shared location \(S.k\). The error estimate is computed at line 16 and involves determining the maximum difference between the estimates reported by four of the five quadrature rules. This requires reading from certain lists in \(q\), and the produced result is the error estimate for the integral estimate of that region.

The CUDA implementation was optimized for the Nvidia _V100_ GPU. The kernel is launched in groups of 64 threads and the loop at line 7 is iterated in a strided fashion. This allows the threads to coalesce accesses to global memory (reads in \(q\)) needed for the function evaluations. Additionally, the kernel relies on the "ldg" intrinsic for reading the arrays within \(q\), suggesting to the compiler their placement in the read-only cache.
It is worth noting that this is a compute-bound kernel with hundreds and often thousands (depending on the integrand) of floating-point operations per byte. While the algorithm typically minimizes branch divergence, it cannot be avoided entirely due to thread zero performing operations on shared memory and storing the final results in global memory. Due to fast-access shared memory utilization, the implementation uses synchronization points in the following cases: prior to the loop in line 7, between the main loop strides, and before reducing the data at line 13. The reduction utilizes warp-level primitives to accumulate results within the warp, but needs shared memory to reduce the values among the warps.

### \(m\)-Cubes

\(m\)-Cubes is a probabilistic Monte Carlo algorithm based on the VEGAS integrator [20]. It operates by randomizing the sample generation across the integration space to solve integrands and relies on the standard deviation of the Monte Carlo estimate to produce error estimates for the computation. Just like VEGAS, \(m\)-Cubes utilizes importance and stratified sampling to accelerate the Monte Carlo rate of convergence. Importance sampling works by progressively approximating a step-wise function that mirrors the shape of the integrand. The step-wise function allows the algorithm to sample from a probability distribution that concentrates the function evaluations where the integrand varies the most. This approach achieves the adaptive behavior that is necessary for numerical integrators to be efficient in high-dimensional spaces. In terms of stratified sampling, \(m\)-Cubes partitions the integration space into \(m\) sub-cubes that are sampled separately. This improves the convergence rate and eliminates statistical errors and inefficient sampling that could occur on sharp-peaked integrands. The sub-division resolution is dictated by a user-specified number of samples per iteration.
\(m\)-Cubes partitions the integration space such that a 2-point Monte Carlo integration can be executed on each partition while generating approximately \(n\) samples. We describe the main kernel in Listing 2, though it is also detailed in [23]. The implementation relies on many thread groups of size \(x\), requiring a total of \(\frac{m}{x}\) such groups, where \(m\) is the number of sub-cubes resulting from the initial sub-division. Each thread processes its own sub-cube batch of size \(s\).

```
 1: procedure mcubes-kernel(f, d, m, s, p, B[500·d], C[500·d], r)
 2:   for m/b total threads parallel do
 3:     Set-Random-Generator(seed)
 4:     I, E ← 0                          ▷ cumulative estimates of thread
 5:     for t = 0 to s do
 6:       I_t, E_t ← 0                    ▷ estimates of sub-cube t
 7:       for k ← 1 to p do
 8:         x[1:d] ← Generate()
 9:         I_k, E_k ← Evaluate(f, x)
10:         I_t ← I_t + I_k               ▷ accumulate sub-cube contributions
11:         E_t ← E_t + E_k
12:         b[1:d] ← Get-Bin-ID(x)
13:         for j ← 1 to d do             ▷ store bin contributions
14:           AtomicAdd(C[b[j]], I_k²)
15:       E_t ← UpdateVariance(E_t, I_t, p)
16:       I ← I + I_t                     ▷ update cumulative values
17:       E ← E + E_t
18:     I ← Reduce(I)
19:     E ← Reduce(E)
20:     if thread 0 within group then
21:       AtomicAdd(r[0], I)
22:       AtomicAdd(r[1], E)
```

**Algorithm 2** \(m\)-Cubes

The input of mcubes-kernel consists of the integrand \(f\) of dimensionality \(d\), the number of sub-cubes \(m\), sub-cube batch size \(s\), number of samples per sub-cube \(p\), bin bounds \(B[500\cdot d]\), bin contributions \(C[500\cdot d]\), and result \(r\).
The mcubes-kernel outputs an estimate for the integral, the variance, and the accumulated bin contributions \(C\) from all samples. All \(m/b\) threads are launched in parallel at line 2. At line 3 the random number generator is initialized. Each thread processes its assigned sub-cubes serially through the loop at line 5, and takes \(p\) samples per sub-cube at line 7. The results from each sample are stored in \(I_{k}\) and \(E_{k}\) (line 9). We accumulate those results to \(I_{t}\) and \(E_{t}\) (line 11), which correspond to the sub-cube estimates. Finally, we accumulate the results from all sub-cubes in \(I\) and \(E\).

When evaluating each sample, we first generate a \(d\)-dimensional point \(x\) in line 8. The kernel evaluates the integrand \(f\) at that point (line 9). Then, at line 12, the kernel identifies the IDs of the \(d\) bins used to generate the point \(x\), and stores their indices in the local array \(b\). Those indices are used to update the bin contributions in the list \(C\) at line 14 through atomic addition. The bin contribution is the square of the integral estimate \(I_{k}\). Then, the variance is updated at line 15, and the error and integral estimates for the \(s\) sub-cubes are accumulated in lines 16 and 17. The kernel then uses block-reduction to accumulate \(I\) and \(E\) from each thread at lines 18 and 19. Finally, two atomic additions from thread 0 of each thread-group lead to the computation of the total integral and error estimate from all thread-groups. These are stored in memory location \(r\) at lines 21 and 22.

The CUDA implementation was optimized for the _V100_ GPU. The kernel consisted of 128 threads per block and utilized atomic addition for accumulating results originating from different thread blocks. This is particularly important for updating the random-access bin contributions, which refer to 500 bins per dimensional axis.
The reduction operations at lines 18/19 operate on local memory and utilize warp-level primitives, though shared memory is used to accumulate the values from the different warps.

## 4 Porting Process

The maturity of the CUDA programming model, along with the more widespread utilization of highly performant Nvidia GPUs, makes CUDA an intuitive choice for high-performance applications. As such, PAGANI and \(m\)-Cubes were designed and optimized for CUDA on a V100 Nvidia GPU [22][23]. This makes DPCT the most appropriate tool to facilitate the porting process from CUDA to DPC++.

### DPCT

DPCT is intended to automate the majority of CUDA code migration to DPC++, instead of performing a total conversion [4]. In our experience, as well as those reported in [14], [18] and many others, DPCT functions exactly as intended. An easy-to-complete conversion process requires few manual code insertions. When manual editing is needed, DPCT displays helpful suggestions in the form of comment blocks to guide the user. There were certain code segments that were functional but needed simple fixes to improve performance, e.g., the use of a 3D _nd_item_ instead of the 1D equivalent. In our experience, those cases were few and we expect that such effects will be less pronounced as oneAPI and DPCT evolve. Expert users are anticipated to produce higher-quality implementations than automated tools, but even then DPCT greatly facilitates the porting process by automating the tedious and often error-prone translation of API calls and indexing schemes.

### Challenges

#### 4.2.1 Porting Issues with Nvidia Thrust Library

PAGANI uses Thrust to perform common parallel operations on the host side, such as reduction, dot-product, prefix sum, and finding the minimum/maximum within a list. oneAPI provides an array of libraries to accommodate all of our required operations.
While the CUDA implementation defaults to cuRAND to generate the random samples in \(m\)-Cubes, we use a simple custom-made random generator in both the CUDA and oneAPI versions. This facilitates the comparison between the two implementations and avoids a comparison of random-number-generator libraries.

The usage of oneAPI libraries is more limited on the CUDA-backend. In our experience, certain calls to library functions inserted by DPCT worked on Intel GPUs such as the P630 but were not viable on the CUDA-backend, yielding compile-time errors in certain cases, such as when using _mkl::stats::min_max_. Additionally, DPCT headers provided library calls with CUDA-like syntax in the _dpct_ namespace. This worked on Intel GPUs but not on the CUDA-backend, e.g., _dpct::inner_product_ and _dpct::get_device_pointer_. Still, we found alternatives from the oneDPL and oneMKL libraries, which allowed us to execute successfully on both the CUDA-backend and on Intel devices.

#### 4.2.3 Performance Degradation

We encountered more difficulties when attempting to achieve comparable performance to the original CUDA implementation. We observed that the NVCC and Clang compilers optimized differently when using the O3 optimization flag; we needed to set inline thresholds and to manually enable/disable loop-unrolling in certain locations. Furthermore, code elimination in some of our benchmarking kernels that invoked mathematical functions on the device was not equivalent in the two compilers. Additionally, we found that for our implementation, work-group reduction caused approximately 10% slowdown in the oneAPI version compared to manual reduction using shared memory and warp-intrinsics.

Another challenge in our attempt to achieve comparable performance to CUDA was deviations in the performance of SYCL and CUDA mathematical functions. Exponential functions displayed comparable performance on benchmark kernels.
On the contrary, we observed slowdowns of various degrees in oneAPI when using _power_ or trigonometric functions. This is most likely attributed to the compilers utilizing different optimizations. We did not use any _fast-math_ flags, since high accuracy is critical in numerical integration use cases. Finally, the use of atomic addition in \(m\)-Cubes caused orders-of-magnitude slowdown on both the mcubes-kernel and benchmark kernels. This was attributed to the lack of an architecture-specific flag that must be set to enable efficient atomics when supported. After setting the Volta architecture flag, atomic addition was as performant as in the native CUDA implementation.

#### 4.2.4 Software Engineering Issues

Utilization of the Catch2 testing framework and CMake was largely successful but more error-prone in oneAPI than in the case of CUDA. Header-inclusion issues often caused non-intuitive compilation errors if headers for the oneDPL library were placed after oneMKL headers. Setting up an environment to compile for the CUDA backend through CMake was not as easy as a build for fully-supported architectures, since we require separate CMake instructions and flags when executing on Intel or Nvidia GPUs. As support for the CUDA backend for SYCL continues to mature, we expect that such issues will be addressed.

## 5 Experimental Results

We conducted a series of experiments to evaluate the performance and correctness of the oneAPI ports relative to the optimized CUDA implementations of PAGANI and \(m\)-Cubes. We used a single node with a V100 Nvidia GPU and a 2.4 GHz Intel Xeon Gold 6130 CPU. We also used the Devcloud environment to verify that the DPC++ implementations were portable and could be executed on a P630 Intel GPU. Due to the V100 GPU having significantly more computing cores than the P630, we do not make any performance comparisons between the two GPUs. Instead, we focus on the attainable performance of DPC++ on Nvidia hardware.
When executing the CUDA implementations, we used gcc 8.5 and CUDA 11.6. For the CUDA-backend execution, we used the same environment but compiled with clang 15, an inline threshold of 10000, and the following compilation flags: "-fsycl -fsycl-targets=nvptx64-nvidia-cuda -Xsycl-target-backend -cuda-gpu-arch=sm_70". We verified the correctness of our ports by comparing the results on both the Nvidia (V100) and Intel (P630) GPUs to the results generated by the CUDA originals on a V100 GPU.

In terms of evaluating performance, we chose the same benchmark integrands originally used to evaluate PAGANI and \(m\)-Cubes in [22] and [23]. These functions belong to separate integrand families with features that make accurate estimation challenging. We list those integrands in equations 1 to 6. All experiments use the same integration bounds \(\left(0,1\right)\) on each dimensional axis. Similar to [22] and [23], we perform multiple experiments per integrand. We deviate from [22] and [23] in that we do not execute the PAGANI and \(m\)-Cubes methods in their entirety. Instead, we execute their main kernels pagani-kernel and mcubes-kernel, which is where more than 90% of execution time is spent. With this approach, we can evaluate the effectiveness of each programming model in terms of offloading workloads to the device. It allows us to separate kernel evaluation from memory management operations (allocations, copies, etc.) and library usage. This comparison of custom kernel implementations is a better indicator of performance implications when porting CUDA codes to DPC++.
\[f_{1,d}\left(x\right)=\cos\left(\sum_{i=1}^{d}i\,x_{i}\right) \tag{1}\]

\[f_{2,d}\left(x\right)=\prod_{i=1}^{d}\left(\frac{1}{50^{2}}+\left(x_{i}-1/2\right)^{2}\right)^{-1} \tag{2}\]

\[f_{3,d}\left(x\right)=\left(1+\sum_{i=1}^{d}i\,x_{i}\right)^{-d-1} \tag{3}\]

\[f_{4,d}\left(x\right)=\exp\left(-625\sum_{i=1}^{d}\left(x_{i}-1/2\right)^{2}\right) \tag{4}\]

\[f_{5,d}\left(x\right)=\exp\left(-10\sum_{i=1}^{d}\left|x_{i}-1/2\right|\right) \tag{5}\]

\[f_{6,d}\left(x\right)=\begin{cases}\exp\left(\sum_{i=1}^{d}\left(i+4\right)x_{i}\right)&\text{ if }x_{i}<\left(3+i\right)/10\\ 0&\text{otherwise}\end{cases} \tag{6}\]

### Offloading Mathematical Computations to Kernels

A critical stage in pagani-kernel and mcubes-kernel is the invocation of the integrand at various \(d\)-dimensional points. Integrands with trigonometric or exponential functions and table look-ups will have larger execution times compared to other simple integrands that only contain basic mathematical operations. To attain satisfactory performance, both the invocation of the integrand functions and the remaining operations within the kernels must achieve comparable performance to the CUDA implementation.

We tested the efficiency of the oneAPI integrand implementations with a simple kernel that performs a series of invocations on many \(d\)-dimensional points. The points are randomly generated on the host and then copied to device memory. Each thread invokes the integrand serially 1 million times and writes its accumulated results to global memory. Writing the results prevents the NVCC and Clang compilers from disregarding the integrand computations due to optimization. We first tested simple integrands that contained only a particular function such as _sin, pow, powf, sycl::exp, sycl::pow, sycl::pown_. We invoked these mathematical functions with \(d\) arguments that comprise each \(d\)-dimensional point. We did not use fast-math flags, as accuracy is critical in numerical integration.
We observed small but consistent penalties of at most \(2\%\) when invoking the _power_ and _exponential_ functions. On the contrary, trigonometric functions are approximately \(40\%\) slower on the CUDA backend. We performed the same experiment on the six benchmark integrands for dimensions 5 to 8. We summarize the results in Table 1. The timings in the CUDA and oneAPI columns are the means of 10 kernel executions per integrand. The ratio of those timings shows that the oneAPI version is at most \(4\%\) slower. The largest penalty is observed in the \(f1\) integrand, which makes use of the \(cos\) function. The remaining integrands only make use of _exponential_ and _power_ functions and yield small penalties.

These experiments on the execution time of the integrand invocations demonstrate that the user-defined computations do not display significant performance penalties. The one exception is the extended use of trigonometric functions. None of the benchmark integrands make extended use of trigonometric functions (\(f1\) has one call to \(cos\) per invocation). As such, we do not expect any slowdown larger than 5% in either PAGANI or \(m\)-Cubes to be attributed to the integrand implementations.

### Benchmark Integrands Performance Comparison

Another set of experiments involved the invocation of the pagani-kernel and mcubes-kernel on the benchmark integrands. To address different degrees of computational intensity, we vary the number of thread-blocks used to launch the kernels. For the mcubes-kernel, we achieve this effect by varying the required number of samples per iteration in the range \((1e8,3e9)\). This leads to a different number of thread blocks per kernel. For pagani-kernel, the number of thread blocks corresponds to the number of regions being processed. We perform high-resolution uniform splits to generate region lists of different sizes and supply them to the pagani-kernel for evaluation.
| id | \(\mu\) CUDA (ms) | \(\mu\) oneAPI (ms) | \(\sigma\) CUDA | \(\sigma\) oneAPI | \(\frac{\mu\ oneAPI}{\mu\ CUDA}\) |
| --- | --- | --- | --- | --- | --- |
| f1 | 1866.4 | 1952.4 | 13.3 | 21.4 | 1.04 |
| f2 | 8413.9 | 8487.3 | 5012.5 | 5042.9 | 1.009 |
| f3 | 1812.4 | 1828.3 | 18.5 | 27.1 | 1.009 |
| f4 | 11416.1 | 11410.1 | 2184.9 | 2148.1 | 0.99 |
| f5 | 634.3 | 654.4 | 73.5 | 67.3 | 1.03 |
| f6 | 300.4 | 300.8 | 32.05 | 32.6 | 1.001 |

Table 1: mean (\(\mu\)) and standard deviation (\(\sigma\)) of execution times for invoking \(5-8D\) benchmark integrands

We report the penalty of using oneAPI for the benchmark integrands in the ratio columns of Tables 2 and 3. We used four thread-block sizes for each integrand for the kernel executions. Each kernel configuration (number of thread groups) was repeated 100 times to provide a statistical mean and standard deviation for the execution times. Across our experiments, the average execution time ratio (\(\frac{oneAPI}{CUDA}\)) is in the range \((0-10\%)\). The \(f2\) and \(f4\) integrands, which make repeated use of the _power_ function, display the largest performance penalties for both PAGANI and \(m\)-Cubes. It is worth noting that \(f2\) and \(f4\) also display the largest execution times among the benchmark integrands for both integrators.

### Simple Integrands Performance Comparison

In addition to the benchmark integrands, we also evaluate integrands that only perform a summation of the arguments (\(\sum_{i=1}^{d}x_{i}\)), where \(d\) is the number of dimensions. This avoids any bias in the comparison from mathematical functions that could either call different implementations, cause differences in register usage, or lead to different optimizations. The ratios in Tables 4 and 5 display timings on addition integrands for dimensions five to eight.
Once more, we observe penalties smaller than 10%, and for both integrators these penalties decrease at higher dimensionalities. ### Factors Limiting Performance Both pagani-kernel and mcubes-kernel are compute bound, performing thousands of computations for each byte of accessed memory. For compute-bound kernels, the number of registers per thread is a factor limiting the number of concurrent threads that can be executed; the amount of shared memory and registers per thread limit warp/work-group occupancy, which in turn degrades performance. In most cases, the oneAPI implementations assigned more registers to each thread compared to their CUDA equivalents. We illustrate the magnitude of this difference in registers per thread in Figures 1 and 2. \begin{table} \begin{tabular}{l l l l l l} \hline \hline id & \(\mu\) CUDA (ms) & \(\mu\) oneAPI (ms) & \(\sigma\) CUDA & \(\sigma\) oneAPI & \(\frac{\mu\ oneAPI}{\mu\ CUDA}\) \\ \hline f1 & 286.7 & 286.7 & 2.1 & 0.9 & 1.0 \\ f2 & 402.1 & 443.1 & 2.6 & 0.9 & 1.1 \\ f3 & 284.5 & 285.8 & 1.6 & 1.4 & 1.0 \\ f4 & 385.7 & 423.5 & 2.4 & 0.5 & 1.1 \\ f5 & 284.3 & 285.9 & 2.1 & 1.7 & 1.0 \\ f6 & 283.8 & 285.4 & 1.9 & 1.6 & 1.0 \\ \hline \hline \end{tabular} \end{table} Table 2: \(m\)-Cubes: mean (\(\mu\)) and standard deviation (\(\sigma\)) of execution times for 8D benchmark integrands \begin{table} \begin{tabular}{l l l l l l} \hline \hline id & \(\mu\) CUDA (ms) & \(\mu\) oneAPI (ms) & \(\sigma\) CUDA & \(\sigma\) oneAPI & \(\frac{\mu\ oneAPI}{\mu\ CUDA}\) \\ \hline f1 & 172.3 & 177.5 & 0.9 & 1.2 & 1.02 \\ f2 & 1500.4 & 1651.0 & 0.3 & 2.1 & 1.1 \\ f3 & 286.4 & 290.7 & 0.8 & 0.4 & 1.01 \\ f4 & 1434.7 & 1524.9 & 0.4 & 1.9 & 1.06 \\ f5 & 166.5 & 170.7 & 0.6 & 0.4 & 1.03 \\ f6 & 136.8 & 139.4 & 0.4 & 0.2 & 1.02 \\ \hline \hline \end{tabular} \end{table} Table 3: PAGANI: mean (\(\mu\)) and standard deviation (\(\sigma\)) of execution times for 8D benchmark integrands 
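The register-pressure argument can be made concrete with a back-of-the-envelope occupancy estimate. The sketch below is our own, assuming V100-class per-SM limits (65,536 32-bit registers, at most 64 resident warps, warps of 32 threads) and ignoring register allocation granularity and shared-memory limits:

```python
def register_limited_occupancy(regs_per_thread, regs_per_sm=65536,
                               max_warps=64, warp_size=32):
    """Fraction of the maximum resident warps per SM that fit when
    registers are the only limiting resource."""
    warps_by_regs = regs_per_sm // (regs_per_thread * warp_size)
    return min(warps_by_regs, max_warps) / max_warps
```

At 32 registers per thread a kernel can still reach full occupancy, while at 64 registers it is already capped at half; a oneAPI build that spends a few extra registers per thread can therefore cross one of these thresholds and lose resident warps.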
We observe the largest difference in integrands \(f2\) and \(f4\), which make extended use of the _power function_. These are the same functions that display the two largest execution time penalties for the benchmark integrands in Tables 2 and 3. We observe a similar pattern on the simple addition integrands (Tables 4 and 5). In those cases, there are no mathematical functions (_pow_, _exp_, etc.) and the integrands only perform a summation. The difference in registers decreases on higher dimensions, leading to degraded performance on low dimensions. This is evident in Tables 4 and 5, where higher-dimensional integrands have smaller values in the \(\frac{oneAPI}{CUDA}\) column. The same pattern is observed for the benchmark integrands, where the high dimensional versions perform better than the low dimension equivalents. It can be seen in Figure 1 that this effect is more prominent in \(m\)-Cubes, since it displays a larger deviation across all dimensions. These observations lead us to believe that the register difference and its effect on occupancy is the main reason behind the performance degradation. ## 6 Conclusion We ported the PAGANI and \(m\)-Cubes numerical integrators to oneAPI and evaluated them on Intel and Nvidia GPUs. We experimented with various workloads consisting of different mathematical functions. We found that the assigned registers per thread can deviate in oneAPI and CUDA codes. This affects occupancy, which in turn can negatively impact performance, particularly in compute-bound kernels. We faced additional challenges with mapping library calls to oneAPI equivalents, matching compiler optimizations of NVCC with Clang, and using build and testing libraries like CMake and Catch2. We addressed those challenges and demonstrated that the performance penalty of using oneAPI ports instead of optimized CUDA implementations can be limited to \(10\%\) on Nvidia GPUs. Additionally, numerous cases exhibited comparable performance to the original CUDA implementations, with execution time differences in the \(1-2\%\) range. 
We compared oneAPI and CUDA implementations on the same Nvidia V100 GPU. We were able to execute on an Intel P630 GPU, but we did not compare these timings with those on the V100 GPU due to their significant difference in computing power. In the future, we plan to execute on the high-end Intel Ponte Vecchio GPU and compare performance metrics with high-end Nvidia GPUs such as the A100. The vast array of libraries, ease of portability, and small margin of performance degradation make oneAPI an appropriate software solution for the use case of numerical integration. ## 7 Acknowledgements The authors would like to thank Intel Corporation and Codeplay for providing technical support in the conversion process. The authors are also grateful for the support of the Intel oneAPI Academic Center of Excellence at Old Dominion University. Work supported by the Fermi National Accelerator Laboratory, managed and operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes. FERMILAB-CONF-23-007-LDRD-SCD. We acknowledge the support of Jefferson Lab grant to Old Dominion University 16-347. Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177 and DE-AC02-06CH11357.
2301.03611
1+1D Hadrons Minimize their Biparton Renyi Free Energy
We use a variational method to calculate the spectrum and the parton distribution function of ground state hadrons of various gauge theories in 1+1 dimensions. The template functions in our method minimize a free energy functional defined as a combination of free valence partons' kinetic energy on the lightcone and the Renyi entanglement entropy of biparton subsystems. Our results show that hadrons in these theories minimize the proposed free energy. The success of this technique motivates applying it to confining gauge theories in higher dimensions.
Pouya Asadi, Varun Vaidya
2023-01-09T19:00:02Z
http://arxiv.org/abs/2301.03611v1
# 1+1D Hadrons Minimize their Biparton Renyi Free Energy ###### Abstract We use a variational method to calculate the spectrum and the parton distribution function of ground state hadrons of various gauge theories in 1+1 dimensions. The template functions in our method minimize a free energy functional defined as a combination of free valence partons' kinetic energy on the lightcone and the Renyi entanglement entropy of biparton subsystems. Our results show that hadrons in these theories minimize the proposed free energy. The success of this technique motivates applying it to confining gauge theories in higher dimensions. ## I Introduction One of the enduring questions in modern physics is a description of the spectrum of strongly coupled theories such as Quantum Chromodynamics (QCD), which describes the strong nuclear force that holds quarks and gluons together in colorless hadrons. Our information about a hadron structure comes from experimentally accessible structure functions such as the Parton Distribution Function (PDF), which can be isolated using factorization theorems. While no analytical technique exists for calculating these functions, numerical computations in certain regimes are possible with lattice QCD [1; 2; 3; 4; 5; 6; 7]. However, these tools are computationally and economically expensive and do not offer any simple insight into the mechanism of strong interactions. This has motivated novel approaches (e.g. [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]) for understanding confining theories, many of which were first proposed in 1+1 dimensions (1+1D). Here, gluons are not propagating degrees of freedom, there is no spin, and since the gauge coupling is dimensionful, the running effects are power suppressed. The PDF of the quarks inside hadrons of 1+1D theories is simply equal to the absolute square of their wavefunction and does not run with the energy scale, see Ref. [22] for further details. 
In this letter, we suggest that since a strongly coupled bound state is a complex system with numerous interactions between its partons, notions from complex systems, statistical mechanics, and information theory (see Ref. [23] for connections between the latter two subjects) can be elevated to a more central role in describing its properties. (Notions from quantum information sciences have already been used in studying other aspects of confining gauge theories, e.g. see Refs. [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54].) Similar to generic systems studied in classical statistical mechanics, e.g. a container of gas, that are governed by a minimum free energy principle, we conjecture partons' distribution inside hadrons are governed by a minimum free energy principle that includes a measure of entanglement between pairs of partons. Such a principle should not rely on the special properties of a theory, such as its symmetries, so that it can be readily applied to higher dimensional systems. In Ref. [22], we tested whether the hadron wavefunction in these theories could be described as a thermal ensemble. The ansatz for the wavefunction was derived from minimizing a proposed free energy \[F=E-TS, \tag{1}\] where \(E\) (\(S\)) is free parton kinetic energy on the lightcone (the von Neumann entanglement entropy of biparton subsystems). We observed that this description was exact in the limit of infinite parton mass. The description deviates from exact numerical results as we move to lower quark masses and is no longer even approximately applicable in the deep non-perturbative regime, i.e. when \(m_{q}/g\ll 1\) with \(m_{q}\) denoting the quark mass and \(g\) denoting the coupling of the confining gauge group in 1+1D. This motivates us to explore modifications of the free energy principle, keeping in mind that it should asymptote to the free energy of Ref. [22] in the large quark mass limit. 
Our proposal in this letter is to replace the von Neumann entropy of Eq. (1) with either Tsallis [55] or Renyi [56] entanglement entropy (both these entropy functions work for our problem); for simplicity, we use the Renyi entropy throughout this letter. This family of entropy functions finds a variety of applications in physics including quantum gravity (e.g. Refs. [57; 58; 59]), cosmology [60], topological phases [61], non-equilibrium many body systems [62], and condensed matter physics [63]. We can interpret our proposal as a variational method for approximating the wavefunction and mass spectrum of the shallowest bound states of 1+1D gauge theories. We find that our proposed free energy correctly predicts the hadrons' mass spectrum and PDF for all quark masses. The success of this approximation in reproducing the existing results motivates us to propose our minimum free energy principle as the first principle governing the properties of bound states of confining gauge groups, including those in higher dimensions. A Variational method for hadron wavefunction We propose replacing the von Neumann entropy in the biparton free energy of Ref. [22] by one of its generalizations, namely the Renyi entropy [56]1 Footnote 1: One can show that for our study, Renyi and Tsallis [55] entropy give rise to exactly the same results. Throughout the text we work with the Renyi entropy, but all our conclusions remain intact if we use Tsallis entropy instead. \[S_{\alpha}(\rho)=\frac{1}{1-\alpha}\ln\left(\text{Tr}\Big{[} \rho^{\alpha}\Big{]}\right), \tag{2}\] where \(\rho\) is a reduced density matrix and \(\alpha\) is the _order_. In the limit of \(\alpha\to 1\), the Renyi entropy \(S_{\alpha}\) reduces to the von Neumann entropy. 
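The \(\alpha\to 1\) limit quoted above is a standard check, spelled out here for completeness: since both the numerator and denominator of Eq. (2) vanish as \(\alpha\to 1\), L'Hôpital's rule gives \[\lim_{\alpha\to 1}S_{\alpha}(\rho)=\lim_{\alpha\to 1}\frac{\ln\text{Tr}\big{[}\rho^{\alpha}\big{]}}{1-\alpha}=\lim_{\alpha\to 1}\frac{\text{Tr}\big{[}\rho^{\alpha}\ln\rho\big{]}/\text{Tr}\big{[}\rho^{\alpha}\big{]}}{-1}=-\text{Tr}\big{[}\rho\ln\rho\big{]},\] which is the von Neumann entropy. 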
We therefore propose a new _biparton Renyi free energy_ functional \[F_{\alpha}=E-TS_{\alpha}, \tag{3}\] where \(E\) denotes the kinetic energy of free valence partons on the lightcone and \(S_{\alpha}\) is the order \(\alpha\) Renyi entropy of the biparton subsystems of the hadron, see Ref. [22] for further details. We propose to test whether minimizing this functional will lead to an improved description of a hadron structure. For a meson, in 1+1D the wavefunction is described by a \(q\bar{q}\) state since the higher Fock state contribution is suppressed for the ground state. Following Ref. [22], our modified free energy for the \(q\bar{q}\) biparton system is given by \[F_{\alpha}=\frac{m_{q}^{2}}{P^{-}}\int dx|\phi(x)|^{2}\left( \frac{1}{x}+\frac{1}{1-x}\right)\] \[-\frac{T}{1-\alpha}\ln\left(\int dx|\phi(x)|^{2\alpha}\right), \tag{4}\] where \(P^{-}\) is the large lightcone momentum, \(\phi(x)\) is the meson wavefunction, \(x\) is the momentum fraction of \(P^{-}\) carried by the quark, and \(T\) is a Lagrange multiplier. This functional is minimized by the following ansatz \[|\phi(x)|^{2}=\left(\frac{m_{q}^{2}}{\mathcal{T}^{2}}\right)^{ \frac{1}{\alpha-1}}\Big{[}x(1-x)\Big{]}^{\frac{1}{1-\alpha}}, \tag{5}\] where \[\mathcal{T}^{2}=TP^{-}\frac{\alpha}{(1-\alpha)\int dx|\phi(x)|^{2 \alpha}}, \tag{6}\] whose numerical value will be determined by the normalization condition on the PDF. Similarly, following the prescription from Ref. [22], after minimizing the biparton Renyi free energy of a baryon, we arrive at the ansatz \[|\phi(x,z_{1},z_{2},\cdots,z_{N-2})|^{2}=\left(\frac{m_{q}^{2}} {\mathcal{T}^{2}}\right)^{\frac{1}{\alpha-1}}\left(x\Bigg{[}1-x-\sum_{k=1}^{N -2}z_{k}\Bigg{]}\prod_{k=1}^{N-2}z_{k}\right)^{\frac{1}{1-\alpha}}, \tag{7}\] where \(N\) is number of colors and again \(\mathcal{T}\) will be determined numerically by demanding the right normalization for the wavefunction. We can now write an expression for the quark PDF inside a baryon (see Ref. 
[22] for further details) \[f_{q}(x)=N\left(\frac{m_{q}^{2}}{\mathcal{T}^{2}}\right)^{\frac{ 1}{\alpha-1}}\left(\prod_{i=1}^{N-2}\int_{0}^{1-x-\sum_{j=1}^{i-1}z_{j}}dz_{i} \right)\ \left(x\Bigg{[}1-x-\sum_{k=1}^{N-2}z_{k}\Bigg{]}\prod_{k=1}^{N-2}z_{k} \right)^{\frac{1}{1-\alpha}}. \tag{8}\] We will use Eqs. (5) and (8) as the template functions in a variational method for finding eigenvalues and eigenfunctions of ground state hadrons in various confining theories in 1+1D. The variational parameter is the order \(\alpha\) and will be calculated numerically by demanding that the wavefunction minimizes the expectation value of the Hamiltonian. The specifics of a particular theory will therefore be reflected in how \(\alpha\) varies as a function of \(m_{q}\). We can now look at specific theories in 1+1D and carry out this calculation. Since the gauge coupling g is dimensionful in 1+1D theories, we normalize all other dimensionful quantities by g. ## III Schwinger model This is a U(1) gauge theory in 1+1D with a fermion of mass \(m_{q}\) coupled to a photon [10]. The photon can be eliminated using gauge redundancy and equations of motion, i.e. it is not a propagating mode in 1+1D. We work in the Infinite Momentum Frame (IMF) with \(P^{-}\rightarrow\infty\), where \(P^{+}\) acts as the Hamiltonian. In this frame the right-handed component of the fermion field, \(\psi_{R}\), is not a propagating degree of freedom either and can be eliminated using equations of motion. Having integrated out these fields, we find an effective four-fermion interaction term in the Hamiltonian written in position space as [64; 65] \[H_{\text{int}}(x^{0})=\textsl{g}^{2}\int dx^{1}dy^{1}J(x^{0},x ^{1})|x^{1}-y^{1}|J(x^{0},y^{1}), \tag{9}\] where \(J\) is the bilinear left handed fermion current \(\psi_{L}^{\dagger}\psi_{L}\). 
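Before evaluating the Hamiltonian, it is worth making the normalization step concrete. For the meson ansatz of Eq. (5), the condition \(\int_{0}^{1}|\phi(x)|^{2}dx=1\) reduces to a Beta-function integral, \(\big{(}m_{q}^{2}/\mathcal{T}^{2}\big{)}^{\frac{1}{\alpha-1}}B\big{(}\tfrac{1}{1-\alpha}+1,\tfrac{1}{1-\alpha}+1\big{)}=1\). A minimal sketch (the helper names are ours; it assumes \(\alpha<1\), which covers the fitted range in Fig. 1):

```python
from math import gamma

def beta_fn(a, b):
    """Euler Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def tau_squared(m_q, alpha):
    """Solve (m_q^2 / T^2)^(1/(alpha-1)) * B(b+1, b+1) = 1 for T^2,
    with b = 1/(1-alpha) the exponent of x(1-x) in Eq. (5)."""
    b = 1.0 / (1.0 - alpha)
    return m_q**2 * beta_fn(b + 1.0, b + 1.0)**(alpha - 1.0)

def meson_pdf(x, m_q, alpha):
    """Normalised |phi(x)|^2 of Eq. (5) for the meson ansatz."""
    b = 1.0 / (1.0 - alpha)
    t2 = tau_squared(m_q, alpha)
    return (m_q**2 / t2)**(1.0 / (alpha - 1.0)) * (x * (1.0 - x))**b
```

For \(\alpha=1/2\), for instance, the ansatz reduces to \(30\,x^{2}(1-x)^{2}\), which integrates to one.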
The expectation value of the Hamiltonian for the ground state meson in this model is given by \[M_{\rm hadron}^{2}=\langle\phi|P^{-}P^{+}|\phi\rangle=\langle\phi| P^{-}(H^{0}+H_{\rm int})|\phi\rangle \tag{10}\] \[= m_{q}^{2}\int dx|\phi(x)|^{2}\left(\frac{1}{x}+\frac{1}{1-x} \right)+\frac{\text{g}^{2}}{\pi}\int dxdy\phi(x)\phi^{*}(y)\] \[+ \frac{\text{g}^{2}}{2\pi}\int dxdy\frac{|\phi(x)-\phi(y)|^{2}}{(x -y)^{2}}.\] Here \(M_{\rm hadron}^{2}\) is the invariant mass squared of the meson bound state. We now plug in the ansatz from Eq. (5) and minimize \(M_{\rm hadron}^{2}\) over \(\alpha\) to find the mass and wavefunction of the meson. We show the value of order \(\alpha\) for different quark masses in Fig. 1. The order is a monotonically increasing function of \(m_{q}\), asymptoting to \(-\infty\) as \(m_{q}\to 0\) and approaching 1 as \(m_{q}\to\infty\). What is peculiar is the negative order at low quark masses. We observe that the shift into negative order happens as we transition into the non-perturbative (\(m_{q}\ll\textsl{g}\)) regime. Putting the calculated value of \(\alpha\) into the ansatz of Eq. (5), we find the meson mass (Fig. 2) and PDF (Fig. 3) for different values of \(m_{q}\). We find excellent agreement for all values of quark masses (compared to Ref. [66]) which supports our free energy conjecture. ## IV 't Hooft model Non-abelian gauge groups SU(\(N\)) in 1+1D are collectively referred to as the 't Hooft model, in recognition of 't Hooft's contribution in studying their properties in the large \(N\) limit [11; 14]. The Hamiltonian of the bound states in the lightcone is derived and, by solving a time-independent Schrodinger equation, the hadron mass spectrum (as a function of the quark mass) and the wavefunction of the bound state are calculated [11; 14; 67]. For the rest of this work, we focus on \(N=3\); extending our results to higher values of \(N\) is straightforward. 
For one flavor of quarks in the fundamental representation of SU(\(N\)), the expectation value of the Hamiltonian of the meson state is \[M_{\rm hadron}^{2} = m_{q}^{2}\int dx|\phi(x)|^{2}\left(\frac{1}{x}+\frac{1}{1-x}\right)\] \[+ \frac{\text{g}^{2}}{2\pi}\frac{3^{2}-1}{3}\int dxdy\frac{|\phi(x )-\phi(y)|^{2}}{(x-y)^{2}},\] while for the baryon it is \[M_{\rm hadron}^{2}=m_{q}^{2}\int dx\int_{0}^{1-x}dz|\phi(x,z)|^ {2}\left(\frac{1}{x}+\frac{1}{1-x-z}+\frac{1}{z}\right)\] \[+ \frac{\text{g}^{2}}{2\pi}\frac{3}{2}\frac{3^{2}-1}{3}\int dz\int _{0}^{1-z}dxdy\frac{|\phi(x,z)-\phi(y,z)|^{2}}{(x-y)^{2}}. \tag{12}\] Here \(\phi(x,z)\) is the joint wavefunction of the three valence partons. Figure 1: The order \(\alpha\) for the Schwinger model (blue), mesons of the ’t Hooft model with \(N=3\) (purple), and baryons (orange) of the ’t Hooft model with \(N=3\). We find that at large \(m_{q}/\textsl{g}\) values \(\alpha\to 1\), i.e. our biparton Renyi free energy approaches the thermal free energy of Ref. [22]. Figure 3: PDF of the Schwinger model ground state meson for two different values of quark mass. Our results (solid) agree perfectly with the existing results (dashed) [66]. Figure 2: Our prediction for the ground state meson mass in the Schwinger model (solid blue) that agrees perfectly with the existing results (blue squares) [66]. In the \(m_{q}/\textsl{g}\gg 1\) limit, \(\alpha\to 1\) and our result converges to the thermal description of Ref. [22] that used the von Neumann entropy instead of the Renyi entropy (golden). We now put the template ansatz from Eq. (5) or (8) in Eqs. (11)-(12) and minimize the eigenvalue as a function of the order \(\alpha\). The resulting \(\alpha\), as a function of the quark mass, is shown in Fig. 1. Similar to the case of the Schwinger model, we find that \(\alpha\) grows monotonically with \(m_{q}/\text{g}\) and in the limit of \(m_{q}\gg\text{g}\) it asymptotes to 1. 
Note that \(\alpha\to 1\) corresponds to the von Neumann entropy, thus in this limit our free energy functional becomes identical to the free energy used in Ref. [22]. We can now use these values of \(\alpha\) and calculate our approximation for the mass spectrum of the ground state hadrons for different quark masses, see Fig. 4. Our findings are in perfect agreement with the existing results; we are not aware of numerically accurate results for baryons in the low quark mass limit. We also show the PDF of the ground state mesons (baryons) for different values of \(m_{q}/\text{g}\) on the top (bottom) plot in Fig. 5. For the case of mesons we again find perfect agreement with the results from exactly solving the lightcone Schrödinger equation, further corroborating our biparton Renyi free energy conjecture. We are not aware of any existing results in the literature for baryons. ## V Discussion In this letter, we have proposed a variational method for calculating the ground state wavefunction and spectrum of hadrons in 1+1D. We derive template functions from a physically motivated free energy functional made of free partons' kinetic energy in the lightcone frame and the Tsallis/Renyi entanglement entropy of fixed-momentum biparton subsystems in the bound state. The variational parameter is the order of the entropy and is calculated by minimizing the expectation value of the Hamiltonian of a given theory. We found that our method correctly reproduces the existing results for the mass spectrum and for the PDF of hadrons in the Schwinger and the 't Hooft (with \(N=3\) and one flavor of quarks in the fundamental representation) model. Our results show that hadrons in these models minimize our proposed biparton Renyi free energy functional. 
We also observe that, irrespective of the specific theory or bound state, the order of the entropy increases monotonically with the bare parton mass from \(\alpha\to-\infty\) at zero quark mass to \(\alpha\to 1\) in the infinite quark mass limit. We find that the order becomes negative at low parton masses just as we enter the non-perturbative regime. Statistically, we can understand this as a switch from maximizing high probability configurations to minimizing those with low probability. We do not yet have any deeper physical intuition about \(\alpha\) beyond this interpretation. Given our construction, we conjecture that in a biparton subsystem (in _any_ bound state of _any_ confining theory) carrying a fixed total momentum, a single parton reduced density matrix is effectively described by an ensemble which is maximally entangled at zero mass and which approaches a thermal state at large masses. Figure 4: Our prediction for the ground state meson (purple) and baryon (orange) mass as a function of the quark mass \(m_{q}\) for \(N=3\) ’t Hooft model. We find perfect agreement between our results and existing ones from solving the lightcone Schrödinger equations (denoted by \(\square\)) for both mesons [11] and baryons [67]. Figure 5: **Top:** The PDF of the ground state meson of ’t Hooft model with \(N=3\) and for a single flavor of quark in the fundamental representation for different quark masses. Our results (solid lines) are in agreement with the existing results, see Ref. [68]. **Bottom:** PDF of the ground state baryon in the same theory for a few different quark masses. In the \(m_{q}\to 0\) limit, we approach the analytic results of Ref. [67]. We are not aware of any numerical calculation of these PDFs for finite masses. Our method can be extended to other 1+1D models. Our ultimate goal is to extend our method to models in higher dimensions, where a host of new complications (e.g. renormalization group evolution, spin, and gluon degrees of freedom) need to be accommodated. We leave such studies, and many other phenomenology explorations, for future work. ## Acknowledgement We thank Chris Akers and Tim Cohen for helpful discussions. The work of PA is supported in part by the U.S. Department of Energy under Grant Number DE-SC0011640. VV is supported by startup funds from the University of South Dakota.
2307.10285
Quantum Query Complexity of Boolean Functions under Indefinite Causal Order
The standard model of quantum circuits assumes operations are applied in a fixed sequential "causal" order. In recent years, the possibility of relaxing this constraint to obtain causally indefinite computations has received significant attention. The quantum switch, for example, uses a quantum system to coherently control the order of operations. Several ad hoc computational and information-theoretical advantages have been demonstrated, raising questions as to whether advantages can be obtained in a more unified complexity theoretic framework. In this paper, we approach this problem by studying the query complexity of Boolean functions under general higher order quantum computations. To this end, we generalise the framework of query complexity from quantum circuits to quantum supermaps to compare different models on an equal footing. We show that the recently introduced class of quantum circuits with quantum control of causal order cannot lead to any reduction in query complexity, and that any potential advantage arising from causally indefinite supermaps can be bounded by the polynomial method, as is the case with quantum circuits. Nevertheless, we find some functions for which the minimum error with which they can be computed using two queries is strictly lower when exploiting causally indefinite supermaps.
Alastair A. Abbott, Mehdi Mhalla, Pierre Pocreau
2023-07-18T13:12:55Z
http://arxiv.org/abs/2307.10285v3
# Quantum Query Complexity of Boolean Functions under Indefinite Causal Order ###### Abstract The standard model of quantum circuits assumes operations are applied in a fixed sequential "causal" order. In recent years, the possibility of relaxing this constraint to obtain causally indefinite computations has received significant attention. The quantum switch, for example, uses a quantum system to coherently control the order of operations. Several _ad hoc_ computational and information-theoretical advantages have been demonstrated, raising questions as to whether advantages can be obtained in a more unified complexity theoretic framework. In this paper, we approach this problem by studying the query complexity of Boolean functions under general higher order quantum computations. To this end, we generalise the framework of query complexity from quantum circuits to quantum supermaps to compare different models on an equal footing. We show that the recently introduced class of quantum circuits with quantum control of causal order cannot lead to any reduction in query complexity, and that any potential advantage arising from causally indefinite supermaps can be bounded by the polynomial method, as is the case with quantum circuits. Nevertheless, we find some functions for which the minimum error with which they can be computed using two queries is strictly lower when exploiting causally indefinite supermaps. _Introduction.--_The query model of computation is a simplified framework that offers a way to evaluate the complexity of different computational problems and to probe and compare different computational frameworks. Notably, it has been used to prove various separations between classical and quantum computations, and to provide non-trivial lower bounds on some important computational problems, such as the optimality of Grover's algorithm or the complexity of graph problems [1; 2; 3]. 
This model has thus far only been studied in the context of quantum circuits, in which queries and unitary operations are performed in a fixed sequential order, implying an underlying fixed causal structure. Several recent works have proposed ways of computing beyond fixed causal structures using, for example, the quantum switch [4], where a quantum control system can put different sequences of operations into superposition, leading to what is called an indefinite causal order. Such scenarios can be studied in the higher order framework of quantum supermaps [4] (or "process matrices" [5]), which describes the most general transformations taking some given input operations into a valid output operation in a consistent manner. Although some quantum supermaps are compatible with a fixed causal order, and thus represent quantum circuits [6], others, like the quantum switch, have indefinite causal order or, more formally, are _causally nonseparable_[7; 5; 8]. Causal indefiniteness was shown to provide advantages over quantum circuits in several _ad hoc_ tasks in quantum information theory [9; 10; 11; 12; 13; 14; 15]. However, much less is known about its potential advantages in a general complexity theoretic sense, except that its computational complexity is upper bounded by PP (which includes both NP and BQP) [16]. In this letter, we address this problem by proposing query complexity as a natural way of comparing the relative power of quantum circuits and causally indefinite supermaps. After recalling the standard query model of computation in terms of quantum circuits, we formulate three different families of quantum supermaps that can be understood as query computations with different causal structures. We show that allowing for superpositions or coherent control of the order of queries in so-called _quantum circuits with quantum control of causal order_[17] cannot reduce the query complexity of Boolean functions compared to quantum circuits. 
Furthermore, through a generalisation of the polynomial method to general quantum supermaps, we give a lower bound on any possible reduction that could be obtained with causally indefinite supermaps. Then, using semidefinite programming techniques, we show that an advantage can be obtained with some such supermaps: for some functions, causal indefiniteness can reduce the minimum error with which they can be computed given a fixed number of queries. _The query model of computation.--_In the quantum query model of computation (see [18] or [19] for a survey), the goal is to compute a Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) with a quantum circuit having access to an oracle \(O_{x}\). Here, without loss of generality, we use the phase oracle [19; 20], defined as the unitary that takes as input a state \(\ket{i}\) in an \((n+1)\)-dimensional Hilbert space \(\mathcal{H}^{Q}\), with \(i\) serving as an index, \(0\leq i\leq n\), such that: \[O_{x}\ket{i}=\begin{cases}(-1)^{x_{i}}\ket{i}&\text{ if }i\neq 0,\\ \ket{i}&\text{ otherwise.}\end{cases} \tag{1}\] Note that the special case \(\ket{i}=\ket{0}\) is needed to ensure \(O_{x}\) is equivalent to the more common form of oracle \(O_{x}^{\prime}\ket{j,b}=\ket{j,b\oplus x_{j}}\), with \(1\leq j\leq n\), \(b\in\{0,1\}\) and \(\oplus\) denoting addition modulo 2. A quantum circuit with \(T\) queries to the oracle is specified by a series of \(T+1\) unitaries \((U_{1},\ldots,U_{T+1})\), each acting on the space \(\mathcal{H}^{Q\alpha}=\mathcal{H}^{Q}\otimes\mathcal{H}^{\alpha}\), with \(\mathcal{H}^{\alpha}\) an ancillary space of arbitrary dimension. By extending the oracle on the ancillary space, \(\tilde{O}_{x}=O_{x}\otimes\mathds{1}^{\alpha}\), we say the circuit computes \(f\) (exactly) if for all \(x\), measuring the leftmost qubit of \(U_{T+1}\tilde{O}_{x}U_{T}\cdots U_{2}\tilde{O}_{x}U_{1}\ket{0\cdots 0}\) gives the outcome \(f(x)\) with probability one. 
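To make Eq. (1) concrete, here is a small numerical sketch (our own, with numpy and hypothetical helper names): the phase oracle as a diagonal matrix, used in the textbook one-query circuit that computes the parity \(x_{1}\oplus x_{2}\) exactly, whereas any classical algorithm needs two queries.

```python
import numpy as np

def phase_oracle(x):
    """O_x on span{|0>,...,|n>}: O_x|i> = (-1)^{x_i}|i> for i >= 1, O_x|0> = |0>."""
    return np.diag([1.0] + [(-1.0) ** xi for xi in x])

def parity_one_query(x):
    """Exact one-query computation of x_1 XOR x_2 for n = 2 (no ancilla needed):
    prepare (|1>+|2>)/sqrt(2), query O_x, measure in the (|1>+-|2>)/sqrt(2) basis."""
    psi = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)   # state after U_1
    psi = phase_oracle(x) @ psi                      # the single oracle query
    minus = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)
    p_odd = abs(minus @ psi) ** 2                    # probability of outcome "1"
    return int(round(p_odd))
```

The outcome probability is 0 or 1 for every input, illustrating \(Q_{E}(x_{1}\oplus x_{2})=1\).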
We similarly say that the circuit computes \(f\) with a bounded error \(\varepsilon\) if, for all \(x\), the measurement result gives \(f(x)\) with probability at least \(1-\varepsilon\) for some fixed \(\varepsilon<1/2\); typically one takes \(\varepsilon=1/3\). The exact quantum query complexity of a Boolean function, written \(Q_{E}(f)\), is the minimal value of \(T\) for which there exists a quantum query circuit computing \(f\) exactly, while \(Q_{2}(f)\) is the (two-sided) bounded error quantum query complexity, defined as the minimal \(T\) for which a quantum query circuit exists computing \(f\) with bounded error \(\varepsilon=1/3\). This definition of quantum query complexity is the usual one considered in the literature, and is based on the standard formalism of quantum circuits. In this letter, we show that translating this definition into the framework of quantum supermaps leads to a natural generalisation of the notion of quantum query complexity to study the relative power of causal indefiniteness. Let us first introduce the tools and notation we will use throughout this letter. _Mathematical tools and notation._--We denote the space of linear operators on a Hilbert space \(\mathcal{H}^{X}\) as \(\mathcal{L}(\mathcal{H}^{X})\). To define quantum supermaps we first introduce _quantum channels_, which are completely positive (CP) trace-preserving (TP) maps \(\mathcal{M}:\mathcal{L}(\mathcal{H}^{X})\rightarrow\mathcal{L}(\mathcal{H}^{Y})\). A channel is conveniently represented as a positive semidefinite matrix \(\mathsf{M}^{XY}\in\mathcal{L}(\mathcal{H}^{XY})\) using the Choi isomorphism [21] (see Appendix A), where we adopt the shorthand \(\mathcal{H}^{XY}=\mathcal{H}^{X}\otimes\mathcal{H}^{Y}\). 
The composition of two CP maps, potentially over a subset of their input/output systems, can be computed directly via their Choi matrices using the "link product" [6; 22], denoted '\(\ast\)' and defined for any matrices \(\mathsf{M}^{XY}\in\mathcal{L}(\mathcal{H}^{XY})\) and \(\mathsf{N}^{YZ}\in\mathcal{L}(\mathcal{H}^{YZ})\) as \(\mathsf{M}^{XY}\ast\mathsf{N}^{YZ}=\operatorname{Tr}_{Y}[(\mathsf{M}^{XY}\otimes\mathds{1}^{Z})^{\operatorname{T}_{Y}}(\mathds{1}^{X}\otimes\mathsf{N}^{YZ})]\in\mathcal{L}(\mathcal{H}^{XZ})\), where \(\operatorname{Tr}_{Y}\) is the partial trace over the subsystem \(\mathcal{H}^{Y}\) and \(\cdot^{\operatorname{T}_{Y}}\) the partial transpose over \(\mathcal{H}^{Y}\). The link product is commutative and associative and reduces to the tensor product on maps acting on disjoint Hilbert spaces (see Appendix B). A \(T\)-input _quantum supermap_ \(\mathcal{S}\) is then a \(T\)-linear, completely CP-preserving, and TP-preserving map [23]. That is, a supermap is the most general transformation that transforms any \(T\) input channels \(\mathcal{M}_{t}:\mathcal{L}(\mathcal{H}^{A_{t}^{I}})\rightarrow\mathcal{L}(\mathcal{H}^{A_{t}^{O}})\) into another quantum channel: \(\mathcal{S}(\mathcal{M}_{1},\dots,\mathcal{M}_{T}):\mathcal{L}(\mathcal{H}^{P})\rightarrow\mathcal{L}(\mathcal{H}^{F})\). Writing \(\mathcal{H}^{A_{t}^{IO}}=\mathcal{H}^{A_{t}^{I}}\otimes\mathcal{H}^{A_{t}^{O}}\) and, with \(\mathcal{T}:=\{1,\dots,T\}\), \(\mathcal{H}^{A_{\mathcal{T}}^{IO}}=\bigotimes_{t\in\mathcal{T}}\mathcal{H}^{A_{t}^{IO}}\), a supermap can be represented in the Choi picture as a "process matrix" [5], a positive semidefinite matrix \(W\in\mathcal{L}(\mathcal{H}^{PA_{\mathcal{T}}^{IO}F})\) belonging to a specific subspace \(\mathcal{L}^{\text{Gen}}\) (see Appendix C) and normalised such that \(\operatorname{Tr}W=d^{P}\prod_{t=1}^{T}d_{t}^{O}\), with \(d^{P}=\dim(\mathcal{H}^{P})\) and \(d_{t}^{O}=\dim(\mathcal{H}^{A_{t}^{O}})\) [24; 8]. 
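The link product is straightforward to implement numerically. The sketch below is our own illustration (the helper names and the explicit Choi convention \(\mathsf{M}=\sum_{ij}|i\rangle\!\langle j|\otimes\mathcal{M}(|i\rangle\!\langle j|)\) are assumptions consistent with the definitions above); it checks the key property that the link product composes channels:

```python
import numpy as np

def choi_unitary(U):
    """Choi matrix |U>><<U| of a unitary channel, with |U>> = sum_i |i> (x) U|i>."""
    v = U.T.reshape(-1)          # component at |i>(x)|k> is U_{ki}
    return np.outer(v, v.conj())

def link(M, N, dx, dy, dz):
    """Link product M * N = Tr_Y[(M^{T_Y} (x) 1_Z)(1_X (x) N)].

    M acts on X (x) Y (dims dx, dy), N on Y (x) Z (dims dy, dz);
    the shared system Y is contracted away.
    """
    # partial transpose of M on Y
    MT = M.reshape(dx, dy, dx, dy).transpose(0, 3, 2, 1).reshape(dx * dy, dx * dy)
    big = np.kron(MT, np.eye(dz)) @ np.kron(np.eye(dx), N)
    # partial trace over Y (repeated einsum index i is summed)
    big = big.reshape(dx, dy, dz, dx, dy, dz)
    return np.einsum('aibcid->abcd', big).reshape(dx * dz, dx * dz)

# Composition property: C(V U) = C(U) * C(V) for random unitaries
d = 2
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
assert np.allclose(link(choi_unitary(U), choi_unitary(V), d, d, d),
                   choi_unitary(V @ U))
```

With a trivial shared system (`dy = 1`) the same function reduces to the tensor product, as stated in the text.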
We denote the set of all such process matrices \(\mathcal{W}^{\text{Gen}}\). The process matrix \(W\) fully characterises \(\mathcal{S}\), and the Choi matrix of the channel resulting from applying \(\mathcal{S}\) to some input channels \((\mathcal{M}_{1},\dots,\mathcal{M}_{T})\) is obtained as \(\mathsf{S}(\mathsf{M}_{1},\dots,\mathsf{M}_{T})=(\mathsf{M}_{1}\otimes\cdots\otimes\mathsf{M}_{T})\ast W\in\mathcal{L}(\mathcal{H}^{PF})\). _Fixed order and quantum controlled supermaps._--The general framework of quantum supermaps allows us to compare, on an equal footing, specific classes of quantum supermaps of particular interest, and to understand their computational capabilities. Here we present two such sub-classes: supermaps with a fixed causal order (sometimes termed quantum combs [6], which are equivalent to quantum circuits), and quantum supermaps with quantum control of causal order [17]. The class of supermaps compatible with a fixed causal order (or _FO-supermaps_) has been extensively studied. Any such \(T\)-input supermap \(\mathcal{S}\) can be described by \(T+1\) channels \(\mathcal{V}_{1}:\mathcal{L}(\mathcal{H}^{P})\rightarrow\mathcal{L}(\mathcal{H}^{A_{1}^{I}\alpha_{1}})\), \(\mathcal{V}_{t+1}:\mathcal{L}(\mathcal{H}^{A_{t}^{O}\alpha_{t}})\rightarrow\mathcal{L}(\mathcal{H}^{A_{t+1}^{I}\alpha_{t+1}})\) for \(1\leq t\leq T-1\), and \(\mathcal{V}_{T+1}:\mathcal{L}(\mathcal{H}^{A_{T}^{O}\alpha_{T}})\rightarrow\mathcal{L}(\mathcal{H}^{F})\), composed sequentially in a circuit structure [6]. Here, the \(\mathcal{H}^{\alpha_{t}}\) are ancillary spaces, and \(\mathcal{H}^{P}\) and \(\mathcal{H}^{F}\) are the input and output spaces of the circuit. 
In the Choi picture, the action of \(\mathcal{S}\) on \(T\) input channels (\(\mathcal{M}_{1},\dots,\mathcal{M}_{T}\)) is \[\mathsf{S}(\mathsf{M}_{1},\dots,\mathsf{M}_{T})= \mathsf{V}_{T+1}\ast\mathsf{M}_{T}\ast\mathsf{V}_{T}\ast\cdots \ast\mathsf{V}_{2}\ast\mathsf{M}_{1}\ast\mathsf{V}_{1}\] \[= (\mathsf{M}_{1}\otimes\cdots\otimes\mathsf{M}_{T})\ast W\in \mathcal{L}(\mathcal{H}^{PF}), \tag{2}\] where \(W=\mathsf{V}_{T+1}\ast\cdots\ast\mathsf{V}_{1}\in\mathcal{L}(\mathcal{H}^{PA_{\mathcal{T}}^{IO}F})\) is the process matrix of \(\mathcal{S}\). Process matrices of this specific form--which can be decomposed as the link product of quantum channels--belong to a subset \(\mathcal{W}^{\text{FO}}\subset\mathcal{W}^{\text{Gen}}\) [17; 6; 22] (see Appendix D). It is easy to see that quantum query circuits, as defined above, can readily be represented as FO-supermaps. Another family of supermaps, those with quantum control of causal order (or _QC-supermaps_), can be formulated as generalised quantum circuits in which a quantum system controls the order of application of the different input operations [17]. QC-supermaps are of particular interest as they are the most general type of quantum supermap with a clear physical realisation [17; 25; 26], and include supermaps such as the quantum switch that are causally indefinite [4]. A QC-supermap alternates between applying some controlled "internal" operations on a "target" system and some ancillary systems, potentially correlating these systems with the quantum control, and using this control system to control which input operations to apply to the target system at time-step \(t\). More precisely, throughout the computation the control system, at time-step \(t\), has basis states \(\ket{\mathcal{K}_{t-1},k_{t}}\) specifying that operation \(k_{t}\notin\mathcal{K}_{t-1}\) will be applied next and that the operations \(\mathcal{K}_{t-1}\subsetneq\mathcal{T}\) have already been applied. 
This system coherently controls the application of the different input operations \(\mathcal{M}_{k_{t}}\) at time-step \(t\) and the pure internal operations \(\mathcal{V}_{\mathcal{K}_{t-1},k_{t}}^{\rightarrow k_{t+1}}:\mathcal{L}(\mathcal{H}^{A_{k_{t}}^{O}\alpha_{t}})\rightarrow\mathcal{L}(\mathcal{H}^{A_{k_{t+1}}^{I}\alpha_{t+1}})\) between time-steps, and is the minimal control system required to ensure that no operation is applied more than once, while maintaining the possibility for superpositions of causal orders and interference of different causal histories (i.e., permutations of the operations in \(\mathcal{K}_{t-1}\)); see Appendix E or [17] for a more detailed description of QC-supermaps. Process matrices of QC-supermaps belong to a subset \(\mathcal{W}^{\mathrm{QC}}\subset\mathcal{W}^{\mathrm{Gen}}\) [17], whose characterisation is also given in Appendix E. _Quantum query complexity under indefinite causal order.--_We can now define what it means for a quantum supermap to compute a Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\), allowing us to compare the different types of supermaps on an equal footing. Let us consider a \(T\)-input supermap \(\mathcal{S}^{\mathcal{C}}\) of class \(\mathcal{C}\in\{\mathrm{FO},\,\mathrm{QC},\,\mathrm{Gen}\}\), with trivial input space \(\dim(\mathcal{H}^{P})=1\) and output space of dimension \(\dim(\mathcal{H}^{F})=2\). These dimensional constraints reflect the fact that, as in the initial definition of quantum query complexity, any fixed input to the supermap can be absorbed directly into it, and only a single output qubit--that will be measured to define the output of the computation--is needed. 
To distinguish the different input and output spaces of each query, we label (without imposing a particular order) the \(T\) queries to the phase oracle (1) as \(\mathcal{O}_{x}^{(1)},\ldots,\mathcal{O}_{x}^{(T)}\), with \(\mathcal{O}_{x}^{(i)}:\mathcal{L}(\mathcal{H}^{A_{i}^{I}})\rightarrow\mathcal{L}(\mathcal{H}^{A_{i}^{O}})\), where \(\mathcal{H}^{A_{i}^{I}}\) and \(\mathcal{H}^{A_{i}^{O}}\) are of dimension \(n+1\). (Here \(\mathcal{O}_{x}\) is the quantum channel corresponding to the unitary oracle \(O_{x}\).) Then for every \(x\), the output of \(\mathcal{S}^{\mathcal{C}}\) is the qubit state \(\mathcal{S}^{\mathcal{C}}(\mathcal{O}_{x}^{(1)},\ldots,\mathcal{O}_{x}^{(T)})\in\mathcal{L}(\mathcal{H}^{F})\). The probability that one obtains \(f(x)\) when measuring this qubit in the computational basis is then given by the Born rule as \[p\big{(}f(x)\big{)} =\mathrm{Tr}\Big{[}\mathcal{S}^{\mathcal{C}}(\mathcal{O}_{x}^{(1)},\ldots,\mathcal{O}_{x}^{(T)})\cdot\Pi_{f(x)}\Big{]} \tag{3}\] \[=\mathrm{Tr}\Big{[}(\mathcal{O}_{x}^{\otimes T}*W^{\mathcal{C}})\cdot\Pi_{f(x)}\Big{]} \tag{4}\] \[=\mathrm{Tr}\Big{[}\big{(}(\mathcal{O}_{x}^{\otimes T})^{\mathrm{T}}\otimes\Pi_{f(x)}\big{)}\cdot W^{\mathcal{C}}\Big{]}, \tag{5}\] with \(\Pi_{f(x)}=|f(x)\rangle\!\langle f(x)|\), \(\mathcal{O}_{x}^{\otimes T}=\bigotimes_{i=1}^{T}\mathcal{O}_{x}^{(i)}\) (identified with its Choi matrix in Eqs. (4) and (5)), and \(\cdot^{\mathrm{T}}\) denoting the transpose. We say that \(\mathcal{S}^{\mathcal{C}}\) computes \(f\) with a bounded error \(\varepsilon\) if for all \(x\), \(p(f(x))\geq 1-\varepsilon\). We can now define \(Q_{E}^{\mathcal{C}}(f)\) as the minimum value \(T\) for which there exists a quantum supermap \(\mathcal{S}^{\mathcal{C}}\) computing \(f\) exactly (i.e., with error \(\varepsilon=0\)). Similarly, the corresponding bounded-error complexity \(Q_{2}^{\mathcal{C}}(f)\) is the minimum \(T\) for which there exists a quantum supermap \(\mathcal{S}^{\mathcal{C}}\) computing \(f\) with bounded error \(\varepsilon=1/3\). 
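The process-matrix form of the Born rule, Eq. (5), can be checked numerically for a minimal one-query circuit (prepare a state, query the oracle once, apply a final unitary, measure). The sketch below is our own illustration: the Choi convention, the helper names, and the explicit \(T=1\), \(n=1\) construction are assumptions, not taken from the letter.

```python
import numpy as np

def choi_unitary(U):
    """Choi matrix |U>><<U|, with |U>> = sum_i |i> (x) U|i>."""
    v = U.T.reshape(-1)
    return np.outer(v, v.conj())

rng = np.random.default_rng(1)
d = 2   # n = 1, so the oracle acts on an (n + 1) = 2-dimensional space
U1, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
U2, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

psi = U1[:, 0]   # the state U1|0> prepared before the single query
# Process matrix on H^{A^I} (x) H^{A^O} (x) H^F of the circuit
# "prepare psi, query the oracle, apply U2, measure":
W = np.kron(np.outer(psi, psi.conj()), choi_unitary(U2))

for x1 in (0, 1):
    O = np.diag([1.0, (-1.0) ** x1])        # phase oracle for n = 1
    for b in (0, 1):
        Pi = np.zeros((d, d)); Pi[b, b] = 1.0
        # Born rule in the process-matrix picture, cf. Eq. (5)
        p_choi = np.trace(np.kron(choi_unitary(O).T, Pi) @ W).real
        p_direct = abs((U2 @ O @ psi)[b]) ** 2   # direct circuit simulation
        assert np.isclose(p_choi, p_direct)
```

Both computations agree for every input bit and measurement outcome, as the supermap formalism requires.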
By definition we have the following inequalities: \[Q_{E}^{\mathrm{Gen}}(f) \leq Q_{E}^{\mathrm{QC}}(f)\leq Q_{E}^{\mathrm{FO}}(f)=Q_{E}(f), \tag{6}\] \[Q_{2}^{\mathrm{Gen}}(f) \leq Q_{2}^{\mathrm{QC}}(f)\leq Q_{2}^{\mathrm{FO}}(f)=Q_{2}(f). \tag{7}\] To compare the power of the different classes of quantum supermaps in terms of query complexity, we need to understand how these inequalities can be refined. _Quantum control over \(T\) copies of the same unitary channel.--_QC-supermaps are known to provide advantages over FO-supermaps in several tasks, some of which are sometimes expressed as query complexity advantages (albeit of a different kind than that studied here), such as in the discrimination between pairs of commuting and anti-commuting unitaries [9] and generalisations of that task [10; 12]. Here we show that they cannot provide any advantage in the query complexity of Boolean functions, through the generalisation of a recent result for "switch-like" supermaps, a strict subclass of QC-supermaps which generalises the quantum switch. In particular, we build on a result of Ref. [13] showing that such supermaps are equivalent to FO-supermaps when acting on \(T\) copies of the same unitary channel. Here we prove the following theorem. **Theorem 1**.: _For any \(T\)-input QC-supermap \(\mathcal{S}^{\mathrm{QC}}\), there exists an FO-supermap \(\mathcal{S}^{\mathrm{FO}}\) such that \(\mathcal{S}^{\mathrm{QC}}\) and \(\mathcal{S}^{\mathrm{FO}}\) have the same action whenever applied to \(T\) copies of the same unitary channel. That is, for all unitary channels \(\mathcal{U}\), \(\mathcal{S}^{\mathrm{QC}}(\mathcal{U},\ldots,\mathcal{U})=\mathcal{S}^{\mathrm{FO}}(\mathcal{U},\ldots,\mathcal{U})\)._ The proof of Theorem 1 is detailed in Appendix F. A direct corollary is that \(Q_{E}^{\mathrm{QC}}=Q_{E}^{\mathrm{FO}}\) and \(Q_{2}^{\mathrm{QC}}=Q_{2}^{\mathrm{FO}}\). 
Thus, no advantage in query complexity can be found using QC-supermaps instead of FO-supermaps, nor, for that matter, in any task where \(T\) copies of the same unitary channel are considered, such as the reversal of unknown unitary transformations [27] or unitary channel discrimination [13]. In both these tasks, however, advantages were obtained using more general causally indefinite quantum supermaps beyond QC-supermaps. This raises the prospect of nevertheless obtaining advantages in query complexity from causally indefinite supermaps. In order to better target where to look for such an advantage, we first provide a lower bound on any potential reduction of the query complexity of Boolean functions with general supermaps, by generalising a well-studied bound for quantum circuits, the polynomial method. _Polynomial bound for general quantum supermaps.--_The polynomial method makes a connection between the output of an FO-supermap and a multivariate polynomial \(g\), and is an important method for proving lower bounds on the quantum query complexity of Boolean functions [28]. A polynomial \(g\) is said to represent a Boolean function \(f\) if for all \(x\in\{0,1\}^{n}\), \(f(x)=g(x)\). As an example, consider the polynomial \(g(x_{1},x_{2})=x_{1}+x_{2}-x_{1}x_{2}\), which represents the Boolean \(\mathrm{OR}\) function. This polynomial has degree \(2\), and we denote by \(\deg(f)\) the smallest degree of any polynomial representing \(f\). Similarly, we write \(\widetilde{\deg}(f)\) for the smallest degree of any polynomial approximating \(f\) with a bounded error \(\varepsilon=1/3\), i.e., such that for all \(x\), \(|g(x)-f(x)|\leq 1/3\). The polynomial method states that for all Boolean functions we have \(\deg(f)/2\leq Q_{E}^{\mathrm{FO}}(f)\), and likewise for the bounded-error counterpart, that \(\widetilde{\deg}(f)/2\leq Q_{2}^{\mathrm{FO}}(f)\). 
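The OR example is easy to verify directly; a short check (our own code):

```python
from itertools import product

def g(x1, x2):
    """Degree-2 real polynomial representing the Boolean OR function."""
    return x1 + x2 - x1 * x2

# g agrees with OR on all Boolean inputs
for x1, x2 in product((0, 1), repeat=2):
    assert g(x1, x2) == (x1 or x2)

# No degree-1 polynomial a + b*x1 + c*x2 represents OR: the inputs
# (0,0), (1,0), (0,1) force a = 0 and b = c = 1, but then the value
# at (1,1) is 2 != 1.  Hence deg(OR) = 2.
```

The comment at the end records why the degree-2 representation is optimal, i.e. why \(\deg(\mathrm{OR})=2\).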
Here, we generalise this lower bound to general supermaps and thereby bound the potential advantage obtainable with causally indefinite supermaps over standard quantum circuits. **Theorem 2**.: _For any Boolean function \(f\), we have \(\deg(f)/2\leq Q_{E}^{\mathrm{Gen}}(f)\) and \(\widetilde{\deg}(f)/2\leq Q_{2}^{\mathrm{Gen}}(f)\)._ The proof is similar to that of the original result; we provide the full details in Appendix G and simply outline the argument here. Supposing that \(Q_{E}^{\text{Gen}}(f)=T\), there is a \(T\)-input supermap \(\mathcal{S}^{\text{Gen}}\) with process matrix \(W^{\text{Gen}}\) such that \(f(x)=\operatorname{Tr}\big{[}(\mathcal{O}_{x}^{\otimes T}*W^{\text{Gen}})\cdot\Pi_{1}\big{]}\). Proceeding by induction, one finds that the Choi matrix of the \(T\) queries, \(\mathcal{O}_{x}^{\otimes T}\), has entries that are multivariate polynomials of degree at most \(2T\). Since both the trace and the link product are linear, it immediately follows that \(f(x)\) is also a multivariate polynomial of degree at most \(2T\), completing the proof. An analogous proof gives us the bounded-error version of the statement. These results mean that for functions whose polynomial bound is tight for FO-supermaps, i.e., when \(\deg(f)/2=Q_{E}^{\text{FO}}(f)\) or \(\widetilde{\deg}(f)/2=Q_{2}^{\text{FO}}(f)\), causal indefiniteness cannot provide any advantage. This is the case, for example, for the OR function [28] (which is computed with bounded error by Grover's algorithm). Theorems 1 and 2 thus allow us to refine the inequalities (6) and (7) as follows: \[\deg(f)/2 \leq Q_{E}^{\text{Gen}}(f)\leq Q_{E}^{\text{QC}}(f)=Q_{E}^{\text{FO}}(f)=Q_{E}(f) \tag{8}\] \[\widetilde{\deg}(f)/2 \leq Q_{2}^{\text{Gen}}(f)\leq Q_{2}^{\text{QC}}(f)=Q_{2}^{\text{FO}}(f)=Q_{2}(f). 
\tag{9}\] Still, it is known that some Boolean functions do not have a tight polynomial bound [29], meaning that Theorem 2 does not rule out a potential advantage from causal indefiniteness using general supermaps. This motivates us to study explicit Boolean functions to look for such an advantage. In general, we lack an understanding of general supermaps beyond QC-supermaps, so trying to analytically construct supermaps providing such an advantage for general \(n\)-bit Boolean functions is extremely challenging. Instead, we exhaustively study Boolean functions of up to \(4\) bits, building on the extensive literature on the optimisation of quantum supermaps using semidefinite programs (SDPs) [8; 24]. _Advantage of general supermaps over FO-supermaps.--_In the remainder of this letter, we show an advantage of causally indefinite supermaps over fixed-order ones by considering the minimum bounded-error computation of a function \(f\). Let us denote by \(\varepsilon_{T}^{\text{FO}}(f)\) (resp. \(\varepsilon_{T}^{\text{Gen}}(f)\)) the minimum error \(\varepsilon\) for which there exists a \(T\)-query FO-supermap (resp. a general supermap) computing \(f\) with a bounded error \(\varepsilon\). We prove the following theorem for \(T=2\). **Theorem 3**.: _There exists some function \(f\) for which \(\varepsilon_{2}^{\text{Gen}}(f)<\varepsilon_{2}^{\text{FO}}(f)\)._ To prove Theorem 3, we use SDPs to look for such a gap on all Boolean functions of up to \(4\) input bits. A systematic study for quantum circuits of the minimum bounded-error computation of functions using SDPs was already performed in Ref. [30], for functions up to \(4\) bits and for symmetric functions up to \(6\) bits. Their SDP formulation keeps track of the evolution of the state through the quantum circuit by defining \(T\) Gram matrices, one for each additional query [31]. 
Unfortunately, this method fails to extend to general supermaps, as it exploits the sequential structure of an FO-supermap which has no analogue in generic quantum supermaps.1 Here, using the characterisation of supermaps in the Choi picture as process matrices \(W\), we give an SDP formulation of the minimum error bound for both FO-supermaps and general ones. Footnote 1: Indeed, for the same reason a more powerful method of bounding query complexity for quantum circuits, the adversary method [20; 32], is not readily generalisable to generic supermaps, in contrast to the polynomial bound. Let us consider a Boolean function \(f\) and a \(T\)-query supermap \(\mathcal{S}^{\mathcal{C}}\) with \(\mathcal{C}\in\{\text{FO},\text{Gen}\}\). Because we measure the output of the supermap to obtain the value of \(f(x)\), it is convenient to consider, instead of the process matrix \(W\), the quantum superinstrument \(\{W^{[i]}\}_{i\in\{0,1\}}\) such that \(W^{[i]}=W*\Pi_{i}\) [17], so that \(p(f(x))=\operatorname{Tr}\big{[}W^{[f(x)]}\cdot\mathcal{O}_{x}^{\otimes T}\big{]}\). This has the advantage of reducing the size of the matrices being optimised numerically; for \(T=2\) and \(n=4\), \(W\) is a \(1250\times 1250\) matrix, whereas the \(W^{[i]}\) are of size \(625\times 625\), a difference crucial for rendering the SDPs we present below tractable. 
Writing \(F^{[i]}=\{x:f(x)=i\}\), the minimum error bound \(\varepsilon_{T}^{\mathcal{C}}(f)\) is given by the SDP \[\begin{split}\varepsilon_{T}^{\mathcal{C}}(f)&=\min_{\varepsilon,W^{[0]},W^{[1]}}\ \varepsilon\\ \text{s.t.}&\ \forall x\in F^{[0]},\ \operatorname{Tr}\Big{[}W^{[0]}\cdot\mathcal{O}_{x}^{\otimes T}\Big{]}\geq 1-\varepsilon,\\ &\forall x\in F^{[1]},\ \operatorname{Tr}\Big{[}W^{[1]}\cdot\mathcal{O}_{x}^{\otimes T}\Big{]}\geq 1-\varepsilon,\\ & W^{[0]}\geq 0,\ W^{[1]}\geq 0,\ \varepsilon\geq 0,\\ & W^{[0]}+W^{[1]}\in\mathcal{W}^{\mathcal{C}}.\end{split} \tag{10}\] Note that the size of the \(W^{[i]}\)'s scales as \((n+1)^{2T}\), making the SDP difficult to solve for even moderate \(n\) and \(T\). We solved it numerically for \(2\) queries and for all Boolean functions up to \(4\) bits (including constant functions and functions depending on fewer bits). By exploiting the symmetries inherent in the SDP, one can further reduce the number of variables and constraints, but we were still unable to solve it for \(3\) queries or \(5\) bits. For \(3\) bits and \(2\) queries it is known that FO-supermaps are sufficient to exactly compute all functions except the AND function [30], which has exact query complexity \(Q_{E}^{\text{FO}}(\text{AND})=n\). For this function, no advantage was found with \(2\)-query general supermaps. For \(4\)-bit functions, as in Ref. [30], we reduced the number of functions to consider by using the so-called negate-permute-negate (NPN) equivalence relation. We say that two Boolean functions are equivalent if they are the same up to negation and permutation of the input bits, and negation of the output. These transformations correspond to a relabelling of the input and output bits, leaving the query complexity of a function unchanged. 
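The NPN reduction just described can be reproduced in a few lines. The brute-force sketch below is our own implementation (truth-table and bit-ordering conventions are ours); it recovers the known class counts for small \(n\). For \(n=4\) the same procedure yields the 222 representatives used in the text, although this naive pure-Python version becomes slow at that size.

```python
from itertools import permutations, product

def npn_classes(n):
    """Count NPN equivalence classes of n-bit Boolean functions by brute force.

    A function is stored as its truth table: a tuple of 2^n output bits,
    indexed by the input x read as an n-bit integer (x_1 as the most
    significant bit).
    """
    def transform(tt, perm, flips, neg_out):
        # relabel inputs (negate then permute) and optionally negate the output
        out = [0] * (1 << n)
        for idx in range(1 << n):
            bits = [(idx >> (n - 1 - i)) & 1 for i in range(n)]
            new_bits = [bits[perm[i]] ^ flips[perm[i]] for i in range(n)]
            new_idx = 0
            for b in new_bits:
                new_idx = (new_idx << 1) | b
            out[new_idx] = tt[idx] ^ neg_out
        return tuple(out)

    seen, count = set(), 0
    for tt in product((0, 1), repeat=1 << n):
        if tt in seen:
            continue
        count += 1                      # tt represents a new NPN class
        # mark the whole orbit of tt under the NPN group
        for perm in permutations(range(n)):
            for flips in product((0, 1), repeat=n):
                for neg in (0, 1):
                    seen.add(transform(tt, perm, flips, neg))
    return count

assert npn_classes(2) == 4    # the 4 NPN classes of 2-bit functions
assert npn_classes(3) == 14   # the 14 NPN classes of 3-bit functions
```

Since every transformation in the NPN group is merely a relabelling, checking one representative per class suffices when solving the SDP (10).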
The results are summarised in Table 1 of Appendix J, where the ID of a function corresponds to its truth table converted into an integer, and the results of the SDPs for both FO- and general supermaps are rounded to the fifth decimal place. Our results for FO-supermaps coincide, as expected, with those of Ref. [30]. Out of the 222 NPN representatives, we observe a gap between the minimum error bounds \(\varepsilon_{2}^{\text{FO}}(f)\) and \(\varepsilon_{2}^{\text{Gen}}(f)\) for 179 functions, the maximum gap being \(0.00947\) (close to 1%) for the functions with IDs 5783, 5865 and 6630. The simplest of the three functions (ID 5865) can be written as the polynomial \(f(x)=x_{1}+x_{2}x_{3}+x_{2}x_{4}+x_{3}x_{4}+x_{2}x_{3}x_{4}\). This numerical evidence is not an analytical proof of Theorem 3, as the constraints of the SDPs are only satisfied up to numerical precision. However, extracting an FO- or general supermap that rigorously satisfies the constraints is possible by rationalising the numerical results and perturbing the SDPs' solutions. Such a method was developed in Ref. [33] and allows us to obtain upper bounds with the primal SDP (10), and lower bounds with the corresponding dual SDP, which can be derived via the Lagrangian method (see Appendix H). Using the extraction methods (detailed in Appendix I) for the function with ID 5865, we find the bounds \[0.0324\leq\varepsilon_{2}^{\text{Gen}}(f)\leq 0.0377<0.0465\leq\varepsilon_{2}^{\text{FO}}(f)\leq 0.0467, \tag{11}\] which proves Theorem 3. _Discussion.--_By generalising the notion of query complexity to general supermaps, we provide a natural tool to probe, in a unified complexity-theoretic framework and on an equal footing, the power and the potential advantages that different types of causal structure can provide. Until now, it was unclear whether causal indefiniteness could provide any advantage in such a fundamental model of computation. 
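As an aside, the truth-table ID of the function quoted above can be reproduced from its polynomial. Two conventions in the sketch below are our own reconstruction (they are not spelled out in the letter, but they reproduce the stated ID): the '+' signs are read as addition modulo 2 (XOR), and the truth table is converted to an integer with \(f(0,0,0,0)\) as the most significant bit, indexing inputs with \(x_{1}\) as the high-order bit.

```python
def f5865(x1, x2, x3, x4):
    """The quoted polynomial, with '+' read as addition modulo 2 (XOR)."""
    return (x1 + x2 * x3 + x2 * x4 + x3 * x4 + x2 * x3 * x4) % 2

def truth_table_id(f, n=4):
    """Truth table as an integer, f(0,...,0) read as the most significant bit."""
    bits = []
    for k in range(1 << n):
        x = [(k >> (n - 1 - i)) & 1 for i in range(n)]
        bits.append(f(*x))
    return int("".join(map(str, bits)), 2)

assert truth_table_id(f5865) == 5865
```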
We found a separation between FO-supermaps and general, potentially causally indefinite, supermaps in the minimum error probability with which they can compute a Boolean function using two queries. While this separation does not directly translate into an asymptotic query complexity separation, it is a crucial first step in this direction. One possible approach would be to explore whether the separation we found can be amplified by recursively composing supermaps in a suitably well-defined manner [34; 35], similar to the advantage in query complexity obtained with quantum circuits in Ref. [29]. These results also raise the question of whether it is possible to find a separation in exact query complexity. Along the way, we also showed some important results limiting the power of causal indefiniteness, generalising the polynomial method, and proving the equivalence of FO-supermaps and QC-supermaps on certain problems, including query complexity. This raises natural questions about the power of specific classes of supermaps beyond QC-supermaps, such as that of purifiable processes [36]. With QC-supermaps being the most general supermaps we currently know how to implement, it will be interesting to study further settings. For instance, can advantages over quantum circuits be obtained when given multiple different oracles, and can our results be used to obtain advantages in quantum communication complexity in more standard settings than those studied with causal indefiniteness previously [37]? ###### Acknowledgements. The authors acknowledge funding from the French National Research Agency projects ANR-15-IDEX-02 and ANR-22-CE47-0012, and the Plan France 2030 projects ANR-22-CMAS-0001 and ANR-22-PETQ-0007. For the purpose of open access, the authors have applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
2308.00613
Accelerating Black Holes in $2+1$ dimensions: Holography revisited
This paper studies the holographic description of $2+1-$dimensional accelerating black holes. We start by using an ADM decomposition of the coordinates suitable to identify boundary data. As a consequence, the holographic CFT lies in a fixed curved background which is described by the holographic stress tensor of a perfect fluid. We compute the Euclidean action ensuring that the variational principle is satisfied in the presence of the domain wall. This requires including the Gibbons--Hawking--York term associated with internal boundaries on top of the standard renormalised AdS$_{3}$ action. Finally, we compute the entanglement entropy by firstly mapping the solution to the Rindler--AdS spacetime in which the Ryu--Takayanagi surface is easily identifiable. We found that as the acceleration increases the accessible region of the conformal boundary decreases and also the entanglement entropy, indicating a loss of information in the dual theory due to acceleration.
Gabriel Arenas-Henriquez, Adolfo Cisterna, Felipe Diaz, Ruth Gregory
2023-08-01T15:42:27Z
http://arxiv.org/abs/2308.00613v2
# Accelerating Black Holes in \(2+1\) dimensions: Holography revisited ###### Abstract This paper studies the holographic description of \((2+1)\)-dimensional accelerating black holes. We start by using an ADM decomposition of the coordinates suitable to identify boundary data. As a consequence, the holographic CFT lies in a fixed curved background which is described by the holographic stress tensor of a perfect fluid. We compute the Euclidean action ensuring that the variational principle is satisfied in the presence of the domain wall. This requires including the Gibbons-Hawking-York term associated with internal boundaries on top of the standard renormalised AdS\({}_{3}\) action. Finally, we compute the entanglement entropy by firstly mapping the solution to the Rindler-AdS spacetime in which the Ryu-Takayanagi surface is easily identifiable. We find that, as the acceleration increases, the accessible region of the conformal boundary decreases, and so does the entanglement entropy, indicating a loss of information in the dual theory due to acceleration. ## I Introduction The C-metric, originally found by Levi-Civita [1] and subsequently by Weyl [2], was first analysed physically by Kinnersley and Walker [3], and Bonnor [4]. It is understood as a pair of causally disconnected black holes that accelerate due to the presence of topological defects, specifically cosmic strings that pull (or struts that push) the black holes away from each other. The spacetime represents a one-parameter extension of the Schwarzschild black hole that is subjected to conical defects. The C-metric has been studied extensively [5; 6; 7; 8; 9; 10; 11], not only in General Relativity (GR) but also in Einstein-dilaton-Maxwell [12], braneworld scenarios [13] and in the context of quantum black holes [14; 15; 16; 17]. Moreover, supersymmetric extensions were recently constructed in \(D=4\) gauged supergravity [18; 19; 20; 21]. 
These solutions have been uplifted using a Sasaki-Einstein manifold \(\mathrm{SE}_{7}\) to supergravity in \(D=11\), resulting in a smooth geometry with properly quantised fluxes. The causal structure of the C-metric is fairly well understood; however, its asymptotic structure presents challenges towards a semi-classical analysis of the spacetime. Recently, there has been significant progress in elucidating the thermodynamic behaviour of accelerating black holes [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. A particularly fruitful approach has been one in which the tensions of the cosmic strings causing the acceleration are treated as thermodynamic variables, which serves as a key ingredient in achieving full cohomogeneity in the first law [26]. Another challenging, but less explored, facet of accelerating black holes is the study of acceleration from a holographic point of view. The holographic dual of an accelerating black hole is not yet fully understood, and it remains an active area of research. Some proposals suggest that the dual theory may correspond to a strongly correlated system living in a black hole background [25; 34]. A significant step towards a formal holographic description has been achieved with the discovery of supersymmetric accelerating black holes and their embedding in supergravity and M-theory [35; 36; 37; 19], providing a promising avenue for studying these solutions by means of the AdS/CFT correspondence; this approach also seems to suggest the existence of higher-dimensional accelerating solutions, which from a classical geometric perspective have not yet been discovered. In this direction, an instructive approach is to consider a simple toy model. In this regard, three-dimensional gravity seems to be the perfect candidate to test the boundary properties of accelerating black holes, as the features of two-dimensional field theories are well understood. 
The aim of this paper is to study boundary aspects of the accelerating BTZ black holes, building on the previous investigations of the properties of the space of solutions [38; 39; 40]; see [41] for the charged case. Our paper is organised as follows: In Section II we concisely review the three-dimensional C-metric spacetime and its various classes of solutions. Among the solutions, we focus on the case of an _accelerating_ BTZ black hole that is pushed by a strut (a negative-tension co-dimension one topological defect), as this case exhibits more similarities with the four-dimensional counterpart than the BTZ black hole pulled by a wall (a positive-tension co-dimension one defect). Note that these solutions include both slow and rapid phases of acceleration, i.e. solutions both without and with, respectively, an acceleration horizon in addition to the black hole horizon. Holographically, it is simpler to focus on the non-rapidly accelerating solutions so that thermodynamic quantities are uniquely defined. Next, in Section III, we describe the boundary of the spacetime by employing a radial Arnowitt-Deser-Misner (ADM) foliation. The new "holographic" coordinate is aligned with the conformal boundary, such that the boundary metric is easily identifiable. This makes the construction of the stress tensor straightforward. We rewrite the stress tensor using the fluid/gravity correspondence by identifying the pressure and energy density of the dual theory, which lies in a positively curved background. We compute the total energy making use of the holographic stress tensor and analyse the effect of the acceleration. Section IV is devoted to computing the Euclidean action, showing that the standard renormalised action for AdS\({}_{3}\) contains an additional divergence that originates from the domain wall and that extends from the black hole to the boundary. 
Nevertheless, this divergence is controlled by the inclusion of boundary terms associated with the internal boundaries of the spacetime. The total Euclidean action satisfies the quantum statistical relation upon proper identification of the contribution from the domain wall. Section V shows the computation of the holographic entanglement entropy by utilising the relationship between these solutions and Rindler-AdS, finding that the total entanglement decreases with acceleration. Finally, we conclude in Section VI with a comprehensive summary and further issues that need to be addressed in the future. Complementary materials are provided in Appendices A and B regarding the explicit details of the FG expansion and of the so-called I\({}_{\rm C}\) class of accelerating BTZ solutions. ## II C-metric in 2+1 dimensions: accelerating black holes We start by describing the three-dimensional C-metric spacetimes presented in [38; 39; 40]. A direct truncation of the four-dimensional C-metric allows us to write, in prolate coordinates, the following metric ansatz \[ds^{2} = \frac{1}{\Omega^{2}}\Big{[}-P(y)d\tau^{2}+\frac{dy^{2}}{P(y)}+ \frac{dx^{2}}{Q(x)}\Big{]}\, \tag{1}\] \[\Omega = A(x-y)\, \tag{2}\] where \(A\) stands for an acceleration parameter. Although these coordinates are not very intuitive, we will later identify \(y\) as a radial coordinate and \(x\) as an angular coordinate upon a suitable identification of their ranges. The metric polynomials are easily found from the trace of the field equations [42], which yields two cubic polynomials \(P(y)\) and \(Q(x)\) in the corresponding coordinates, all of whose coefficients are, in principle, arbitrary constants. Imposing the polynomials onto the field equations implies precise relations between the polynomial coefficients. In addition, making use of the symmetries of the line element (1), it is proven that the remaining arbitrariness of the coefficients represents removable gauge redundancies [40]. 
In this manner, only sign differences between the polynomial coefficients remain relevant, giving rise to three families of accelerating spacetimes, see Table 1. The domain of the \(x-\)coordinate is chosen such that the metric preserves its signature. Generically, Class I represents the geometry of accelerating particle-like solutions, although a particular case, dubbed Class I\({}_{\rm C}\), represents an accelerating black hole solution parametrically disconnected from the standard BTZ geometry. As expected from the three-dimensional AdS spacetime, it is possible to find "naked singularities" (i.e. conical solutions corresponding to a "particle") in the energy range \(-\frac{\pi}{8}\leq M\leq 0\). Due to acceleration, a Rindler horizon can be formed. For our purposes, besides some computations performed in Appendix B regarding the Class I\({}_{\rm C}\), this paper will be devoted to studying the solutions contained in Class II [43]. This class is regarded as a one-parameter extension of the standard BTZ black hole [44; 45] and thus we shall denote it as the accelerating BTZ black hole [38; 40]. Taking the parameter \(A\to 0\), we recover the standard geometry of the one-parameter family of three-dimensional black holes. This is key when comparing the black hole solutions of Class II and Class I\({}_{\rm C}\). The latter exists only for a limited range of parameters where the acceleration and mass of the black hole are bounded, and the BTZ geometry is not included in this range, although it has a compact horizon. It is worth noting that both Class I and Class II spacetimes possess a well-defined flat limit \[\lim_{\ell\rightarrow\infty}ds^{2}=\frac{1}{\Omega^{2}}\left(-P_{\infty}(y)d\tau^{2}+\frac{dy^{2}}{P_{\infty}(y)}+\frac{dx^{2}}{Q(x)}\right)\, \tag{3}\] where \(P_{\infty}(y)=\mp(1-y^{2})\), with the minus sign corresponding to Class I and the plus sign to Class II. Here \(\Omega\) denotes the acceleration conformal factor. 
These solutions are interpreted as accelerating particles moving on a three-dimensional flat background [46; 47]. In the next subsection, we summarise the main details behind the construction and interpretation of the solutions contained in Class II. As we shall shortly observe, to construct the solutions it is necessary to introduce a domain wall in the spacetime and to use Israel junction conditions to identify the tension (positive or negative) of the wall, which is responsible for the "acceleration" [40]. The wall extends from the horizon to the boundary, therefore affecting the definition of holographic quantities as we will explicitly demonstrate. \begin{table} \begin{tabular}{|c||c|c|c|} \hline Class & \(Q(x)\) & \(P(y)\) & Maximal range of \(x\) \\ \hline \hline I & \(1-x^{2}\) & \(\frac{1}{A^{2}\ell^{2}}+(y^{2}-1)\) & \(|x|<1\) \\ II & \(x^{2}-1\) & \(\frac{1}{A^{2}\ell^{2}}+(1-y^{2})\) & \(x>1\) or \(x<-1\) \\ III & \(1+x^{2}\) & \(\frac{1}{A^{2}\ell^{2}}-(1+y^{2})\) & \(\mathbb{R}\) \\ \hline \end{tabular} \end{table} Table 1: Three different classes of solutions with their maximal range of the transverse coordinate. Inserting a domain wall.In dimension four a straightforward inspection of the axes of symmetry of the C-metric line element reveals the existence of conical singularities. It is the difference in deficit angle of the conical singularity at both the north and south poles that gives rise to the acceleration. This conical singularity can be either an angular deficit or excess, but the singularity can be removed along an axis by choosing the periodicity of the azimuthal coordinate. This transfers the full defect to one polar axis, usually an angular deficit along the south pole. Physically, this is then interpreted as a cosmic string that extends all the way from the horizon to conformal infinity, and that pulls the black hole producing its acceleration [48; 49; 50; 51]. In \((2+1)-\)dimensions the situation is remarkably different. 
The first difference relies on the fact that with one dimension less the topological defect causing acceleration, which is linear in nature, will now have co-dimension one - i.e. will be a domain wall rather than a cosmic string. The second difference relates to the nature of the angular coordinate. In three dimensions the metric functions depend on an azimuthal angle (not polar as in the four-dimensional case) and therefore the domain of this coordinate also behaves differently. It was shown in [40] that in order to have a compact horizon the \(x-\)coordinate needs to be properly identified, which is precisely determined by the inclusion of the domain wall that is responsible for the black hole acceleration. To construct the accelerating Class II BTZ geometry the following procedure is taken: i) First, we define a finite domain for the \(x-\)coordinate, \([x_{0},x_{\rm max}]\), where \(x_{0}\) is either greater than 1 or smaller than \(-1\), see Table 1. The value of \(x_{\rm max}\) is constrained according to the number of Killing horizons we allow our geometry to contain. Generically, \(P(y)\) provides two Killing horizons located at \[y_{A}=\frac{\sqrt{1+A^{2}\ell^{2}}}{A\ell},\ \ \ y_{h}=-\frac{\sqrt{1+A^{2}\ell^{2}}}{A\ell}. \tag{4}\] Here, \(y_{A}\) and \(y_{h}\) represent the acceleration and black hole horizons respectively. Notice that \(y_{A}>1\). The position of \(x_{\rm max}\) then determines the number of horizons according to whether \(x_{\rm max}<y_{A}\), in which case only the black hole horizon is present, or \(x_{\rm max}>y_{A}\), in which case both horizons are present. ii) After specifying the interval \([x_{0},x_{\rm max}]\), we proceed with the construction of a compact horizon by identifying two copies of the spacetime along two surfaces of constant \(x\), mirroring along \(x_{0}=1\) and \(x_{\rm max}\) (we take \(x_{0}=1\) so that the "wall" along the mirrored surface has zero tension). 
This is carried out by including a domain wall at \(x_{\rm max}\), whose induced line element reads \[ds_{\rm DW}^{2} = \gamma_{MN}dx^{M}dx^{N} \tag{5}\] \[= \frac{1}{A^{2}(x_{\rm max}-y)^{2}}\left(-P(y)d\tau^{2}+\frac{dy^{2}}{P(y)}\right)\,\] where \(N,M=(\tau,y)\) are the domain wall coordinates. The wall is sourced by a localised energy-momentum tensor; integrating the Israel junction conditions across the thin wall configuration provides \[4\pi G\int_{-}^{+}T_{MN}=\left[\mathcal{K}_{MN}\right]\big{|}_{-}^{+}-\gamma_{MN}\left[\mathcal{K}\right]\big{|}_{-}^{+}=4\pi G\sigma\gamma_{MN}\, \tag{6}\] where \(T_{MN}\) is the energy stress tensor of the domain wall. Here \(G\) is Newton's constant, the brackets \([\mathcal{K}]|_{-}^{+}\) correspond to the difference of the extrinsic curvature along each side of the domain wall, and \(\mathcal{K}_{MN}:=\frac{1}{2}\mathcal{L}_{n}\gamma_{MN}\) is the extrinsic curvature of the hypersurface at \(x=x_{\rm max}\), which is given by the covariant derivative of the outward-pointing normal \[n_{x}=\frac{1}{A(x-y)\sqrt{Q}}\frac{\partial}{\partial x}\bigg{|}_{x_{\rm max}}\, \tag{7}\] yielding \[\sigma=-\frac{1}{4\pi G}\left[\mathcal{K}\right]|_{-}^{+}=\pm\frac{A}{4\pi G}\sqrt{Q(x_{\rm max})}. \tag{8}\] With these steps at hand, the construction of the Class II solutions is complete. One sector of solutions is constructed from \(x>1\), while the other follows \(x<-1\). In both cases \(x-y>0\). For the \(x>1\) case the domain wall has negative tension, contrary to its \(x<-1\) cousin, for which the domain wall acquires a positive tension. These geometries are dubbed the accelerating BTZ black hole pushed by a strut and the accelerating BTZ black hole pulled by a wall, respectively. In this paper, we will mostly focus on the spacetime described by \(x>1\). 
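Before moving on, the Class II data of Table 1 can be cross-checked symbolically: the metric (1) with \(Q(x)=x^{2}-1\) and \(P(y)=\frac{1}{A^{2}\ell^{2}}+1-y^{2}\) should solve the three-dimensional Einstein equations with \(\Lambda=-1/\ell^{2}\) (equivalently, \(R_{\mu\nu}=-\frac{2}{\ell^{2}}g_{\mu\nu}\)), and the roots of \(P(y)\) should reproduce the horizon positions (4). A sketch with Python's sympy (the symbol and variable names are ours, not the paper's):

```python
import sympy as sp

A, l = sp.symbols('A ell', positive=True)
tau, y, x = sp.symbols('tau y x', real=True)
coords = [tau, y, x]
n = 3

# Class II data from Table 1 and the metric ansatz (1)-(2)
P = 1/(A**2*l**2) + 1 - y**2
Q = x**2 - 1
Omega = A*(x - y)
g = sp.diag(-P/Omega**2, 1/(P*Omega**2), 1/(Q*Omega**2))
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                                       + sp.diff(g[d, c], coords[b])
                                       - sp.diff(g[b, c], coords[d]))
                           for d in range(n))/2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
Ric = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        expr = 0
        for a in range(n):
            expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
            for d in range(n):
                expr += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
        Ric[b, c] = sp.simplify(expr)

# in 3D, Einstein's equations with Lambda = -1/ell^2 read R_{mu nu} = -(2/ell^2) g_{mu nu}
assert all(sp.simplify(Ric[i, j] + 2/l**2*g[i, j]) == 0 for i in range(n) for j in range(n))

# roots of P(y) reproduce the horizon positions of Eq. (4), with y_A > 1
yA = sp.sqrt(1 + A**2*l**2)/(A*l)
assert sp.simplify(P.subs(y, yA)) == 0 and sp.simplify(P.subs(y, -yA)) == 0
assert sp.simplify(yA**2 - 1 - 1/(A**2*l**2)) == 0
```

The same loop verifies Class I upon flipping the signs of \(P\) and \(Q\) as in Table 1.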
Within these solutions, we note that if \(x_{\rm max}>y_{A}\), the acceleration horizon is included in our spacetime in the region \(x>y_{A}\); this is the situation referred to as _rapid acceleration_. This phenomenon does not occur for the accelerating BTZ pulled by a wall, as the defect on the horizon in this case has a positive energy density. Since the wall pulls the horizon far from the conformal boundary, no rapid accelerating phase takes place. The causal structure of both solutions and their construction by gluing is depicted in Figure 1. Accelerating BTZ black hole pushed by a strut.We now focus on the accelerating BTZ black hole pushed by a strut. While the prolate coordinates \((x,y)\) are useful for describing the construction and causal structure of the solutions, it is convenient to move to the more intuitive polar coordinates \((r,\phi)\) to describe the holographic properties of the spacetime. Just as in the four-dimensional C-metric, the coordinate \(y\) is easily identified with the radial polar direction \(r\). Although \(x\) is in principle non-compact, due to the procedure to introduce the domain wall and have a proper black hole interpretation it becomes compact and can be related with an angle. Thus, we introduce a mass parameter and new coordinates via [52] \[r=-\frac{1}{\mathcal{A}y}\,\ \ \ x=\cosh(m\phi)\,\ \ \ t=\frac{m^{2}\mathcal{A}}{\alpha}\tau\, \tag{9}\] Figure 1: Here, we provide the schematic construction of the three different kinds of black hole solutions contained in Class II. They are ordered as follows: Row I shows the construction of the accelerating BTZ black hole pushed by a strut, for which the \(x-\)coordinate satisfies \(x>1\) and where both event and accelerating horizons are part of the spacetime causal structure. The second row expresses the construction of the same previous solution, but in the case in which the accelerating horizon is absent. 
Finally, the third row shows the construction of the accelerating BTZ black hole pulled by a domain wall, solution for which \(x<-1\). All diagrams correspond to a constant time coordinate. The first picture of each row represents the preliminary causal structure, in prolate and polar coordinates, of the given solutions. This allows us to understand the range of the coordinates and the position of the corresponding horizons. The second pictures correspond with the mirroring of the first ones, while the third represents the final form of the spacetime once the gluing has been performed. where \(\mathcal{A}=A/m\), and \(\alpha\) is a constant that will be used to identify the proper time of an asymptotic observer [26]. The resulting metric reads \[ds^{2}=\frac{1}{\Omega^{2}}\left(-f(r)\frac{d\tau^{2}}{\alpha^{2}}+\frac{dr^{2} }{f(r)}+r^{2}d\phi^{2}\right)\, \tag{10}\] where \[f(r) =\frac{r^{2}}{\ell^{2}}-m^{2}(1-\mathcal{A}^{2}r^{2})\,\] \[\Omega =1+\mathcal{A}r\cosh(m\phi). \tag{11}\] The tension of the wall is regulated by the parameter \(m=\mathrm{arcosh}(x_{\mathrm{max}})/\pi\) that is chosen to ensure that the coordinate \(\phi\) lies in the interval \((-\pi,\pi)\). This is consistent with the standard interpretation of mass of three-dimensional black holes [45]. Nevertheless, the non-trivial extrinsic curvature over the \(x=\mathrm{const}\) surface modifies the original identifications used to construct the BTZ black hole. Moving forward, our main focus will be on describing this specific solution, although all of our results can be applied to the positive tension scenario with ease. 
Considering now the Euclidean version of the solution, the horizon radius, Hawking temperature, and entropy are easily read off as [40] \[r_{h} =\frac{m\ell}{\sqrt{1+m^{2}\mathcal{A}^{2}\ell^{2}}}\, \tag{12}\] \[T =\frac{|f^{\prime}(r_{h})|}{4\pi\alpha}=\frac{m\sqrt{1+m^{2}\mathcal{A}^{2}\ell^{2}}}{2\pi\ell\alpha}\,\] \[S =\frac{\text{Area}}{4G}=\frac{\ell}{G}\,\mathrm{arctanh}\left[\left(\sqrt{1+m^{2}\mathcal{A}^{2}\ell^{2}}-m\mathcal{A}\ell\right)\tanh\left(\frac{m\pi}{2}\right)\right]\,\] where Area refers to the area of the black hole horizon. The conformal boundary is now given by \(r_{\rm cb}=-(\mathcal{A}\cosh(m\phi))^{-1}\). The wall lies along \(\phi=\pm\pi\), with tension \[\sigma=-\frac{m\mathcal{A}\sinh(m\pi)}{4\pi G}. \tag{13}\] As already mentioned, there is a rapid phase when the wall position is \(x_{\rm max}>\sqrt{1+(m\mathcal{A}\ell)^{-2}}\), implying the appearance of a non-compact accelerating horizon [40]. In order to avoid this feature in the holographic analysis of the solution, one restricts \(m\) to be positive and to satisfy the condition \[m\mathcal{A}\ell\sinh(m\pi)<1\, \tag{14}\] which is referred to as the condition of slow acceleration. Accelerating black holes pulled by a domain wall (positive tension) can be found by changing the sign of the acceleration \(\mathcal{A}\rightarrow-\mathcal{A}\). In that case, there is no rapid accelerating phase, and therefore the only constraint for the acceleration is given by requiring a positive radial coordinate. See Fig. 2. It is also possible to recover the accelerating particle solutions of Class I by taking \(m^{2}\rightarrow-m^{2}\), noting that the hyperbolic cosine becomes a cosine in the conformal factor \(\Omega\) (the Class I\({}_{\rm C}\) solutions require a slightly more subtle transformation). 
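The quantities in (12) and (13) follow directly from the metric function (11) and the junction condition (8), and can be cross-checked with a few lines of sympy (here `Acal` stands for \(\mathcal{A}\); all names are ours):

```python
import sympy as sp

m, l, Acal, alpha, G, r = sp.symbols('m ell Acal alpha G r', positive=True)

# metric function of the accelerating BTZ black hole, Eq. (11)
f = r**2/l**2 - m**2*(1 - Acal**2*r**2)

# horizon radius, Eq. (12): f(r_h) = 0
r_h = m*l/sp.sqrt(1 + m**2*Acal**2*l**2)
assert sp.simplify(f.subs(r, r_h)) == 0

# Hawking temperature, Eq. (12): T = f'(r_h)/(4 pi alpha)
T = sp.diff(f, r).subs(r, r_h)/(4*sp.pi*alpha)
T_paper = m*sp.sqrt(1 + m**2*Acal**2*l**2)/(2*sp.pi*l*alpha)
assert sp.simplify(T - T_paper) == 0

# tension: Eq. (8) with A = m*Acal, Q(x) = x^2 - 1 and x_max = cosh(m*pi)
# reduces to Eq. (13) via cosh^2 - 1 = sinh^2
x_max = sp.cosh(m*sp.pi)
assert sp.simplify(x_max**2 - 1 - sp.sinh(m*sp.pi)**2) == 0
```

The last identity shows that \(\sqrt{Q(x_{\rm max})}=\sinh(m\pi)\), so (8) reproduces (13) for the strut.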
Finally, it is worth mentioning that the solution can be mapped to a three-dimensional Rindler geometry \[ds^{2}=-\left(\frac{R^{2}}{\ell^{2}}-1\right)dT^{2}+\frac{dR^{2}}{\frac{R^{2} }{\ell^{2}}-1}+R^{2}d\Theta^{2}\, \tag{15}\] with \(R\in(\ell,\infty)\), by means of \[\frac{R^{2}}{\ell^{2}}-1=\frac{f(r)}{\alpha^{2}m^{2}\Omega^{2}}\,\quad R\sinh \Theta=\frac{r\sinh(m\phi)}{m\Omega}\, \tag{16}\] provided that \[T=m\tau\,\quad\alpha^{2}=1+m^{2}\mathcal{A}^{2}\ell^{2}. \tag{17}\] This identifies the Rindler time \(T\) and the value of \(\alpha\). In the next section, we will introduce an alternative coordinate system that simplifies the identification of the boundary structure, and use it to characterise the holographic stress tensor. ## III Holographic Stress Tensor ADM-like coordinates.A crucial step towards the identification of the thermodynamic quantities and holographic data is to have at hand a robust description of the spacetime boundary. In this regard, the asymptotic structure of the C-metric poses a challenge, as the conformal boundary is not given by a constant value of the radial coordinate. The standard recipe, when treating asymptotically AdS spacetimes, is to apply a Fefferman-Graham (FG) expansion near the boundary, identifying the holographic coordinate. In the case of the C-metric, however, this process is not trivial but rather complicated. This was first noticed for the four-dimensional accelerating black hole in [26], where an asymptotic expansion for both the radial and angular coordinates was derived in terms of the FG holographic coordinate that is perpendicular to the boundary (see Appendix A for the procedure in the three-dimensional case). Solving Einstein's equations order by order in the expansion reconstructs the spacetime and gives a boundary metric \(g_{0}\), which is defined up to a conformal representative \(\omega\). 
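As an aside, the identification (17) provides a simple consistency check of the Rindler map (15)-(16): with \(\alpha^{2}=1+m^{2}\mathcal{A}^{2}\ell^{2}\) the Hawking temperature (12) collapses to \(T=m/(2\pi\ell)\), which is exactly \(m\) times the temperature of the Rindler-AdS\({}_{3}\) horizon at \(R=\ell\), as required by the time rescaling \(T=m\tau\). In sympy (symbol names are ours):

```python
import sympy as sp

m, l, Acal = sp.symbols('m ell Acal', positive=True)
R = sp.symbols('R', positive=True)

alpha = sp.sqrt(1 + m**2*Acal**2*l**2)        # Eq. (17)

# Hawking temperature of the accelerating BTZ, Eq. (12), with alpha fixed as above
T_bh = m*sp.sqrt(1 + m**2*Acal**2*l**2)/(2*sp.pi*l*alpha)
assert sp.simplify(T_bh - m/(2*sp.pi*l)) == 0

# temperature of Rindler-AdS3, Eq. (15): surface gravity kappa = f'(R)/2 at R = ell
fR = R**2/l**2 - 1
kappa = sp.diff(fR, R).subs(R, l)/2
T_rindler = kappa/(2*sp.pi)                   # w.r.t. the Rindler time T

# the time rescaling T = m*tau multiplies the temperature by m
assert sp.simplify(T_bh - m*T_rindler) == 0
```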
In four dimensions, the Euclidean action and conserved quantities (such as the mass) are independent of the conformal representative of the boundary metric. However, in three dimensions, the situation is different. The conformal invariance is broken at the quantum level, and the dual two-dimensional CFT has a conformal anomaly. The anomaly itself is independent of the conformal factor and reproduces the value of the Brown-Henneaux central charge [53] for any representative of the conformal class [40]. Nevertheless, the choice of \(\omega\) is crucial to identify the mass and the other thermodynamic quantities since the dual stress tensor, which is employed to obtain the holographic mass, is not a primary operator and transforms non-trivially under conformal transformations. For a discussion on how the energy, action, and other holographic quantities depend on the conformal representatives, see [54]. An alternative way of obtaining the boundary data is to follow [20; 34] and define a new coordinate \(z\) according to \[\frac{1}{r}=z-\mathcal{A}\cosh(m\phi). \tag{18}\] In this new coordinate frame, the location of the conformal boundary is at \(z_{\rm cb}=0\). We introduce an infrared cutoff at \(z=\delta\), where \(\delta\ll 1\). Following the standard prescription [55], we first compute regularised holographic quantities close enough to the boundary at \(z=\delta\), and then take the limit \(\delta\to 0\). In this coordinate system, the horizon is described by a function \(z_{h}=z_{h}(z,\phi)\). The accelerating horizon is plotted in Figure 2, and the region covered by the coordinates when both horizons are present is shown in Figure 3. The slow acceleration condition (14) ensures that the horizon does not touch the conformal boundary \(z_{\rm cb}\). 
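A virtue of the coordinate (18) is that the conformal factor factorises: substituting \(r=1/(z-\mathcal{A}\cosh(m\phi))\) into (11) gives \(\Omega=rz\), so the surface \(\Omega=0\) is mapped exactly onto \(z_{\rm cb}=0\). A one-line sympy check (names are ours; `Acal` denotes \(\mathcal{A}\)):

```python
import sympy as sp

z, m, Acal, phi = sp.symbols('z m Acal phi', positive=True)

# coordinate change of Eq. (18): 1/r = z - Acal*cosh(m*phi)
r_of_z = 1/(z - Acal*sp.cosh(m*phi))

# conformal factor of Eq. (11): Omega = 1 + Acal*r*cosh(m*phi)
Omega = 1 + Acal*r_of_z*sp.cosh(m*phi)

# Omega = r*z, so the conformal boundary Omega = 0 sits exactly at z = 0
assert sp.simplify(Omega - r_of_z*z) == 0
```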
This coordinate transformation sets the metric into an ADM-like decomposition \[ds^{2}=N^{2}dz^{2}+h_{ij}(dx^{i}+N^{i}dz)(dx^{j}+N^{j}dz)\, \tag{19}\] where \(h_{ij}\) is the induced metric in the \(z=\rm const\) hypersurface, \(N\) the lapse function and \(N^{i}\) represents the shift vector. Notice that the induced metric \(h_{ij}\) depends on \(z\) and \(\phi\). We can identify the background in which the holographic CFT lies to be \[ds^{2}_{(0)} = g_{(0)ij}dx^{i}dx^{j}=\lim_{\delta\to 0}\delta^{2}h_{ij}dx^{i}dx^{j} \tag{20}\] \[= G(\xi)\left(-d\tilde{\tau}^{2}+d\xi^{2}\right)\,\] where the corresponding coordinates are \[\tau=\alpha\ell\tilde{\tau}\,\quad\xi=\frac{\text{arctanh}\left(\alpha\tanh(m\phi)\right)}{m\alpha}\, \tag{21}\] and the conformal factor is \[G(\xi)=\frac{2\alpha^{2}}{2+\alpha^{2}\left(1+\cosh(2m\xi\alpha)\right)}. \tag{22}\] The extrinsic curvature associated with the boundary metric \(h_{ij}\) reads \[K_{ij}:=\frac{1}{2}\mathcal{L}_{n}h_{ij}=-\frac{1}{2N}\left(\partial_{z}h_{ij}-\nabla_{i}N_{j}-\nabla_{j}N_{i}\right)\, \tag{23}\] with \(\nabla_{i}\) the covariant derivative with respect to \(h_{ij}\), and where the outward-pointing normal to the \(z=\rm const\) hypersurface is \[n=\frac{1}{N}\left(N^{i}\partial_{i}-\partial_{z}\right). \tag{24}\] Finally, its trace is taken with respect to the boundary metric, \(K=h^{ij}K_{ij}\). An important cross-check of the behaviour of the solution consists of analysing the leading order term of the extrinsic curvature near the boundary \[K^{i}_{\ j}\sim\left(\frac{1}{\ell}+\mathcal{O}\left(z\right)\right)\delta^{i}_{\ j}. \tag{25}\] Figure 2: Accelerated horizon radius as a function of the azimuthal coordinate \(\phi\), for different values of the parameter \(\mathcal{A}\) that characterises the acceleration of the black hole. The solid purple line corresponds to the case \(\mathcal{A}=0\), where both horizons coincide, and therefore remain at a constant value of \(z\). 
The orange dotted curve corresponds to \(\mathcal{A}=\left(2\ell m\sinh m\pi\right)^{-1}\), an intermediate value where the horizon remains smooth throughout. The green dashed curve corresponds to the critical value \(\mathcal{A}=\left(\ell m\sinh m\pi\right)^{-1}\), where the horizon touches the conformal boundary exactly at the endpoints \(\phi=\pm\pi\). The blue dot-dashed curve corresponds to \(\mathcal{A}=3(\ell m\sinh m\pi)^{-1}\), which exceeds the critical bound; the red dots indicate the points where the horizon meets the conformal boundary. Figure 3: Domain of the \(z\) coordinate in the presence of the two horizons. The upper thick curve corresponds to the black hole horizon and the bottom one corresponds to the accelerating horizon touching the conformal boundary at the two dark dots. The shaded region corresponds to the region where spacetime is defined. The asymptotic behaviour described above represents the minimum requirement in which the gravitational action has a well-posed variational principle, enabling the definition of holographic conserved quantities in AdS\({}_{3}\) [56]. It should be noticed that the coordinate \(z\) defined in (18) matches the FG coordinate only at the leading order, which is sufficient for constructing the holographic stress tensor and the boundary conformal classes \(g_{(0)}\). However, it is important to highlight that the holographic free energy is not guaranteed to match in this coordinate frame due to the discrepancy between the two gauges. In the presence of an odd-dimensional bulk, the conformal freedom in the FG frame transforms the free energy non-trivially and therefore, the free energy depends on the choice of conformal representative of the boundary theory [54]. We explore this issue in Section IV. Boundary stress tensor and holographic energy.The holographic energy-momentum tensor is given by the variation of the regularised action with respect to the first term of the FG expansion \(g_{(0)}\). 
This has been written in terms of quantities depending on the induced metric [57], which in dimension three yields \[\langle T_{ij}\rangle= \ \lim_{z\to 0}-\frac{1}{8\pi G}\left(K_{ij}-Kh_{ij}+\frac{1}{ \ell}h_{ij}\right). \tag{26}\] Using (10) we obtain \[\langle T^{\tau}_{\tau}\rangle = -\frac{m^{2}\ell}{32\pi G}\left(2+m^{2}\mathcal{A}^{2}\ell^{2}-3 m^{2}\mathcal{A}^{2}\ell^{2}\cosh(2m\phi)\right)\,\] \[\langle T^{\phi}_{\phi}\rangle = \frac{m^{2}\ell}{16\pi G}\left(1+m^{2}\mathcal{A}^{2}\ell^{2} \cosh^{2}(m\phi)\right)\, \tag{27}\] which is, indeed, covariantly conserved with respect to the boundary metric \(g_{(0)}\), viz., \(\nabla^{(0)}_{i}T^{ij}=0\). In addition, it should be pointed out that the stress tensor exhibits a non-vanishing trace, indicating the breakdown of Weyl invariance in the quantum theory, and resulting in the emergence of the conformal anomaly \[\langle T^{i}_{\,\,i}\rangle=\frac{c}{24\pi}\mathcal{R}[g_{(0)}]. \tag{28}\] Here, \(c=3\ell/2G\) matches the Brown-Henneaux central charge [53] and \(\mathcal{R}[g_{(0)}]=2m^{4}\mathcal{A}^{2}\ell^{2}\cosh(2m\phi)\) is the curvature of the boundary metric, which is always positive. Additionally, it is worth noting that the stress tensor can be expressed in the form of a perfect fluid [58] \[\langle T_{ij}\rangle=(p+\rho)\,u_{i}u_{j}+pg_{(0)ij}\, \tag{29}\] on a curved background \(g_{(0)}\) and with timelike velocity \(u^{i}\) given by \[u^{i}=\frac{1}{\sqrt{-g_{(0)tt}}}\left(\frac{\partial}{\partial t}\right)^{i} \,\quad u_{i}u_{j}g^{ij}_{(0)}=-1. \tag{30}\] The energy density \(\rho\) and pressure \(p\) read \[\rho = \frac{m^{2}\ell}{32\pi G}\left\{2+m^{2}\mathcal{A}^{2}\ell^{2} \left[1-3\cosh(2m\phi)\right]\right\}\,\] \[p = \frac{m^{2}\ell}{16\pi G}\left[1+m^{2}\mathcal{A}^{2}\ell^{2} \cosh^{2}(m\phi)\right]. \tag{31}\] We observe that the energy density \(\rho\) generates the energy flow of the fluid since \(u_{i}T^{ij}=-\rho u^{j}\). 
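The components (27) can be checked against the anomaly (28) and the fluid variables (31) directly; a short sympy sketch (with `Acal` standing for \(\mathcal{A}\); names are ours):

```python
import sympy as sp

m, l, Acal, G, phi = sp.symbols('m ell Acal G phi', positive=True)
a = m**2*Acal**2*l**2

# stress tensor components, Eq. (27)
Ttt = -m**2*l/(32*sp.pi*G)*(2 + a - 3*a*sp.cosh(2*m*phi))
Tpp = m**2*l/(16*sp.pi*G)*(1 + a*sp.cosh(m*phi)**2)

# trace anomaly, Eq. (28): c = 3*ell/(2G), R[g_(0)] = 2 m^4 Acal^2 ell^2 cosh(2 m phi)
c = 3*l/(2*G)
R0 = 2*m**4*Acal**2*l**2*sp.cosh(2*m*phi)
assert sp.simplify(sp.expand_trig(Ttt + Tpp - c/(24*sp.pi)*R0)) == 0

# fluid variables, Eq. (31): rho = -<T^tau_tau> and p = <T^phi_phi>
rho = m**2*l/(32*sp.pi*G)*(2 + a*(1 - 3*sp.cosh(2*m*phi)))
p = m**2*l/(16*sp.pi*G)*(1 + a*sp.cosh(m*phi)**2)
assert sp.simplify(rho + Ttt) == 0 and sp.simplify(p - Tpp) == 0
```

Note that the anomaly check only uses the hyperbolic double-angle identity \(2\cosh^{2}(m\phi)=1+\cosh(2m\phi)\), and that the trace vanishes identically when \(\mathcal{A}\to 0\), as expected for the flat-boundary BTZ limit.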
It is noteworthy that this is different from the dual of the four-dimensional accelerating black hole, where the stress tensor cannot be expressed in the form of a perfect fluid, and corrections due to the acceleration parameter cause the boundary metric to be non-conformally flat, leading to non-trivial stress tensor components [25; 26]. Finally, we can compute the energy of the theory with respect to \(\partial_{\tau}\) by integrating the energy density of the fluid, thus \[M = \int_{-\pi}^{\pi}d\phi\sqrt{-g_{(0)}}\langle T^{\tau}_{\,\,\tau}\rangle\] \[= \frac{m^{2}\left[2\pi(2+m^{2}\mathcal{A}^{2}\ell^{2})-3m\mathcal{A}^{2}\ell^{2}\sinh(2\pi m)\right]}{32\pi G\alpha}\, \tag{32}\] whose zero-acceleration limit gives \[\lim_{\mathcal{A}\to 0}M=\frac{m^{2}}{8G}. \tag{33}\] This precisely represents the BTZ mass normalised such that the pure vacuum energy corresponds to \(m^{2}\rightarrow-1\). The behaviour of the mass can be seen in Figure 4 and Figure 5. ## IV Euclidean action: counterterms and domain wall Holographic quantities are known to suffer from UV divergences. These are shown to be mapped to IR divergences appearing in the gravitational sector of theories on asymptotically anti-de Sitter spaces. Consequently, defining observables requires having a well-defined renormalised action. This has been achieved in [55; 57; 59; 13] by adding counterterms that depend only on intrinsic quantities of the boundary. Figure 4: Holographic mass of the accelerated BTZ pushed by a strut with respect to \(\mathcal{A}\ell\), with \(m^{2}=0,0.8,1,1.2\) for the dot-dashed purple, dotted orange, dashed green, and solid blue curves, respectively. For AdS\({}_{3}\) gravity, the renormalised Euclidean action consists of the Einstein-Hilbert action supplemented with the Gibbons-Hawking-York (GHY) term and the Balasubramanian-Kraus counterterm, ensuring a well-posed variational principle. 
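As a quick check of the holographic mass above, the BTZ limit (33) follows immediately from the closed-form expression; in sympy (with `Acal` for \(\mathcal{A}\) and \(\alpha\) fixed as in (17)):

```python
import sympy as sp

m, l, Acal, G = sp.symbols('m ell Acal G', positive=True)
alpha = sp.sqrt(1 + m**2*Acal**2*l**2)   # Eq. (17)

# holographic mass of the accelerating BTZ pushed by a strut, Eq. (32)
M = m**2*(2*sp.pi*(2 + m**2*Acal**2*l**2)
          - 3*m*Acal**2*l**2*sp.sinh(2*sp.pi*m))/(32*sp.pi*G*alpha)

# zero-acceleration (BTZ) limit, Eq. (33): M -> m^2/(8G)
assert sp.simplify(M.subs(Acal, 0) - m**2/(8*G)) == 0
```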
However, the identifications performed in Section II in order to construct the accelerating BTZ black hole suggest that we need to identify the contributions coming from the \(x=\) const surface. Following [60; 61; 62], the dynamics of the domain wall can be captured by considering the contributions of the Gibbons-Hawking terms associated with the surface (along each side) and the domain wall action [63], producing the Israel equation (6). Then, the Euclidean action can be separated into two contributions \[I_{\rm E}=I_{\rm ren}+I_{\rm DW}\, \tag{34}\] where \[I_{\rm ren}= - \frac{1}{16\pi G}\int_{\mathcal{M}}d^{3}x\sqrt{g}\left(R-2\Lambda\right) \tag{35}\] \[- \frac{1}{8\pi G}\int_{\partial\mathcal{M}}d^{2}x\sqrt{h}\left(K-\frac{1}{\ell}\right)\,\] is the AdS\({}_{3}\) renormalised action [55; 57; 59], and \[I_{\rm DW}=-\int_{\Sigma}d^{2}y\sqrt{\gamma}\left(\frac{1}{8\pi G}[\mathcal{K}]\big{|}^{+}_{-}+\sigma\right)\, \tag{36}\] are the Gibbons-Hawking terms of the internal boundary and wall tension that give the correct equation of motion for the domain wall (6). Here \(\Lambda=-\ell^{-2}\) is the cosmological constant, \(\mathcal{M}\) corresponds to the bulk geometry restricted to some short IR regulator \(\delta\) while \(\partial\mathcal{M}\) is its boundary which is endowed with a metric \(h_{ij}\) evaluated at \(z=\delta\). For the domain wall contribution we use \[\frac{1}{8\pi G}\int\limits_{\Sigma}d^{2}y\sqrt{\gamma}\left[\mathcal{K}\right]\big{|}^{+}_{-}=-2\int_{\Sigma}d^{2}y\sqrt{\gamma}\ \sigma\, \tag{37}\] where the right-hand side is proportional to the action of the domain wall [63], with \(\sigma\) the tension computed in (8). Thus, the contribution from the renormalised AdS\({}_{3}\) action is \[I_{\rm ren}=\beta M-S-\frac{\beta\mathcal{A}m\sinh(\pi m)}{4\pi\alpha G}\left(\frac{1}{z_{h}}-\frac{1}{\delta}\right)\, \tag{38}\] where \(z_{h}:=r_{h}^{-1}+\mathcal{A}\cosh(m\pi)\). 
On the other hand, the domain wall gives \[I_{\rm DW}=\frac{\beta\mathcal{A}m\sinh(\pi m)}{4\pi\alpha G}\left(\frac{1}{z_{h}}-\frac{1}{\delta}\right)\, \tag{39}\] such that the total Euclidean action (34) yields the standard quantum statistical relation \[I_{\rm E}=I_{\rm ren}+I_{\rm DW}=\beta M-S\, \tag{40}\] where \(M\) is the black hole energy found in (32), and \(\beta\) and \(S\) are the inverse temperature and the entropy found in (12), respectively. Note that if one considers only the Balasubramanian-Kraus counterterm on top of the Einstein-Hilbert and GHY terms, the resulting Euclidean action is divergent and the horizon contribution does not recover the black hole entropy. The domain wall extends from the interior (black hole horizon) to the conformal boundary, contributing a divergent term exactly like the one coming from the AdS\({}_{3}\) renormalised action but with the opposite sign, making the on-shell action well defined and reproducing on shell the quantum statistical relation between the gravitational Euclidean action and the Gibbs thermodynamic free energy. It is clear that this computation differs from the four-dimensional case [20; 25; 26], where there is no explicit mention of the contribution of the cosmic string. In fact, this has been explained in the context of instantons in braneworld scenarios [60]. The dynamics of the cosmic string, or _vortex_, is introduced in the action as a codimension-2 energy density. On-shell, the extrinsic curvature contains an extra delta-function contribution supported on the cosmic string, and therefore the Gibbons-Hawking term cancels out the vortex. As a result, the renormalised Euclidean action in AdS\({}_{4}\) is enough to account for the thermodynamics of the accelerating black hole. Figure 5: Holographic mass of the accelerated BTZ pushed by a strut with respect to \(m^{2}\), with \(\mathcal{A}\ell=0,1,2,3\) for the dot-dashed purple, dotted orange, dashed green, and solid blue curves, respectively. Figure 6: Class II solution with no accelerating horizon. The upper diagonal denotes the conformal boundary \(\partial\mathcal{M}\), and the lateral lines are the two internal boundaries \(\Sigma_{-}\) and \(\Sigma_{+}\) that are identified in order to obtain a compact horizon. The resulting spacetime corresponds to the accelerated BTZ solution with a domain wall extending from the black hole horizon to the deep interior. ## V Entanglement entropy In recent years, holographic entanglement entropy [64; 65] has been shown to be a fundamental ingredient in the understanding of the AdS/CFT correspondence. For example, it has served as a probe for quantum many-body systems, understanding black hole entropy, and the emergence of spacetime, see for instance [66] and references therein. It is a powerful tool that provides valuable information on the dual field theory as it measures the degree of correlation between two subsystems. At the same time, it allows one to understand quantum information holographically using the nature of the bulk spacetime. The celebrated Ryu-Takayanagi (RT) formula states that the holographic entanglement entropy can be obtained by minimising the area of a co-dimension two spatial [67] hypersurface \(\Gamma\) (referred to as the RT surface) whose boundary is anchored at the AdS conformal boundary, namely \[S_{\rm E}=\frac{A_{\Gamma}}{4G}. \tag{41}\] The RT surface divides the boundary into two subsystems \(A\) and \(A^{c}\), and (41) quantifies the correlations between the states in \(A\) and those in \(A^{c}\). In general, the Euler-Lagrange problem is not easy to solve, and the profile of the RT surface is usually not obtained analytically as the existence of conservation laws is not guaranteed. 
Nonetheless, in three-dimensional gravity the theory lacks local degrees of freedom and all solutions are locally AdS, so it is possible to reduce the complexity of the procedure by finding a good coordinate system. Following [68], we first consider a slice bounded by a line of latitude \(\phi_{0}\) and treat \(\phi\) as time in the minimisation problem. To obtain the RT surface for the accelerating BTZ, we use the mapping (16), which allows us to find the extremal surface for the Rindler observer. The minimal surface is then parametrised by \[R_{\rm e}(\Theta)=\ell\left(1-\frac{\cosh^{2}\Theta}{\cosh^{2}\Theta_{0}}\right)^{-\frac{1}{2}}\, \tag{42}\] where \(\Theta_{0}\) is fixed by requiring that the radial coordinate diverges at the endpoints. This is mapped to the boundary condition \(r(\phi_{0})=-(\mathcal{A}\cosh(m\phi_{0}))^{-1}\), such that the surface is anchored to the conformal boundary. Now we can map the surface to the coordinates used in (10); for the sake of notation we define \(\mathcal{B}:=\cosh(m\phi_{0})\) and \(\phi_{0}:=m^{-1}\Theta_{0}\), resulting in \[r_{\rm e}(\phi)=\frac{m\ell\left(\alpha\mathcal{A}m\ell\cosh(m\phi)+\mathcal{B}\sqrt{\mathcal{B}^{2}-\alpha^{2}\sinh^{2}(m\phi)-1}\right)}{\alpha\left(\alpha^{2}\cosh^{2}(m\phi)-\mathcal{B}^{2}\right)}\, \tag{43}\] whose expansion for small acceleration is \[r_{\rm e}(\phi) =\frac{m\ell}{\sqrt{1-\frac{\cosh^{2}(m\phi)}{\cosh^{2}(m\phi_{0})}}}\] \[\quad-\frac{m^{2}\mathcal{A}\ell\cosh(m\phi)}{\cosh^{2}(m\phi_{0})-\cosh^{2}(m\phi)}+\mathcal{O}(\mathcal{A}^{2}). \tag{44}\] Substituting the parametrisation into the area functional, one obtains the value of the minimal area, which is proportional to the holographic entanglement entropy. Despite the simplicity of the last expression, obtaining the area is quite involved. It is divergent at \(\phi=\phi_{0}\), and therefore a short distance cutoff \(\epsilon\) must be introduced. 
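One can verify symbolically that the profile (42) extremises the induced length functional. On a constant-\(T\) slice of (15), the functional \(\int d\Theta\,\sqrt{R^{\prime 2}/(R^{2}/\ell^{2}-1)+R^{2}}\) has no explicit \(\Theta\) dependence, so it admits the first integral \(R^{2}/\sqrt{R^{\prime 2}/(R^{2}/\ell^{2}-1)+R^{2}}=R_{0}\), with turning point \(R_{0}=R_{\rm e}(0)=\ell\coth\Theta_{0}\); the profile (42) should then satisfy the resulting first-order equation \(R^{\prime 2}=(R^{2}/\ell^{2}-1)R^{2}(R^{2}-R_{0}^{2})/R_{0}^{2}\). A sympy sketch (names are ours):

```python
import sympy as sp

Theta, Theta0, l = sp.symbols('Theta Theta0 ell', positive=True)

# candidate RT surface in Rindler-AdS3, Eq. (42)
R = l*(1 - sp.cosh(Theta)**2/sp.cosh(Theta0)**2)**sp.Rational(-1, 2)
Rp = sp.diff(R, Theta)

# first integral of the length functional, with turning point R0 = R(0) = ell*coth(Theta0)
R0 = l*sp.coth(Theta0)
ode = Rp**2 - (R**2/l**2 - 1)*R**2*(R**2 - R0**2)/R0**2

# rewriting the hyperbolic functions in exponentials reduces this to a rational identity
assert sp.simplify(ode.rewrite(sp.exp)) == 0
```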
Then, following [64], we consider the integration from \(\epsilon\) to \(\phi_{0}-\epsilon\) and, expanding again for small acceleration, we get that the entanglement entropy (41) becomes \[S_{\rm E}=\frac{c}{3}\log\left[\frac{\beta}{\pi\epsilon}\sinh\left(\frac{\pi L}{\beta}\right)\right]-2\mathcal{A}\ell^{2}\left(\frac{2\pi\ell}{\beta\epsilon}\sinh\left(\frac{\pi L}{\beta}\right)\right)^{\frac{1}{2}}\tanh\left(\frac{\pi L}{2\beta}\right)-\frac{\mathcal{A}^{2}\ell^{4}\pi}{\beta\epsilon}\sinh\left(\frac{\pi L}{\beta}\right)-\ldots\, \tag{45}\] where we have rewritten \(L:=4\ell\phi_{0}\) to relate it to the length of the entangling region. Note that, as the temperature is independent of the acceleration, when mapping to Rindler AdS the leading order corresponds to the usual result for the BTZ black hole [64]. However, the next-to-leading order gives subleading divergences which decrease the amount of entanglement as the acceleration grows. In fact, from the perspective of the black hole solution, as the acceleration - or, in other words, the conical deficit - increases, the size of the accessible boundary region decreases, as can be seen from Figure 1. Therefore, we can interpret the subleading behaviour of the entanglement as an indication of some information loss due to acceleration. ## VI Discussion In this work, we have described the boundary spacetime associated with accelerating black holes in 2+1 dimensions. Our starting point has been a concise and pedagogical review of the construction of the three-dimensional accelerating geometries presented in [38; 39; 40]. We have analysed the origin of the acceleration in three dimensions and the proper identifications of the geometry that allow for a black hole interpretation. Particular emphasis has been given to the case in which the spacetime represents an accelerating BTZ black hole pushed by a strut. 
This case is particularly appealing, as it allows for an accelerating horizon; however, for thermodynamic purposes, we focussed on the case where there is just a black hole horizon. Since the conformal boundary is defined by a surface that is parameterised as \(r=r(\phi)\), determining the boundary metric becomes a non-trivial task. To address this, we introduced an alternative coordinate system that incorporates a new "holographic coordinate", \(z\), that is normal to the boundary [20; 34]. In this framework, the metric can be expressed in terms of a radial ADM foliation, revealing crossed terms that are typically suppressed when writing the C-metric in the FG fashion [25; 26]. In fact, as noticed in [34], the \(z\) coordinate can be identified with the FG coordinate only up to leading order. Nonetheless, the first order of the expansion fully determines the variational principle and, therefore, the structure of the boundary stress tensor. We obtained the black hole mass by mapping it to the energy of the dual CFT and verified that it recovers the BTZ mass in the zero-acceleration limit. Additionally, we formulated the holographic stress tensor using the fluid/gravity correspondence, wherein the dual CFT is interpreted as a perfect fluid with non-constant pressure on a curved background. This is in contrast to the four-dimensional case, which incorporates shears and corrections arising from the non-conformal flatness of the boundary metric. Next, we computed the renormalised action by employing the standard counterterm prescription in AdS/CFT, as developed in [55; 14; 59]. We found that the on-shell action gives the quantum statistical relation between the partition function and the Gibbs free energy, as expected, but with two additional terms. 
These terms, in principle, contain an extra divergence that comes from the boundary terms of the gravitational action (the Gibbons-Hawking-York term and the Balasubramanian-Kraus counterterm) and a finite contribution from the black hole horizon. In fact, due to the construction of the accelerating BTZ black hole, it is necessary to include contributions from the internal boundaries, which are on the same footing as the GHY term. These terms ensure a well-posed variational problem, producing the Israel junction conditions that govern the dynamics of the domain wall. Making use of the Israel equations explicitly, it is possible to trade the extrinsic curvature for the energy density of the wall and, therefore, express the additional term as the Nambu-Goto action of the domain wall. In a similar spirit to [69; 70; 71], the higher-codimension defect induces extra contributions to the partition function, modifying the thermodynamics of the system under consideration. Geometrically, the domain wall extends from the deep interior to the boundary of the spacetime, generating a divergence at \(z=0\). Therefore, its contribution to the total Euclidean action must be considered in order to obtain the correct quantum statistical relation. We closed our study by considering the mapping between the accelerated BTZ black hole and the Rindler observer, which allows us to obtain the Ryu-Takayanagi surface and to compute the holographic entanglement entropy of the dual CFT. We found that the well-known logarithmic divergence of the entanglement entropy in a thermal conformal field theory holds in this context. However, we also discovered new subleading divergences that are proportional to the acceleration and possess a negative sign. In [40], it is shown that the boundary region of the spacetime is altered by the tension of the domain wall. From Figure 1, it is clear that the access to the boundary depends on the value of the acceleration. 
In fact, the behaviour of the entanglement is consistent with this interplay between acceleration and boundary: as the acceleration increases, a bigger portion of the AdS boundary is cut out and, therefore, there is information that is lost in the dual field theory, as suggested by (45). To the best of the authors' knowledge, such subleading behaviour has not been observed before in the literature. It is also important to note that the procedure is specific to three dimensions, as in four dimensions only massless accelerating solutions can be mapped to the Rindler patch, and the identification of the RT surfaces becomes a highly non-trivial task. This realisation highlights that three dimensions offer a unique yet comprehensive setting for exploring holographic two-dimensional CFTs in the presence of acceleration. In the future, an important aspect that requires further investigation is the establishment of a consistent thermodynamic description of these black holes. This entails studying the first law, the Smarr relation, the isoperimetric inequality [72] and the whole machinery of black hole thermodynamics. In fact, given the complexity of the mass (32) and entropy (12), verifying whether these black holes adhere to the first law is not straightforward. Upon a simple variation of these quantities, it becomes apparent that, unlike slowly accelerating black holes in AdS\({}_{4}\), the system might not conform to a first law and thus may not be in thermal equilibrium. Nonetheless, this is not yet clear, as there are several issues that require consideration before making such a statement. Given that we have obtained the quantum statistical relation, it seems very plausible to obtain a full Euclidean thermodynamic prescription for accelerating black holes in 2+1 dimensions, as has been done for the four-dimensional counterpart in [25]. 
Additionally, it would be intriguing to investigate the role of acceleration in the dual theory using the extended first law developed in [24; 31]. This modified first law incorporates new chemical potentials that are conjugate to the cosmic string tension, potentially providing insights into the physical properties associated with these additional chemical potentials. Furthermore, recent work [73; 74; 75] has shown that the extended first law of black hole thermodynamics introduces a new chemical potential responsible for the change in the central charge of the dual CFT. It would be interesting to see how the domain wall tension plays a role in the first law of thermodynamics of the boundary theory. Another interesting direction that would shed light on the role of acceleration from the dual CFT perspective is to explore the hydrodynamic behaviour of the holographic stress tensor for four-dimensional accelerating black holes. While the stress tensor has been expressed within the framework of the fluid/gravity correspondence [25], it remains unclear whether it possesses a valid hydrodynamic description that allows for the identification of associated transport coefficients. It would be interesting to see whether acceleration plays a significant role in determining the transport coefficients and whether they can be utilised to describe more realistic field theories. Additionally, an expansion regarding the fluid velocity and acceleration of the dual fluid stress tensor is still an open question. The three-dimensional case studied in this paper serves as a good starting point, as the solution is relatively simple yet rich enough to generate a stress tensor that exhibits non-constant pressure. This enriches the opportunities for studying more realistic systems through the scope of the fluid/gravity correspondence. ## Acknowledgments We thank Moh Al Attar, Jose Barrientos, Saghar Hosseini, Viktor Matyas, Olivera Miskovic, and Rodrigo Olea for helpful discussions. 
GAH would like to thank Aristomenis Donos for his invaluable comments and insightful discussions, which contributed to the improvement of this paper. GAH also thanks Mohamed Anber and Jerome Gauntlett for their useful comments and discussions regarding this work. AC and FD would like to thank the Institute of Mathematics of the Czech Academy of Sciences for its hospitality during the final stage of this project. The work of GAH is funded by Becas Chile (ANID) Scholarship No. 72200271. The work of AC is partially funded by FONDECYT Regular grant No. 1210500 and Primus grant PRIMUS/23/SCI/005 from Charles University. The work of FD is supported by Beca Doctorado Nacional (ANID) 2021 Scholarship No. 21211335, ANID/ACT210100 Anillo Grant "Holography and its applications to High Energy Physics, Quantum Gravity and Condensed Matter Systems" and FONDECYT Regular grant No. 1210500. The work of RG is supported in part by the STFC Consolidated Grant ST/P000371/1. RG would like to thank the Perimeter Institute for Theoretical Physics for hospitality. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. ## Appendix A FG expansion In [26] it was shown that the metric can be cast in the FG gauge \[ds^{2} = \frac{\ell^{2}}{z^{2}}dz^{2}+\frac{\ell^{2}}{z^{2}}\left(g_{(0)ij}+\ldots+z^{d}\left(g_{(d)ij}+h_{(d)ij}\log(z)\right)+\ldots\right)dx^{i}dx^{j}\, \tag{12}\] which, in three dimensions, terminates at order \(z^{4}\); we do not consider the logarithmic contribution explicitly. Applying the same coordinate transformation as in [40], \[y = -\tfrac{1}{Ar}=\cos\xi+\sum\limits_{m=1}^{\infty}F_{m}(\xi)z^{m}\,\qquad x = \cos\xi+\sum\limits_{m=1}^{\infty}G_{m}(\xi)z^{m}\, \tag{13}\] the new coordinate \(\xi\) is now perpendicular to the (conformal) boundary of the spacetime. 
The functions \(F_{m}(\xi)\) and \(G_{m}(\xi)\) are fixed by requiring the fall-off conditions of (12), such that there are no crossed terms \(g_{zi}\). The coefficients can be solved for completely, order by order, except for \(F_{1}(\xi)\), which cannot be fixed and appears as a conformal factor of the boundary metric \(g_{(0)}\); this is consistent with the fact that the conformal boundary of AdS does not correspond to a fixed metric but to a conformal equivalence class. As explained in section IV, in three dimensions, besides the Weyl anomaly, the holographic quantities are not conformally invariant, and a different coordinate system is needed in order to compute the on-shell action. Considering the expansion for the accelerated BTZ black hole (although the claims and calculations of this appendix hold for the other black hole solutions as well), one gets [40] \[ds_{(0)}^{2}=\omega^{2}\left(-d\bar{t}^{2}+d\xi^{2}\right)\, \tag{14}\] where \(\bar{t}=\alpha\ell t\) and \(\omega=\omega(\xi)\) is an arbitrary function which determines the different conformal representatives. The boundary curvature reads \[\mathcal{R}[g_{(0)}] = \frac{2\Upsilon}{\ell^{2}\omega(\xi)}\Bigg{[}\Upsilon\sin^{2}\xi\left(\frac{\omega^{\prime\prime}}{\omega}-\frac{\omega^{\prime 2}}{\omega^{2}}\right)+\cos\xi\left(1-3A^{2}\ell^{2}\sin^{2}\xi\right)\frac{\omega^{\prime}}{\omega}\Bigg{]}\, \tag{15}\] where \(\Upsilon=1-A^{2}\ell^{2}\sin^{2}\xi\). Other quantities, such as the stress tensor and the Weyl anomaly, have been computed with this method in [40]. But, as aforementioned, only the Weyl anomaly is conformally invariant and, indeed, gives the Brown-Henneaux central charge for any representative. Other features are also independent of \(\omega\), such as the fact that the holographic stress tensor is covariantly constant with respect to \(g_{(0)}\), and its transformation properties. 
Nonetheless, the two-dimensional stress tensor is a quasi-primary operator, and the computation of conserved quantities, such as the energy, depends on its Schwarzian derivative. In [40] the choice \(\omega(\xi)=1\) was made, which renders the boundary metric flat, as can be seen from (101). Comparing with the boundary metric found in (102), taking \(\omega(\xi)=G(\xi)\) in (22) makes both backgrounds equal. Therefore, we shall not compare the energy found via holographic methods. Finally, the holographic stress tensor can be cast in the same fashion as in (29), with velocity \[u_{i}=\omega(\xi)\delta_{i}^{t}\, \tag{103}\] pressure \[p=\frac{(1+m^{2}{\cal A}^{2}\ell^{2})\omega^{2}+(\xi^{2}-1)\Upsilon^{2}\omega^{\prime 2}}{16\pi G\ell\alpha^{2}\omega^{4}}\, \tag{104}\] and energy density \[\rho =\frac{m}{16\pi\alpha G\Upsilon\omega(\xi)^{2}\sin\xi}\Bigg{[}2\Upsilon\omega(\xi)\left(\omega^{\prime}(\xi)\left(3m^{2}{\cal A}^{2}\ell^{2}\sin^{2}\xi+1\right)\cos\xi-\Upsilon\omega^{\prime\prime}(\xi)\sin^{2}\xi\right)+\omega(\xi)^{2}\left(m^{2}{\cal A}^{2}\ell^{2}+1\right)+3\Upsilon^{2}\omega^{\prime}(\xi)^{2}\sin^{2}\xi\Bigg{]}. \tag{105}\] We can now compute the Euclidean action with the standard counterterms coming from holographic renormalisation, with no need to include the domain wall action, as the metric is explicitly in the FG gauge. For the accelerated BTZ pulled by a wall, one finds that for the action to become finite, one must consider the extra counterterm that accounts for the extra logarithmic divergence that appears in odd dimensions [55; 59; 13], of the form \[I_{\rm log}=\log\delta\int dtd\xi\sqrt{g_{(0)}}{\rm tr}\ g_{(2)}\, \tag{106}\] where \(\delta\) is the IR regulator, and traces are taken with respect to the boundary metric \(g_{(0)}\). 
However, in \(d=2\) boundary dimensions, this term fails to furnish any contribution to the holographic stress tensor, since it is proportional to the boundary curvature in the renormalised action, which is a topological invariant in two dimensions [55]. The Euclidean action then becomes finite, but one is not able to identify the thermodynamic quantities, as the integration is not possible for an arbitrary \(\omega(\xi)\). The finite value of the action is rather lengthy and we shall not present it here. ## Appendix B Results for class I\({}_{\rm C}\) Generically, Class I describes the geometry of accelerating particle-like solutions. Nevertheless, a particular case, dubbed Class I\({}_{\rm C}\), represents an accelerating black hole solution with no continuous limit to the standard BTZ geometry. This novel solution was found in [40] by considering Class I geometries in the rapid phase, \(A^{2}\ell^{2}\geq 1\), in which there is a Killing horizon at \(y_{h}=\sqrt{1-A^{-2}\ell^{-2}}\). One then follows the procedure of section II to include a domain wall at some \(x_{\rm max}\), with tension \[\sigma=\frac{A}{4\pi G}\sqrt{1-x_{\rm max}^{2}}. \tag{107}\] Then, using the coordinate transformation \[t=\frac{{\cal A}m^{2}}{\alpha}\tau\,\qquad y=\frac{1}{{\cal A}r}\,\qquad x=\cos(m\phi)\, \tag{108}\] renders (1) (for Class I) into \[ds^{2}=\frac{1}{\Omega^{2}}\left(-f(r)\frac{d\tau^{2}}{\alpha^{2}}+\frac{dr^{2}}{f(r)}+r^{2}d\phi^{2}\right)\, \tag{109}\] where \[f(r) =\frac{r^{2}}{\ell^{2}}-m^{2}({\cal A}^{2}r^{2}-1)\,\qquad \Omega ={\cal A}r\cos(m\phi)-1\, \tag{110}\] and the tension \[\sigma=\frac{{\cal A}m}{4\pi G}\sin(m\pi). \tag{111}\] As before, the conical deficit is regulated by the parameter \(m\), which relates to the upper bound of the \(x\) coordinate as \[x_{\rm max}=\cos(m\pi). \tag{112}\] As explained in [40], \(x_{\rm max}\in(y_{h},1)\), with \(y_{h}\geq 0\), in order to have a single compact horizon. 
This implies that there is a maximum value for the mass parameter, \[m<\frac{\arccos(y_{h})}{\pi}\, \tag{113}\] and a minimum value \(m>0\). Otherwise, the solution would present a non-compact horizon. Moreover, as the mass parameter \(m\) is now bounded, there is also a constraint on the possible values of the acceleration for the geometry to exist, given by \[\frac{1}{m}\leq{\cal A}\ell<\frac{1}{m\sin(m\pi)}. \tag{114}\] This implies that \({\cal A}\ell\geq 2\) in order to have a black hole, showing explicitly that the zero-acceleration limit is not well-defined and that the solution is not continuously linked to the BTZ black hole. The thermodynamic properties of the horizon, namely the horizon radius, Hawking temperature and entropy, are \[r_{h} =\frac{m\ell}{\sqrt{m^{2}{\cal A}^{2}\ell^{2}-1}}\, \tag{115}\] \[T =\frac{1}{\beta}=\frac{|f^{\prime}(r_{h})|}{4\pi\alpha}=\frac{\sqrt{m^{2}{\cal A}^{2}\ell^{2}-1}}{2\pi\ell\alpha}\,\] \[S =\frac{\ell}{G}\operatorname{arctanh}\left[\left(\sqrt{m^{2}{\cal A}^{2}\ell^{2}-1}+m{\cal A}\ell\right)\tanh\left(\frac{m\pi}{2}\right)\right]\,\] \[\alpha =\sqrt{m^{2}{\cal A}^{2}\ell^{2}-1}\.\] Finally, \(\alpha\) is the constant used in the transformation (16) to identify the Rindler proper time. In order to compute the mass, we apply the same ideas as in section IV. Firstly, we introduce \[\frac{1}{r}=z+\mathcal{A}\cos(m\phi)\, \tag{14}\] such that the conformal boundary is now located at \(z=0\). This implies that the horizon is no longer located on a constant-\(z\) surface. Nonetheless, the inequality (13) ensures that the horizon does not intersect the conformal boundary, as can be seen from Figure 7. 
Using (26) to compute the holographic stress tensor, one obtains \[\langle T^{\tau}_{\ \tau}\rangle =\frac{m^{2}\ell}{32\pi G}\left(2-m^{2}\mathcal{A}^{2}\ell^{2}+3m^{2}\mathcal{A}^{2}\ell^{2}\cos(2m\phi)\right)\,\] \[\langle T^{\phi}_{\ \phi}\rangle =\frac{m^{2}\ell}{16\pi G}\left(m^{2}\mathcal{A}^{2}\ell^{2}\cos^{2}(m\phi)-1\right)\, \tag{15}\] whose trace recovers exactly (28), with \(c=3\ell/2G\) the Brown-Henneaux central charge. Furthermore, the boundary stress tensor is covariantly conserved with respect to the boundary metric \(g_{(0)}\), and can also be written as that of a perfect fluid (29) with non-constant pressure and density. Moreover, using (32), one gets \[M =\frac{m^{2}\left[2\pi m^{2}\mathcal{A}^{2}\ell^{2}-3m\mathcal{A}^{2}\ell^{2}\sin(2\pi m)-4\pi\right]}{32\pi G\alpha}\, \tag{16}\] corresponding to the I\({}_{\rm C}\) black hole mass. Finally, the Euclidean action shares the same properties as for the Class II black holes: the Balasubramanian-Kraus action has an extra divergence and an extra finite horizon contribution due to the acceleration, which is removed by the inclusion of the domain wall action (36). Then, its on-shell value \[I_{\rm E}=\beta M-S\, \tag{17}\] satisfies the standard quantum statistical relation with the Gibbs free energy.
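Since \(\alpha=\sqrt{m^{2}{\cal A}^{2}\ell^{2}-1}\) also appears in the denominator of \(T\) in (115), the Hawking temperature algebraically reduces to the constant value \(1/(2\pi\ell)\), independent of \(m\) and \({\cal A}\), consistent with \(\alpha\) identifying the Rindler proper time. A minimal numerical sketch of this observation (parameter values are arbitrary choices satisfying the bound (114); this check is not part of the original analysis):

```python
import math

def class_Ic_temperature(m, A, ell):
    """Hawking temperature of the Class I_C black hole as printed in (115):
    T = sqrt(m^2 A^2 ell^2 - 1) / (2 pi ell alpha), alpha = sqrt(m^2 A^2 ell^2 - 1)."""
    x = m**2 * A**2 * ell**2 - 1.0
    if x <= 0:
        raise ValueError("need m*A*ell > 1 for a horizon to exist")
    alpha = math.sqrt(x)
    return math.sqrt(x) / (2.0 * math.pi * ell * alpha)

# Sample parameters inside the window 1/m <= A*ell < 1/(m*sin(m*pi)) of (114);
# the temperature is the same constant 1/(2*pi*ell) in every case.
ell = 1.0
for m, A_ell in [(0.3, 4.0), (0.25, 5.0), (0.2, 7.0)]:
    T = class_Ic_temperature(m, A_ell / ell, ell)
    assert abs(T - 1.0 / (2.0 * math.pi * ell)) < 1e-12
```

This makes the lack of a smooth zero-acceleration limit visible as well: the constant temperature never approaches the \(m\)-dependent BTZ value as \({\cal A}\to 0\), since that limit is excluded by (114).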
2305.14842
Exploring Sentiment Analysis Techniques in Natural Language Processing: A Comprehensive Review
Sentiment analysis (SA) is the automated process of detecting and understanding the emotions conveyed through written text. Over the past decade, SA has gained significant popularity in the field of Natural Language Processing (NLP). With the widespread use of social media and online platforms, SA has become crucial for companies to gather customer feedback and shape their marketing strategies. Additionally, researchers rely on SA to analyze public sentiment on various topics. In this particular research study, a comprehensive survey was conducted to explore the latest trends and techniques in SA. The survey encompassed a wide range of methods, including lexicon-based, graph-based, network-based, machine learning, deep learning, ensemble-based, rule-based, and hybrid techniques. The paper also addresses the challenges and opportunities in SA, such as dealing with sarcasm and irony, analyzing multi-lingual data, and addressing ethical concerns. To provide a practical case study, Twitter was chosen as one of the largest online social media platforms. Furthermore, the researchers shed light on the diverse application areas of SA, including social media, healthcare, marketing, finance, and politics. The paper also presents a comparative and comprehensive analysis of existing trends and techniques, datasets, and evaluation metrics. The ultimate goal is to offer researchers and practitioners a systematic review of SA techniques, identify existing gaps, and suggest possible improvements. This study aims to enhance the efficiency and accuracy of SA processes, leading to smoother and error-free outcomes.
Karthick Prasad Gunasekaran
2023-05-24T07:48:41Z
http://arxiv.org/abs/2305.14842v1
# Exploring Sentiment Analysis Techniques in Natural Language Processing: A Comprehensive Review ###### Abstract **Sentiment analysis (SA) is the automated process of detecting and understanding the emotions conveyed through written text. Over the past decade, SA has gained significant popularity in the field of Natural Language Processing (NLP). With the widespread use of social media and online platforms, SA has become crucial for companies to gather customer feedback and shape their marketing strategies. Additionally, researchers rely on SA to analyze public sentiment on various topics. In this particular research study, a comprehensive survey was conducted to explore the latest trends and techniques in SA. The survey encompassed a wide range of methods, including lexicon-based, graph-based, network-based, machine learning, deep learning, ensemble-based, rule-based, and hybrid techniques. The paper also addresses the challenges and opportunities in SA, such as dealing with sarcasm and irony, analyzing multi-lingual data, and addressing ethical concerns. To provide a practical case study, Twitter was chosen as one of the largest online social media platforms. Furthermore, the researchers shed light on the diverse application areas of SA, including social media, healthcare, marketing, finance, and politics. The paper also presents a comparative and comprehensive analysis of existing trends and techniques, datasets, and evaluation metrics. The ultimate goal is to offer researchers and practitioners a systematic review of SA techniques, identify existing gaps, and suggest possible improvements. This study aims to enhance the efficiency and accuracy of SA processes, leading to smoother and error-free outcomes.** **Keywords: Sentiment Analysis, Natural Language Processing Mining, Emotion Classification, Ethical Considerations** ## I Introduction Sentiment analysis is the process of recognizing and extracting subjective information from textual data. 
It includes analyzing opinions, attitudes, emotions, and feelings articulated in a text and categorizing the text as positive, negative, or neutral [1]. SA has gained a lot of popularity in recent years due to the abundance of user-generated content on online platforms, including social media, blogs, reviews, and discussion forums. Businesses use sentiment analysis to learn about customer feedback, monitor brand reputation, and make strategic decisions accordingly. This survey reviews the top existing research techniques for SA in NLP [2]. An overview of the many approaches used for sentiment analysis is presented, including deep learning techniques [3], rule-based methods [4], machine learning techniques, sentiment strength detection methods [5], swarm intelligence based methods, sentiment lexicon expansion methods, Bayesian methods, and pattern-based methods. Difficulties and limits of SA approaches [6], such as the absence of context, the presence of sarcasm and irony, and the difficulty of dealing with multilingual and domain-specific data, have also been addressed in this research. Furthermore, the researchers demonstrate SA applications in a variety of disciplines, including marketing, finance, politics, healthcare, and social media. The report concludes with a discussion of future research areas and remaining issues in sentiment analysis. This study begins with the fundamental ideas of SA, such as sentiment classification, feature extraction, and sentiment lexicons. The various forms of sentiment analysis are then discussed, including document-level, sentence-level, and aspect-based sentiment analysis. The survey compares rule-based, machine learning, and deep learning techniques for SA and explores their merits and demerits. 
The survey also focuses on the difficulties of dealing with multilingual and cross-cultural sentiment analysis, as well as the ethical and privacy problems that come with sentiment analysis. This survey paper presents a detailed and state-of-the-art assessment of SA in NLP and is intended to be a valuable resource for NLP practitioners, researchers, and scholars. ### I.I Sentiment Analysis: Sentiment analysis is the technique in NLP for extracting subjective information from textual data. It also includes the categorization of text as negative, neutral or positive. SA involves deciphering the underlying sentiment of the thoughts, attitudes, emotions, and sentiments portrayed in any natural language. Figure 1 shows the general framework of the SA process: the first step is to input the text data; the second is data pre-processing, which comprises tokenization, stopword filtering and stemming of the textual data. After that comes the most crucial part, which is the choice of the classification method. The purpose of SA is to comprehend both the overall sentiment of a piece of text and the sentiment regarding the individual features or entities mentioned in it. Although the terms feeling, view, opinion, and belief are used interchangeably, there are distinctions between them. An opinion is a conclusion that is subject to dispute (because of diverse expert perspectives), a view is a personal take by an individual, a belief refers to conscious and intellectual acceptance, whereas a sentiment is a view that expresses one's feelings. Sentiment analysis may be applied to a variety of textual data sources, including social media polls, buyer reviews, news stories, and product descriptions. 
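The pre-processing pipeline described above (tokenization, stopword filtering, stemming) can be sketched minimally as follows. The stopword list and the suffix-stripping "stemmer" below are toy stand-ins for illustration, not the tools used in the surveyed works:

```python
import re

# Toy stopword list; real systems use larger curated lists (e.g. NLTK's).
STOPWORDS = {"the", "is", "a", "an", "of", "and", "to", "this", "was"}

def tokenize(text):
    # Lowercase and split on runs of non-alphabetic characters.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def stem(token):
    # Naive suffix stripping; a real system would use e.g. a Porter stemmer.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    return [stem(t) for t in remove_stopwords(tokenize(text))]

print(preprocess("The product is amazing and the shipping was delayed"))
# → ['product', 'amaz', 'shipp', 'delay']
```

The output of this stage is what the classification step (lexicon lookup or a trained classifier) actually consumes.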
### I.II Sentiment analysis applications: Sentiment analysis has emerged as an important field to explore in NLP due to its wide range of applications and its ability to reveal significant insights from unstructured textual data [7]. Sentiment analysis has been applied in opinion mining, brand monitoring, market research, customer service, political analysis, and many more fields. The rise of social media platforms has made sentiment analysis even more essential, as individuals share their ideas and experiences on a massive scale. Opinion analysis is also used in healthcare to analyse patient feedback, in education to evaluate student opinion, and in finance to forecast market trends. SA has a variety of applications in the real world, including social media monitoring, product development, analysis of political situations, education, finance and e-commerce. It is also applicable in the healthcare sector for gauging patients' feedback and identifying areas of improvement. SA may be used in financial services to analyse customer reviews and discover problems with financial products and services. It may also be used to analyse finance-related social media conversations and determine public opinion towards financial institutions and legislation. It may be used in e-commerce to detect product or service flaws and enhance the customer experience. Figure 2 represents some real-world applications of SA. Sentiment analysis may be used in education to examine student remarks and find areas for improvement in educational methodologies. It may also be used to analyse Twitter conversations about educational subjects in order to determine public opinion about educational policies and institutions. In sports, SA may be used to examine public opinion towards sports teams, individuals, and events. 
This may help sports organisations and teams enhance their marketing strategies and communicate with fans more effectively. ### I.III Sentiment analysis approaches: SA is a branch of NLP and machine learning (ML) which involves analysing text using various approaches and algorithms. Rule-based methods, ML algorithms such as Naive Bayes and Support Vector Machines (SVM), and deep learning models such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are examples of these techniques. Twitter has emerged as a popular medium for sentiment research due to its real-time nature and large user community; it has over 330 million active users across the globe, which makes it a valuable source of data for SA researchers. The researchers have reviewed a variety of methodologies and techniques for sentiment analysis on Twitter, such as lexicon-based methods, deep learning models, and machine learning (ML) algorithms. However, due to social media's rapid expansion and innovation, there are several obstacles and possibilities associated with sentiment analysis on Twitter data. Figure 1: General framework of SA process. Figure 2: Real world applications of Sentiment Analysis. This study compares the two most commonly used approaches for SA on Twitter data. These methods are: 1. Lexicon-Based Approach 2. Machine Learning (ML) Approach A detailed analysis of the above-mentioned approaches is performed in this study. Process, training time, computational complexity, evaluation measures and accuracy have been compared carefully. The study also provides a detailed assessment of the datasets available for SA. Figure 3 showcases subcategories of the two most widely used SA approaches. In recent years, Twitter sentiment analysis has been the topic of substantial research, with multiple publications suggesting various methodologies and ways to increase its accuracy. 
With research concentrating on feature selection, domain adaptation, and aspect-based sentiment analysis, the employment of ML algorithms, lexicon-based techniques, and deep learning models has grown in popularity; they are widely used in recent research on SA to measure public opinion. ### I.III.I Lexicon-Based Approach A lexicon-based method for sentiment analysis involves analysing the sentiment of a piece of text using a pre-defined list of terms with associated sentiment scores [8]. Typically, the sentiment score is a numerical value that measures the strength of the feeling, such as a positive score for a positive statement and a negative score for a negative statement. The lexicon can be built manually or automatically using machine learning techniques. Typically, it is created by picking a collection of words that are known to indicate positive or negative sentiment and assigning a score to each word depending on the intensity of the sentiment [9]. In lexicon-based sentiment analysis, the sentiment score for a piece of text is determined by adding up the sentiment scores of its words [10]. Lexicon-based techniques are easier to deploy and require less training data than machine learning approaches. However, they may not be as accurate as machine learning algorithms, because they do not consider the context of the words or the links between words in a phrase. These techniques are mainly used in social media monitoring and political analysis applications. ### I.III.II Machine Learning Approach ML approaches have gained popularity for sentiment analysis because they can learn from vast volumes of data without being explicitly programmed [11]. Supervised learning, unsupervised learning, transfer learning and reinforcement learning are some subcategories of ML approaches. Some typical ML approaches for sentiment analysis are: 1. 
1. Supervised learning: This involves training a model on a labelled dataset in which each input text is linked with its corresponding sentiment label [12]. The model learns to recognise textual patterns associated with the various sentiment labels and can subsequently categorise new, unlabelled material accordingly.
2. Unsupervised learning: The model is not provided with labelled input and must recognise patterns and structures in the data on its own. This may be used for activities such as grouping related documents or themes.
3. Deep learning: Here the models are multi-layered neural networks that can learn complicated patterns and correlations in data. They've been utilised for tasks including sentiment analysis, natural language processing, and speech recognition.

Figure 3: Widely used Sentiment analysis approaches.

## II Comparative Analysis

Twitter is a widely used social media platform where people from all walks of life share their opinions on various topics, and numerous studies have performed SA using Twitter data, since Twitter provides a vast range of points of view from people across the world. In this section, a comprehensive comparison of past research is performed. Data available on social networking websites is one of the most important inputs for projecting automobile sales. Furthermore, other factors, such as stock market valuations, have an impact on automobile purchasing power. To estimate monthly total car sales, Pai et al. [13] used multivariate regression models incorporating both data from social media and stock market standards, as well as time series models. Least squares support vector regression (LSSVR) models were applied to the multivariate regression data, and three data types, including sentiment scores, stock market values and hybrid data, were used to predict market trends and vehicle sales in the United States of America.
Furthermore, seasonal factors were also taken into account, as they play an important role in vehicle sales. Results show that LSSVR models employing hybrid data with deseasonalizing processes can produce more precise results than other models using different datasets. Liao et al. [14] presented an SA method capable of predicting user satisfaction with a product, the views of the public on current scenarios, and their sentiments in joyous or disastrous situations. The main rationale for using CNN in image analysis and classification is that it can extract an array of features from global information and examine the links between these features, which can improve analysis and classification accuracy. For NLP, text data components may be retrieved piece by piece and the connections between these aspects considered, but without considering context or the entire phrase, the emotion may be misinterpreted. Currently, CNN is one of the most successful approaches for image classification; it contains a convolutional layer to dig information from a bigger piece of text, and thus the technique employs a convolutional neural network for SA. A basic CNN was created and tested against pre-set benchmarks. The results exhibit remarkable performance in comparison with existing techniques such as Naive Bayes and SVM methods. The stock market is one of the common application areas of SA. In recent times, social media, and especially Twitter, has become an effective representative of public sentiment and opinion about current scenarios. Pagolu et al. [15] presented a paradigm to monitor stock market trends and predict upcoming fluctuations using public sentiments on Twitter. The study observed how well changes in the stock prices of an organization, the ups and downfalls, are linked with the public opinions expressed in tweets about that organization. The objective is to understand the author's opinion from a piece of text. Two textual representations, i.e., Word2Vec and N-gram, are used for SA.
The research applied SA and a supervised learning framework to assess the correlation between a company's stock trends and the opinions expressed in tweets about the company. Additionally, favourable talk and tweets about a company on Twitter or other social sites can undoubtedly motivate individuals to invest in the company's stocks. This results in an increase in the company's reputation and financial standing, which ultimately raises the stocks. Kang et al. [16] presented a Naive Bayes (NB) SA architecture aimed at analysing restaurant reviews using a senti-lexicon approach, as the previous methods used for this purpose did not provide adequate insight into customer sentiments. In this study, restaurant reviews were categorized into two main categories, positive and negative, using an improved NB approach. The improved NB method decreased the gap between positive and negative predictions to 3.6%. Moreover, when compared to SVM, the accuracy of this algorithm based on the senti-lexicon improved by 10.2% in recall and 26.2% in precision, and it improved by 5.6% in recall and 1.9% in precision over the standard Naive Bayes approach. Kouloumpis et al. [17] presented a research study on sentiment analysis of Twitter messages using linguistic features. The study evaluated the usefulness of existing lexical resources and of features that capture information about the creative and informal language used in Twitter messages. A supervised learning approach was used, and it turned out to be very useful in terms of accuracy. The researchers also used hashtags to build the training data. A paradigm for extracting Twitter sentiments is presented by Agarwal et al. [18] in a research article where they explored individual sentiments of people on Twitter regarding different topics using data mining approaches including ML and SA.
Part of speech (POS) prior-polarity features are used to categorize sentiments into positive, negative and neutral. Secondly, a tree kernel is employed to eliminate the requirement for time-consuming feature engineering. The novel features (when combined with previously suggested features) and the tree kernel performed similarly, with minor differences in the outcomes. The proposed technique outperformed state-of-the-art approaches, as shown by extensive experimentation and analysis. Qiu et al. [19] suggested a new advertising method called Dissatisfaction-oriented Advertising based on Sentiment Analysis (DASA) to address the problems of traditional context-based advertisement and of correlating advertisements on web pages. According to the researchers, DASA increases both ad relevance and user experience. The algorithm uses a rule-based method for extracting subject terms from opinion sentences linked with negative emotion, which are then used as advertising keywords. A sample system for submitting product details for ad selection is also presented in the study. The trial findings on promotional keyword extraction and ad selection show that the suggested technique is successful. Neviarouskaya et al. [20] aimed to develop a framework to classify text using fine-grained attitude descriptors. The created system is based on the compositionality principle and a unique methodology based on semantically distinct verb classes. This technique performed well on 1000 phrases describing individual experiences, with an average accuracy of 62% at the fine-grained level (14 labels), 71% at the intermediate level (7 labels), and 80% at the top level (3 labels). Table 1 presents the performance of different techniques previously used for SA, and also lists the datasets and algorithms used, in chronological order.
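The core scoring step shared by the lexicon-based systems surveyed above is simple enough to sketch directly. The following is a minimal, self-contained illustration; the word list, scores, and threshold are invented for the example and are not taken from any published lexicon.

```python
# Minimal sketch of lexicon-based sentiment scoring: sum the pre-assigned
# scores of known words, then threshold the total. The lexicon below is
# illustrative only, not a published resource.
LEXICON = {
    "good": 1.0, "great": 2.0, "love": 2.0,
    "bad": -1.0, "terrible": -2.0, "hate": -2.0,
}

def lexicon_score(text):
    """Sum the sentiment scores of known words; unknown words score 0."""
    return sum(LEXICON.get(w, 0.0) for w in text.lower().split())

def classify(text):
    score = lexicon_score(text)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("great phone love it"))
print(classify("terrible battery bad screen"))
```

As the surveyed papers note, this ignores word order and context ("not good" scores the same as "good"), which is why the ML approaches in Table 1 generally achieve higher accuracy.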
## III Discussions and findings

As discussed in the previous section, multiple techniques have been used to analyse sentiments in posts published on Twitter, with application areas including the stock exchange, monitoring vehicle sales, restaurant reviews, social and political reviews, and healthcare. Twitter data is often noisy and unstructured, so data preparation is a critical step in sentiment analysis. Text normalization, stemming, and stop word removal are all techniques that can help enhance the accuracy of sentiment analysis models. The research shows that the choice of algorithm depends on the type of application it is being used for. Different techniques behave differently on the data and perform differently accordingly. This study focuses on the various ML algorithms used in sentiment analysis and opinion mining. SA integrated with ML techniques can be useful for forecasting product appraisals and customer attitudes towards freshly introduced products. This study provides a detailed examination of several machine learning approaches and compares their accuracy, benefits, and limits. When compared with unsupervised learning approaches, the supervised learning technique obtained 85% accuracy. Many more studies may be conducted, for instance, using the word2vec tool [21], a multilayer convolutional neural network, larger training datasets, and additional scenario or state assessments. Deep learning algorithms are sophisticated enough to address computer vision and speech recognition challenges, and CNN is the best choice for image analysis and classification [14]. SA, often known as opinion mining, is the art of automatically identifying and extracting subjective information from text, such as attitudes, viewpoints, and emotions [18]. By assessing the mood of social media posts, businesses, governments, and individuals may gain valuable insights into how people feel about their products, policies, and ideas.
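The preparation steps mentioned above (normalization and stop-word removal for noisy tweets) can be sketched in a few lines. This is a rough illustration only: the stop-word list is a tiny invented subset, and no stemmer is included.

```python
import re

# Sketch of common Twitter text-preparation steps: lowercase, strip URLs,
# mentions and hashtags, drop punctuation/digits, remove stop words.
# The stop-word list is an illustrative subset, not a standard resource.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "and", "of"}

def preprocess(tweet):
    text = tweet.lower()
    text = re.sub(r"https?://\S+", "", text)   # strip URLs
    text = re.sub(r"[@#]\w+", "", text)        # strip mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text)      # drop punctuation and digits
    return [t for t in text.split() if t not in STOPWORDS]

print(preprocess("@user The battery IS terrible!! http://t.co/x #fail"))
```

Tokens produced this way can feed either the lexicon-based scorer or a supervised classifier; in practice a stemmer (e.g. Porter) would usually be applied to the surviving tokens.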
A tree kernel [18] is used to avoid the requirement for time-consuming feature engineering in the SA process. The tree kernel and the novel features, when paired together, performed really well, and extensive experimentation was performed to validate the findings. Another research study on SA shows that forecasting monthly car sales requires data both from Twitter and from the stock market [13]; this helps to predict sales accurately according to market trends. The results of SA research can be used in a variety of fields, including marketing, politics, healthcare, and customer service. The results of the experimental process tell us that there is a strong association between the rise or fall of stocks and word of mouth. Businesses, for example, might use sentiment analysis to enhance their products and services by analysing client comments on social media and online review sites. Sentiment analysis may also be used in politics to measure public opinion and attitudes towards political candidates or policies.

| Year | Dataset | Model | Performance |
|------|---------|-------|-------------|
| 2018 | Yahoo finance data, Twitter dataset and hybrid data | LSSVR [13] | Hybrid data can improve accuracy |
| 2017 | MR training / STS Gold dataset | CNN [14] | 74.5% / 68% |
| 2016 | Tweets collected using Twitter API | Logistic Regression / LibSVM [15] | 69.01% / 71.82% |
| 2012 | Twitter dataset | Improved Naive Bayes Senti-Lexicon method [16] | 3.6% improved / 26.2% improved |
| 2011 | EMOT / HASH / ISIEVE | Supervised learning approach [17] | 2 classes (+ve, -ve) / +ve, -ve, Neutral / +ve, -ve, Neutral |
| 2011 | Twitter dataset | Tree kernel model [18] | 64% |
| 2010 | Real-time twitter messages | DASA (Rule-based approach) [19] | 55% |
| 2010 | Fine-grained / Middle-level / Top-level | Lexicon-based semantic approach [20] | 62% / 71% / 88% |

Table 1: Performance comparison of various Sentiment Analysis approaches on Twitter data in the previous decade.

## IV Conclusion

The availability of data on digital platforms and the need to analyse it for various purposes have gained a lot of popularity. Sentiment analysis is widely applicable in different industries. This study focused on SA using the sentiments showcased on the Twitter platform from various fields of life. The study mainly focused on ML techniques and lexicon-based methods used for SA on Twitter data. The datasets analysed were EMOT, HASH, ISIEVE, the Twitter dataset, and some real-time data as well. The survey discusses use cases, benefits, and limitations of sentiment analysis, as well as examples of effective employment of its techniques. The study also discusses the limitations and constraints of using sentiment analysis in certain sectors, like political analysis and healthcare, where the results might be difficult to interpret.
It can be concluded that, when SA is used correctly, it provides useful information about public opinion, which can result in better business decisions; it also helps in predicting market trends. It is also concluded that the choice of keywords for Twitter searches has a substantial impact on tweet search results and prediction accuracy. Twitter sentiment analysis can also be used for collecting geographical information and exploring unexplored areas. Furthermore, lexicon-based approaches can be utilized to assess public sentiments on ongoing situations, whereas ML approaches can be applied to investigate market trends.
2307.02859
Emergence of half-metallic ferromagnetism in transition metal substituted Na$_{0.5}$Bi$_{0.5}$TiO$_3$
The multifunctional materials with prominent properties such as electrical, ferroelectric, magnetic, optical and magneto-optical are of keen interest to several practical implications. In the roadmap of designing such materials, in the present work, using density functional theory based first-principles calculations, we have investigated the functional properties of transition metal substituted-NBT. Our calculations predict the emergence of half-metallic ferromagnetism in the system. A nonzero magnetic moment of 1.49 $\mu_{\rm B}/{\rm f.u.}$ is obtained for 25\% concentration of Ni. Our data on optical properties for pure NBT is in excellent agreement with available theory and experiments. For Ni-NBT, we observed a diverging nature of static dielectric constant, which could be attributed to the induced metallic character in the material. Our simulations on MOKE predict a significant Kerr signal of 0.7$^\circ$ for 6.25\% Ni-concentration.
Chandan Kumar Vishwakarma, B. K. Mani
2023-07-06T08:51:12Z
http://arxiv.org/abs/2307.02859v1
Emergence of half-metallic ferromagnetism in transition metal substituted Na\({}_{0.5}\)Bi\({}_{0.5}\)TiO\({}_{3}\) ###### Abstract The multifunctional materials with prominent properties such as electrical, ferroelectric, magnetic, optical and magneto-optical are of keen interest for several practical applications. In the roadmap of designing such materials, in the present work, using density functional theory based first-principles calculations, we have investigated the functional properties of transition metal substituted-NBT. Our calculations predict the emergence of half-metallic ferromagnetism in the system. A nonzero magnetic moment of 1.49 \(\mu_{\text{B}}\)/f.u. is obtained for 25% concentration of Ni. Our data on optical properties for pure NBT is in excellent agreement with available theory and experiments. For Ni-NBT, we observed a diverging nature of the static dielectric constant, which could be attributed to the induced metallic character in the material. Our simulations on MOKE predict a significant Kerr signal of 0.7\({}^{\circ}\) for 6.25% Ni-concentration. ## I Introduction The development of multifunctional materials with two or more properties, such as magnetic, ferroelectric, piezoelectric, and optical, has received a lot of interest in recent years [1; 2; 3]. These materials have the potential to revolutionize various industry applications, including healthcare, energy and electronics [4; 5; 6]. In the search for such materials, sodium bismuth titanate, Na\({}_{0.5}\)Bi\({}_{0.5}\)TiO\({}_{3}\) (NBT), has received more attention than any other lead-free ferroelectric due to its tendency to show multifunctionality through various mechanisms [7; 8; 9]. NBT is a complex perovskite oxide with two cations (Na\({}^{+}\) and Bi\({}^{3+}\)) on the A-site and one cation (Ti\({}^{4+}\)) on the B-site, with rhombohedral symmetry at room temperature [10].
It exhibits various anomalous properties associated with site-specific substitutions, including improved ferroelectricity and piezoelectricity, magnetism and optoelectronic properties [11; 12]. The presence of Ti at the B-site provides a strategy to introduce ferromagnetism by substituting a transition metal (TM) at the B-site. In the experimental studies of Refs. [13] and [14], ferromagnetism at room temperature was observed for Fe- and Co-doped NBT, respectively. In a similar experimental work by Dung _et al._, room-temperature ferromagnetism was reported for Ni-doped NBT [15]. The maximum magnetization value reported was around 0.91 \(\mu_{\text{B}}\)/Ni for 9% Ni concentration at 5 K. Moreover, it was also observed in the same study that the optical bandgap decreases with Ni concentration. However, in a different experimental study, Pradhan _et al._ observed the optical band gap to increase with Ni concentration [16]. The contradictory trends in the experimental data suggest a lack of understanding of the optical behavior of TM-doped NBT. In addition, to the best of our knowledge, there are no data from theory simulations probing magnetism in TM-doped NBT. It can thus be surmised that a systematic theoretical study is needed to understand the underlying mechanism behind the multifunctional properties of TM-doped NBT. The present study aims to probe, with the help of state-of-the-art first-principles calculations, the electronic, magnetic, optical and magneto-optical properties of NBT and TM-substituted NBT. More precisely, we aim to address the following questions: i) What is the impact of Ni substitution on the electrical and optical properties of NBT? ii) What is the mechanism behind the advent of magnetic degrees of freedom in Ni-substituted NBT? iii) How does this introduced ferromagnetism couple with the dielectric properties of NBT?
To assess the coupling between the magnetic and optical degrees of freedom, we have examined the linear magneto-optic Kerr effect in the polar geometry, in which the spin and the incident photons are perpendicular to the sample surface. This configuration of the Kerr effect is the most favorable way to trace the magneto-optical properties experimentally [17; 18; 19]. The text of the paper is organized into four sections. In Section II, we provide a brief description of the computational methods used in our calculations. In Section III, we present and analyze our results on the electronic structure, magnetic, optical, and magneto-optical properties of NBT and Ni-substituted NBT. A summary of our findings is presented in the last section. ## II Computational methodology Probing the structure and emerging properties of transition metal-substituted NBT requires an accurate treatment of interstitial effects in the material at the atomic scale. For this, we have performed _ab-initio_ spin-polarized calculations using density functional theory (DFT) as implemented in the Vienna _ab-initio_ simulation package (VASP) [20; 21]. To account for the exchange correlation among electrons, we used the Perdew-Burke-Ernzerhof (PBE) [22] variant of the generalized-gradient approximation pseudopotential. To account for the strongly correlated 3\(d\)-electrons of Ni, we have incorporated the Hubbard U correction [23] in our calculations. The value of U, 11.57, is computed self-consistently using density functional perturbation theory (DFPT), employing the approach of Cococcioni _et al._ [24]. A rhombohedral supercell of size \(2\times 2\times 2\) with 80 atoms is used to incorporate various concentrations of Ni. All structures were optimized using full relaxation calculations up to a force tolerance of \(10^{-4}\) eV Å\({}^{-1}\). For this, we used the conjugate gradient algorithm with a Monkhorst-Pack [25] \(k\)-mesh of \(5\times 5\times 5\).
For the self-consistent-field (SCF) calculations, the Brillouin zone was sampled with a \(9\times 9\times 9\) \(k\)-mesh. The energy convergence criterion was maintained at 0.001 meV, and the plane-wave energy cutoff used was 600 eV. The real and imaginary parts of the dielectric function are calculated using DFPT as implemented in VASP. ## III Results and Discussions ### Crystal Structure The structural parameters for pure NBT were taken from the experimental crystal structure (space group \(R3c\)) data [8] and optimized further through full relaxation calculations to achieve a minimum-energy configuration. Our computed lattice parameters and Wyckoff positions are given in Table 1. As we observe from the table, our computed lattice parameter 5.65 Å is in good agreement with the experimental value 5.51 Å [8]. The slightly larger value could be attributed to the use of the GGA functional in our calculation [27]. To incorporate various Ni concentrations in Na\({}_{0.5}\)Bi\({}_{0.5}\)[Ti\({}_{1-x}\)Ni\({}_{x}\)]O\({}_{3}\) (Ni-NBT), we used the optimized NBT structure and created a \(2\times 2\times 2\) supercell. We investigated the properties for \(x=0.0625,0.125,0.1875\), and 0.25 concentrations of Ni. The Ni-NBT structures were fully optimized again using a force tolerance of up to \(10^{-5}\) eV Å\({}^{-1}\). From our simulations we find that all Ni-NBT structures crystallize in the rhombohedral (R3m) phase. In Fig. 1 we show the crystal structures for NBT (panel (a)) and 0.25Ni-NBT (panel (b)). The optimized lattice parameters for the chosen concentrations of Ni are given in Table 2.
To the best of our knowledge, there are no experimental or other theory data on the lattice parameters of Ni-NBT available in the literature for comparison. ### Electronic Structure and Ferroelectric Properties In Fig. 2 we show the spin-polarized electronic band structures of the NBT (panels (a) and (b)) and 25Ni-NBT (panels (c) and (d)) crystals. We chose to report the data for the highest concentration of Ni, as it has the largest effect on the computed properties. The corresponding data for the other concentrations are, however, provided in the supplementary material. As we observe from the figure, NBT shows a direct band gap at the \(\Gamma\) point for both spin channels. The calculated band gap, 2.57 eV, is in good agreement with the other theoretical value, 2.82 eV, reported in [28]. The observed wide band gaps for both spin channels suggest the semiconducting nature of the NBT crystal, consistent with the data reported in the literature [26; 29; 30].

Table 1: Computed lattice parameters and Wyckoff positions for NBT. Values given in parentheses are the data from experiment [26].

| Lattice parameters | | | |
|---|---|---|---|
| \(a=b=c=5.647\,(5.51)\) Å | | | |
| \(\alpha=\beta=\gamma=58.758\,(59.803)^{\circ}\) | | | |

| Wyckoff positions | x | y | z |
|---|---|---|---|
| Na1 (1a) | 0.25644 | 0.25644 | 0.25644 |
| Bi1 (1a) | 0.77324 | 0.77324 | 0.77324 |
| Ti1 (1a) | -0.00056 | -0.00056 | -0.00056 |
| Ti2 (1a) | 0.48900 | 0.48900 | 0.48900 |
| O1 (3b) | 0.23013 | 0.72562 | -0.76987 |
| O2 (3b) | 0.72771 | 0.21016 | -0.27229 |

For 25Ni-NBT, however, we observe an asymmetry in the majority and minority spin channels (panels (c) and (d)).
For majority spin, the Fermi level lies in the valence band and shows a metallic nature. For the minority spin sub-band, however, a large band gap of \(\sim 2.56\) eV is obtained, which resembles the electronic structure of NBT shown in panel (b). This mixed nature of the electronic structure indicates a half-metallic character of 25Ni-NBT. Similar electronic structures are also obtained for the other concentrations of Ni. To get further insight into the half-metallicity in Ni-NBT, we examined the atom-projected and orbital-projected electronic structures of NBT and 25Ni-NBT. The data from this for bands and density of states (DoS) are shown in Figs. 3 and 4. ### Optical Properties The imaginary part of the dielectric
function could be calculated using linear response theory [38] as \[\epsilon_{2}(\omega)=\frac{2\pi e^{2}}{\epsilon_{0}\Omega}\sum_{k,v,c}\delta(E_{k}^{c}-E_{k}^{v}-\hbar\omega)\mid\langle\Psi_{k}^{c}|\left(\hat{\mathbf{n}}\cdot\mathbf{r}\right)|\Psi_{k}^{v}\rangle\mid^{2}. \tag{1}\] Here, the indices \(k\), \(v\) and \(c\) represent the wave vector, valence and conduction bands, respectively. The states \(|\Psi_{k}^{v}\rangle\) and \(|\Psi_{k}^{c}\rangle\) are the wavefunctions associated with the valence and conduction bands, respectively, and \(E_{k}^{v}\) and \(E_{k}^{c}\) are the corresponding energies. The constants \(e\), \(\Omega\) and \(\epsilon_{0}\) are the charge of the electron, the volume of the cell and the permittivity of free space, respectively. The operator \(\hat{\mathbf{n}}\) represents the direction of the applied electric field. The real component of the dielectric constant can be derived from the imaginary component using the Kramers-Kronig relation [38; 39] \[\epsilon_{1}(\omega)=1+\frac{2}{\pi}P\int_{0}^{\infty}\frac{\epsilon_{2}(\omega^{\prime})\omega^{\prime}d\omega^{\prime}}{\omega^{\prime 2}-\omega^{2}-i\eta}, \tag{2}\] where \(P\) denotes the principal value and \(\eta\) is an infinitesimal broadening associated with the adiabatic switching of the dielectric perturbation. The \(\epsilon_{1}(\omega)\) and \(\epsilon_{2}(\omega)\) for NBT and Ni-NBT from our calculations, along with the available experimental data, are shown in Fig. 5 (the real and imaginary components of the diagonal permittivity). For NBT, as discernible from panels (a) and (b), our calculated real and imaginary components of the dielectric function are in good agreement with experiment [36]. The slight deviation could be attributed to temperature effects in the experiment, as the reported experimental data are at room temperature. Inspecting the real component more closely, the value of the static dielectric constant, \(\epsilon(0)\), is 6.5.
This is consistent with the value reported in a previous theory calculation [40] and experiment [36]. This relatively high value of \(\epsilon(0)\) suggests NBT as a potential candidate for light-harvesting applications [41]. The other important characteristic of the real spectrum is its negative values at higher energies. This trend is consistent with the previously reported theoretical data [29], and suggests NBT as a potential candidate for plasmonic applications [42]. Examining the imaginary component more closely, we observe one preeminent peak at \(\sim\) 4.2 eV and four low-intensity secondary peaks at \(\sim\) 5.7, \(\sim\) 6.4, \(\sim\) 7.7, and \(\sim\) 9.0 eV. The primary peak originates from the interband transitions from O-2\(p\) to Ti-3\(d\) and Bi-6\(p\) states. The secondary peaks, however, embed major contributions from the O-2\(p\) to Na-2\(s\)/2\(p\) transitions. Our calculated \(\epsilon_{2}(\omega)\) spectrum is in qualitative agreement with the reported theoretical data [29]. The onset of the \(\epsilon_{2}(\omega)\) spectrum suggests the optical band gap of NBT as \(\approx\) 2.57 eV, which is consistent with the direct electronic bandgap discussed in the previous section. Considering the case of Ni-NBT, as discernible from panels (c) and (d), \(\epsilon_{1}(\omega)\) and \(\epsilon_{2}(\omega)\) show a similar trend as NBT at higher energies (above 2.5 eV).

Figure 3: The atom-projected band structure of Na\({}_{0.5}\)Bi\({}_{0.5}\)(Ti\({}_{0.75}\)Ni\({}_{0.25}\))O\({}_{3}\) for O and Ni atoms for majority (panels (a), (b)) and minority (panels (c), (d)) spin electrons.

Figure 4: The density of states and orbital-projected density of states for Na\({}_{0.5}\)Bi\({}_{0.5}\)TiO\({}_{3}\) (panels (a) and (b)) and Na\({}_{0.5}\)Bi\({}_{0.5}\)(Ti\({}_{0.75}\)Ni\({}_{0.25}\))O\({}_{3}\) (panels (c) and (d)).

And, as can be inferred from Fig.
4, the reason for the observed peaks is attributed to the same interband transitions, O-\(2p\) to Ti-\(3d\) and Bi-\(6p\). In the low-energy region (below 0.5 eV), however, we observe a diverging nature of \(\epsilon_{1}(\omega)\). The reason for this could be attributed to the half-metallic nature of the material.
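As a quick numerical check of the Kramers-Kronig relation, Eq. (2) above, the principal-value integral can be discretized on a uniform frequency grid. The sketch below uses an illustrative single-Lorentz-oscillator \(\epsilon_{2}(\omega)\) (parameters invented for the example, not the computed NBT spectrum) and recovers the static dielectric constant of that model, \(\epsilon_{1}(0)=1+\omega_{p}^{2}/\omega_{0}^{2}\).

```python
import math

# Numerical sketch of the Kramers-Kronig transform, Eq. (2): recover the
# real part eps1 from a model imaginary part eps2. The Lorentz-oscillator
# eps2 below is illustrative, not a computed material spectrum.
W0, GAMMA, WP2 = 4.0, 0.5, 30.0  # resonance, damping, oscillator strength

def eps2(w):
    d = (W0**2 - w**2)**2 + (GAMMA * w)**2
    return WP2 * GAMMA * w / d

def eps1_kk(w, wmax=100.0, n=100000):
    """Principal-value integral on a uniform grid, skipping the singular point."""
    dw = wmax / n
    total = 0.0
    for i in range(1, n):
        wp = i * dw
        denom = wp**2 - w**2
        if abs(denom) < dw:          # crude exclusion of the pole
            continue
        total += wp * eps2(wp) / denom * dw
    return 1.0 + (2.0 / math.pi) * total

# Static dielectric constant from KK vs the analytic Lorentz value:
print(eps1_kk(0.0), 1.0 + WP2 / W0**2)
```

For the chosen parameters the numerical value agrees with the analytic one to a few parts in a thousand; well above the resonance the same routine reproduces the negative-going, sub-unity \(\epsilon_{1}\) behaviour discussed in the text.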
of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the 
states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of 
the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of the states of states of the states of the states of states of the states of the states of the states of the states of the states of the states of the states of the states of states of the states of the states of the states of the states of the states of states of the states of the states of the states of the states of states of the states of the states of the states of the states of the states of the states of the states of states of the states of states of the states of the states of the states of the states of states of the states of the states of the states of the states of the states of the states of states of states of the states of the states of the states of states of the states of the states of states of the states of the states of states of the states of the states of the states of states of the states of states of the states of the states of states of the states of states of the states of states of the states of the states of the states of states of the states of the states of states of the states of the states of states of the states of the states of states of the states of the states of states of states of states of the states of the states of the states of states of the states of states of the states of the states of states of the states of states of the states of the states of states of states of the states of the states of states of states of states of the states of the states of states of the states 
of the states of states of states of the states of states of the states of the states of states of states of the states of states of the states of states of the states of states of states of states of states of states of the states of states of states of states of the states of the states of states of the states of states of the states of states of the states of states of the states of states of states of the states of the states of the states of states of the states of states of the states of the states of states of the states of states of the states of states of the states of states of the states of states of the states of the states of the states of states of the states of states of the states of states of the states of the states of the states of states of the states of the states of the states of the states of the states of the states of the states of the states of states of the states of the states of the states of states of the states of the states of states of the states of the states of the states of states of the states of the states of states of the states of the states of states of the states of the states of states of the states of states of the states of the states of the states of states of the states of states of the states of states of the states of the states of the states of states of the states of the states of the states of states of the states of states of the states of the states of states of the states of the states of the states of the states of the states of states of the states of states of the states of the states of the states of the states of the states of the states of the states of states of the states of the states of the states of the states of the states of the states of the states of states of the states of the states of the states of the states of the states of states of the states of the states of the states of states of the states of the states of states of the states of the states of the states of states of the states of the 
states of the states of the states of states of the states of the states of the states of the states of the states of the states of the states of the states of states of the states of the states of the states of the states of states of the states of states of the states of the states of the states of states of the states of the states of the states of the states of states of the states of the states of the states of the states of states of the states of the states of the states of the states of states of the states of the states of states of the states of states of the Ni-concentration. To get more insight and compare with experimental observations, we have examined the absorption coefficient, \(\alpha\), for NBT and Ni-NBT. In addition, we have extracted the optical bandgap, \(E_{g}\), for minority spin channel of Ni-NBT at different concentrations using the Tauc's plot [43]. The data from this are shown in Fig. 6The absorption coefficient of NBT and Ni-substituted NBT. As discernible from the panel (a) of the figure, for NBT, our simulation is in good agreement with the experiment, with a slight shift in the onset of the peak. The reason for this shift could be attributed to the temperature effects in experimental data. For Ni-NBT, on contrary, we observe nonzero peaks in the IR region, which is consistent with the electronic structure data suggesting the half-metallic nature Ni-NBT. Panel (c) shows the optical bandgap for different Ni-concentrations. Consistent with the trend in experiment our computed \(E_{g}\) decreases with Ni concentration. This could be explained in terms of inverse relationship between static dielectric constant and \(E_{g}\) using the Penn model [44]. The increased \(E_{g}\) for 25% concentration could be attributed the decrease in \(\epsilon_{1}(0)\) to 61 from 81 for 18%. Our computed \(E_{g}\) for NBT is close to the previous calculations [40]. The experimental values, Refs. 
[15; 36], are however on the higher side as they are at finite temperatures.

### Magnetic Properties

Next, as a probe of the magnetic degrees of freedom introduced in the system, we examined the magnetic moments of NBT and Ni-NBT. To find the actual ground-state magnetic configuration of the system, we probed both ferromagnetic (FM) and antiferromagnetic (AFM) orientations of the magnetic moments. From our calculations, we find the FM phase to be the ground state for all concentrations of Ni. This is evident from the relative energies of the FM and AFM phases given in Table 2, even for 25% of Ni, where the AFM energy is observed to be higher by \(\approx\) 25 meV. In Fig. 7(a) we have shown the total magnetic moment as a function of Ni concentration in Ni-NBT. Consistent with the literature, and as expected from its purely ferroelectric nature, we obtained a zero magnetic moment for NBT. For Ni-NBT, however, we observed a trend of increasing (nonzero) magnetic moments as a function of Ni concentration. The maximum magnetic moment observed is 1.48 \(\mu_{B}\)/f.u., for the highest concentration of 25%. The increase in the magnetic moment with concentration could be attributed to the increasing ferromagnetic exchange between neighboring Ni ions at higher concentrations. Our calculated magnetic moment, 0.76 \(\mu_{B}\)/f.u., for a concentration of 12.5% is in good agreement with the experimental value of 0.91 \(\mu_{B}\)/f.u. reported for 9% of Ni [15]. To get more insight into the origin of the nonzero magnetic moments in Ni-NBT, we examined the separate contributions from each ion. 
The data from this analysis is tabulated in Table 2 (relative energies of the FM and AFM configurations).
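The FM-versus-AFM assignment above reduces to a simple total-energy comparison between the two spin configurations. A minimal sketch of that criterion is below; the supercell energies are illustrative placeholders, not the computed values (only the \(\approx\) 25 meV difference echoes the 25% Ni case):

```python
def magnetic_ground_state(e_fm, e_afm):
    """Pick the lower-energy spin configuration from DFT total energies (eV).

    Returns the label and delta_e = E_AFM - E_FM; a positive delta_e
    means the ferromagnetic phase is the ground state.
    """
    delta_e = e_afm - e_fm
    return ("FM" if delta_e > 0 else "AFM"), delta_e

# Illustrative supercell energies (eV), not taken from this work:
state, delta_e = magnetic_ground_state(e_fm=-312.450, e_afm=-312.425)
# delta_e is ~0.025 eV (25 meV), so the FM phase is the ground state here.
```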
### Magneto-optical Properties

The presence of magnetic degrees of freedom in Ni-NBT leads to an anisotropy in the dielectric tensor due to the breaking of time-reversal symmetry. The dielectric tensor for a magnetized material could be written as \[\epsilon_{ij}=\epsilon_{ij}^{(0)}+\epsilon_{ij}^{(1)}, \tag{3}\] where \(\epsilon_{ij}^{(0)}\) is the dielectric tensor in the absence of magnetization and \(\epsilon_{ij}^{(1)}\) represents the contribution due to nonzero magnetization. To linear order in the magnetization \(M\), \(\epsilon_{ij}^{(1)}\) can be expressed as \(\epsilon_{ij}^{(1)}=K_{ijk}M_{k}\), where \(K\) is the magneto-optical coefficient. To examine the magneto-optical properties of Ni-NBT, we computed MOKE spectra in the polar configuration (Fig. 8, real and imaginary components of the magneto-optical permittivity), where both the incident linearly-polarized wave and the magnetization are perpendicular to the surface. The polar configuration is one of the most common setups used to trace magneto-optical properties experimentally. The Kerr rotation angle, \(\theta_{k}\), and Kerr ellipticity, \(\eta_{k}\), can be extracted from the diagonal and off-diagonal dielectric response as [45; 46] \[\theta_{k}+i\eta_{k}=-\frac{K}{\sqrt{\epsilon^{(0)}}(1-\epsilon^{(0)})}, \tag{4}\] where \(\epsilon^{(0)}\) is the diagonal component of the dielectric tensor in the absence of magnetization. Separating the real and imaginary components in Eq. (4), we can derive \[\theta_{k} = -\frac{[K_{1}^{2}+K_{2}^{2}]^{1/2}}{[\epsilon_{1}^{2}+\epsilon_{2}^{2}]^{1/4}[(1-\epsilon_{1})^{2}+\epsilon_{2}^{2}]^{1/2}}\cos\Theta\ \ \text{and} \tag{5}\] \[\eta_{k} = -\frac{[K_{1}^{2}+K_{2}^{2}]^{1/2}}{[\epsilon_{1}^{2}+\epsilon_{2}^{2}]^{1/4}[(1-\epsilon_{1})^{2}+\epsilon_{2}^{2}]^{1/2}}\sin\Theta, \tag{6}\] where \(\Theta=\tan^{-1}(\frac{K_{2}}{K_{1}})-\frac{1}{2}\tan^{-1}(\frac{\epsilon_{2}}{\epsilon_{1}})-\tan^{-1}(\frac{\epsilon_{2}}{1-\epsilon_{1}})\). 
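As a concrete illustration, the complex Kerr angle of Eq. (4) can be evaluated directly with complex arithmetic. The sketch below is a minimal implementation; the permittivity and \(K\) values are illustrative placeholders, not results from these calculations:

```python
import numpy as np

def kerr_polar(eps, K):
    """Polar MOKE, Eq. (4): theta_k + i*eta_k = -K / (sqrt(eps) * (1 - eps)).

    eps -- complex diagonal permittivity (eps_1 + i*eps_2) at zero magnetization
    K   -- complex magneto-optical coefficient (K_1 + i*K_2)
    Returns (theta_k, eta_k) in radians.
    """
    c = -K / (np.sqrt(eps) * (1.0 - eps))
    return c.real, c.imag

# Illustrative values only:
theta_k, eta_k = kerr_polar(eps=6.0 + 1.5j, K=0.02 + 0.01j)
theta_deg = np.degrees(theta_k)  # Kerr rotation in degrees
```

Because the expression is linear in \(K\), doubling the magneto-optical coefficient doubles both angles, which provides a quick sanity check on an implementation; the combined amplitude \(\sqrt{\theta_{k}^{2}+\eta_{k}^{2}}\) reproduces the common prefactor of Eqs. (5) and (6).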
The calculated real and imaginary parts of \(K\) and the complex Kerr rotation angles for different concentrations of Ni-NBT are shown in Fig. 8. As discernible from panel (d), the Kerr rotation shows the same qualitative behavior for all concentrations at higher energies, above \(\sim 2\) eV. There are substantial peaks along both the negative and positive \(y\)-axes, which are signatures of clockwise and anticlockwise polarizations, respectively, in the material. In the low energy range, below \(\sim 2\) eV, however, we observed a mixed trend for \(\theta_{k}\) at different concentrations. On the negative \(y\)-axis, the most significant peak, of amplitude \(0.58^{\circ}\), is observed at 3.3 eV for 6.25% concentration. The amplitude of these peaks is observed to decrease with Ni concentration. On the positive \(y\)-axis, however, a \(\theta_{k}\) reaching up to \(0.7^{\circ}\) around 10.8 eV is observed for 25% concentration. Unlike the trend for negative \(\theta_{k}\), the peak amplitude for positive \(\theta_{k}\) increases with Ni concentration. Fig. 8 also shows the Kerr ellipticity data as a function of energy. As with the trend of \(\theta_{k}\), all concentrations show a similar qualitative behavior at higher energies, whereas a mixed trend in the amplitudes is observed in the low energy range. We observed a significant peak of amplitude \(0.72^{\circ}\) around 10 eV on the negative \(y\)-axis for 6.25% concentration. Unlike \(\theta_{k}\), there is not much variation in the amplitude of this peak with Ni concentration. Consistent with the trend of \(\theta_{k}\), apart from this primary peak, we also observe a few secondary peaks of amplitudes \(0.33^{\circ}\), \(0.56^{\circ}\), and \(0.54^{\circ}\) at 4.0, 5.25 and 6.7 eV, respectively. 
The significant Kerr signals obtained from our simulations for Ni-NBT suggest it as a potential candidate for magneto-optical applications. In conclusion, with the help of density functional theory based first-principles calculations, we examined the effect of transition metal substitution on the electronic, ferroelectric, magnetic, optical and magneto-optical properties of NBT. In agreement with the literature, our simulations of the electronic properties show NBT to be a direct bandgap semiconductor. Our computed bandgap of 2.56 eV is within the range of previous theoretical calculations and experiments. For transition metal substituted NBT, we observed an emergence of half-metallic ferromagnetism in the system. Our simulations show that, while the minority spin channel exhibits a wide bandgap, there are nonzero states at the Fermi energy for the majority spin channel. The reason for this could be attributed to the shift in the energy levels of the majority spin states due to hybridization between the O \(2p\) and Ni \(3d\) states. This asymmetry in the two spin channels leads to the emergence of a nonzero permanent magnetic moment in the material. We obtained a magnetic moment of about 1.5 \(\mu_{\text{B}}\)/f.u. for 25% Ni concentration. For the optical properties of NBT, our simulation results are consistent with the available experimental and other theoretical results. For Ni-NBT, however, we observed a diverging nature of the static dielectric constant in the infrared region, which could be attributed to the half-metallic nature of Ni-NBT. Our MOKE simulations show significant values of the Kerr angles in Ni-NBT, which suggests transition metal substituted NBT as a potential candidate for magneto-optical applications.

###### Acknowledgements.

The authors wish to thank Ravi Kumar, Mohd Zeeshan and Indranil Mal for useful discussions. C. K. V. acknowledges the funding support from the Council of Scientific & Industrial Research, India (Grant No. 09/086(1297)/2017-EMR-I). B. K. M. acknowledges the funding support from SERB, DST (CRG/2022/003845). 
The results presented in the paper are based on computations using the High Performance Computing cluster, Padum, at the Indian Institute of Technology Delhi, New Delhi.
2308.14666
Learning to Predict 3D Rotational Dynamics from Images of a Rigid Body with Unknown Mass Distribution
In many real-world settings, image observations of freely rotating 3D rigid bodies may be available when low-dimensional measurements are not. However, the high-dimensionality of image data precludes the use of classical estimation techniques to learn the dynamics. The usefulness of standard deep learning methods is also limited, because an image of a rigid body reveals nothing about the distribution of mass inside the body, which, together with initial angular velocity, is what determines how the body will rotate. We present a physics-based neural network model to estimate and predict 3D rotational dynamics from image sequences. We achieve this using a multi-stage prediction pipeline that maps individual images to a latent representation homeomorphic to $\mathbf{SO}(3)$, computes angular velocities from latent pairs, and predicts future latent states using the Hamiltonian equations of motion. We demonstrate the efficacy of our approach on new rotating rigid-body datasets of sequences of synthetic images of rotating objects, including cubes, prisms and satellites, with unknown uniform and non-uniform mass distributions. Our model outperforms competing baselines on our datasets, producing better qualitative predictions and reducing the error observed for the state-of-the-art Hamiltonian Generative Network by a factor of 2.
Justice Mason, Christine Allen-Blanchette, Nicholas Zolman, Elizabeth Davison, Naomi Ehrich Leonard
2023-08-24T17:47:32Z
http://arxiv.org/abs/2308.14666v2
Learning to predict 3D rotational dynamics from images of a rigid body with unknown mass distribution ###### Abstract In many real-world settings, image observations of freely rotating 3D rigid bodies may be available when low-dimensional measurements are not. However, the high-dimensionality of image data precludes the use of classical estimation techniques to learn the dynamics. The usefulness of standard deep learning methods is also limited because an image of a rigid body reveals nothing about the distribution of mass inside the body, which, together with initial angular velocity, is what determines how the body will rotate. We present a physics-informed neural network model to estimate and predict 3D rotational dynamics from image sequences. We achieve this using a multi-stage prediction pipeline that maps individual images to a latent representation homeomorphic to \(\mathbf{SO}(3)\), computes angular velocities from latent pairs, and predicts future latent states using the Hamiltonian equations of motion. We demonstrate the efficacy of our approach on new rotating rigid-body datasets of sequences of synthetic images of rotating objects, including cubes, prisms and satellites, with unknown uniform and non-uniform mass distributions. ## 1 Introduction Whether a freely rotating 3D rigid body tumbles unstably or spins stably depends on the distribution of mass inside the body and the body's initial angular velocity (compare Figure 1(a) and (b)). This means that to predict the body's rotational dynamics, it is not enough to know the external geometry of the body. That would be insufficient, for instance, to predict the different behavior of two bodies with the same external geometry and different internal mass distribution. Even if the bodies start at the same initial angular velocity, one body could tumble or wobble while the other spins stably; compare Figure 1(b) and (d). 
Predicting 3D rigid body rotational dynamics is possible if the body's mass distribution can be learned from observations of the body in motion. This is easier if the observations consist of low-dimensional data, e.g., measurements of the body's angular velocity and the rotation matrix that defines the body's orientation. It is much more challenging, however, if the only available measurements consist of images of the body in motion, as in the case of remote observations of a satellite. We address the challenge of learning and predicting 3D rotational dynamics from image sequences of a rigid body with unknown mass distribution and unknown initial angular velocity. To do so we design a neural network model that leverages Hamiltonian structure associated with 3D rigid body dynamics. We show how our approach outperforms applicable methods from the existing literature. Deep learning has proven to be an effective tool to learn dynamics from images. Previous work [55; 48; 4] has made significant progress in using physics-based priors to learn dynamics from images of 2D rigid bodies, such as a pendulum. Learning dynamics of 3D rigid-body motion has also been explored with various types of input data [11; 45; 18]. We believe our method is the first to use the Hamiltonian formalism to learn 3D rigid-body rotational dynamics from images. In this work we introduce a model, with architecture depicted in Figure 2, that (1) learns 3D rigid-body rotational dynamics from images, (2) predicts future image sequences in time, and (3) generates a low-dimensional, interpretable representation of the latent state. Our model incorporates the Hamiltonian formulation of the dynamics as an inductive bias to facilitate learning the moment-of-inertia matrix, \(\mathbf{J}\), and an auto-encoding map between images and \(\mathbf{SO}(3)\). The efficacy of our approach is demonstrated through long-term image prediction on synthetic datasets. 
Due to the scarcity of appropriate datasets, we have created publicly available, synthetic datasets of rotating objects (e.g. cubes, prisms, and satellites) applicable for evaluation of our model as well as other tasks on 3D rigid-body rotation such as pose estimation. ## 2 Related work A growing body of work incorporates Hamiltonian and Lagrangian formalisms to improve the accuracy and interpretability of learned representations in neural network based dynamical systems forecasting [23; 13; 16]. Greydanus et al. [23] predict symplectic gradients of a Hamiltonian system using a Hamiltonian parameterized by a neural network. They show that the Hamiltonian neural network predicts the evolution of conservative systems better than a baseline black-box model. Chen et al. [13] improve the long-term prediction performance of [23] by minimizing the MSE over predicted state trajectories rather than one-step symplectic gradients. Cranmer et al. [16] propose parameterization of the system Lagrangian by a neural network, arguing that momentum coordinates may be difficult to compute in some settings. Each of the aforementioned methods learns from sequences of phase-space measurements; our model learns from images. The authors of [55; 4; 48] leverage Hamiltonian and Lagrangian neural networks to learn the dynamics of 2D rigid bodies (e.g., the planar pendulum) from image sequences. Zhong and Leonard [55] introduce a coordinate-aware VAE with a latent Lagrangian neural network (LNN) which learns the underlying dynamics and facilitates control. Allen-Blanchette et al. [4] use a latent LNN in an autoencoding neural network to learn dynamics without control or prior knowledge of the configuration-space structure. Toth et al. [48] use a latent Hamiltonian neural network in a VAE to learn dynamics without control, prior knowledge of the configuration-space structure or dimension. Similarly to Toth et al. [48], we use a latent Hamiltonian neural network to learn dynamics. Distinctly, however, we consider 3D rigid-body dynamics and incorporate prior knowledge of the configuration-space structure to ensure interpretability of the learned representations. Others have considered the problem of learning 3D rigid-body dynamics [11; 45; 18]. Byravan and Fox [11] use point-cloud data and action vectors (forces) as inputs to a black-box neural network to predict the resulting \(\mathbf{SE}(3)\) (rotation and translation) transformation matrix, which represents the motion of objects within the input scene. Peretroukhin et al. [45] create a novel symmetric matrix representation of \(\mathbf{SO}(3)\) and incorporate it into a neural network to do orientation prediction on synthetic point-cloud data and images. Duong and Atanasov [18] use low-dimensional measurement data (i.e. the rotation matrix and angular momenta) to learn rigid body dynamics on \(\mathbf{SO}(3)\) and \(\mathbf{SE}(3)\) for control. Figure 1: Simulations illustrating how mass distribution and initial angular velocity determine behavior. (a) Tumbling prism: uniform mass distribution (\(\mathbf{J}=\mathbf{J}_{1}\)) and initial angular velocity near an unstable solution. (b) Spinning prism: same uniform mass distribution (\(\mathbf{J}=\mathbf{J}_{1}\)) as (a) but with initial angular velocity near stable solution yields very different behavior from (a). (c) Spinning CALIPSO satellite: same mass distribution (\(\mathbf{J}=\mathbf{J}_{1}\)) and same initial angular velocity as (b) yields same behavior as (b) despite different exterior. (d) Wobbling prism: same initial velocity as (b) but different (nonuniform) mass distribution (\(\mathbf{J}=\mathbf{J}_{3}\)) yields different behavior from (b). Figure 2: A schematic of the model's forward pass at training time and inference. 
The architecture combines an auto-encoding neural network with Hamiltonian dynamics for 3D rigid bodies. At training time, the encoder maps a sequence of images to a sequence of latent states in \(\mathbf{SO}(3)\). We estimate body angular momenta (a), then predict future latent orientation (\(\mathbf{R}\)) and momentum states (\(\mathbf{\Pi}\)) using the learned Hamiltonian (d). Each encoded and predicted latent orientation is decoded into an image using orientation states predicted by the learned dynamics (c, d). At inference, we use the encoder to map two images to latent states in \(\mathbf{SO}(3)\) and estimate body angular velocities and momenta (b), then we decode those predicted latent orientations into images (d). The combination of deep learning with physics-based priors allows models to learn dynamics from high-dimensional data such as images [55; 48; 4]. However, as far as we know, our method is the first to use the Hamiltonian formalism to learn 3D rigid-body rotational dynamics from images. ## 3 Background ### The \(\mathcal{S}^{2}\times\mathcal{S}^{2}\) parameterization of the 3D rotation group \(\mathbf{SO}(3)\) The \(\mathcal{S}^{2}\times\mathcal{S}^{2}\) parameterization of the 3D rotation group \(\mathbf{SO}(3)\) is a surjective and differentiable mapping with a continuous right inverse [19]. Define the \(n\)-sphere: \(\mathcal{S}^{n}=\left\{\mathbf{v}\in\mathbb{R}^{(n+1)}|\ v_{1}^{2}+v_{2}^{2}+\cdots+v_{n+1}^{2}=1\right\}\). The \(\mathcal{S}^{2}\times\mathcal{S}^{2}\) parameterization of \(\mathbf{SO}(3)\) is given by \((u,v)\mapsto(w_{1},w_{2},w_{3})\) with \(w_{1}=u,\ w_{2}=v-\langle u,v\rangle u,\ w_{3}=w_{1}\times w_{2}\), where the \(w_{i}\) are renormalized to have unit norm. Intuitively, this mapping constructs an orthonormal frame from the unit vectors \(u\) and \(v\) by Gram-Schmidt orthogonalization. The right inverse of the parameterization is given by \((w_{1},w_{2},w_{3})\mapsto(w_{1},w_{2})\). 
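As a concrete illustration, the Gram-Schmidt construction just described fits in a few lines of NumPy. This is our sketch, not the authors' code; stacking the frame vectors as matrix columns is our convention.

```python
import numpy as np

def s2s2_to_so3(u, v):
    """Gram-Schmidt map (u, v) -> (w1, w2, w3), an orthonormal frame in SO(3)."""
    w1 = u / np.linalg.norm(u)
    w2 = v - np.dot(w1, v) * w1      # remove the component of v along w1
    w2 = w2 / np.linalg.norm(w2)
    w3 = np.cross(w1, w2)            # completes a right-handed frame
    return np.stack([w1, w2, w3], axis=1)

def so3_to_s2s2(R):
    """Continuous right inverse: keep the first two frame vectors."""
    return R[:, 0], R[:, 1]
```

Applying `s2s2_to_so3` to the output of `so3_to_s2s2` recovers the original rotation, which is exactly the right-inverse property stated above.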
Other parameterizations of \(\mathbf{SO}(3)\) such as the exponential map (\(\mathfrak{so}(3)\mapsto\mathbf{SO}(3)\)) and the quaternion map (\(\mathcal{S}^{3}\mapsto\mathbf{SO}(3)\)) do not have continuous inverses and therefore are more difficult to use in deep manifold regression [19; 34; 9]. ### 3D rotating rigid-body kinematics The orientation of a rotating 3D rigid body \(\mathbf{R}(t)\in\mathbf{SO}(3)\) changing over time \(t\) can be computed from angular velocity \(\mathbf{\Omega}(t)\) using the kinematic equations given by the time-rate-of-change of \(\mathbf{R}(t)\) shown in equation (7). For computational purposes, 3D rigid-body rotational kinematics are commonly expressed in terms of the quaternion representation \(\mathbf{q}(t)\in\mathcal{S}^{3}\) of the rigid-body orientation \(\mathbf{R}(t)\). The kinematics (7) written in terms of quaternions [5] are \[\frac{d\mathbf{q}(t)}{dt}=\mathbf{Q}(\mathbf{\Omega}(t))\mathbf{q}(t),\ \ \ \mathbf{Q}(\mathbf{\Omega})=\begin{pmatrix}-\mathbf{\Omega}_{\times}&\mathbf{\Omega}\\ -\mathbf{\Omega}^{T}&0\end{pmatrix}. \tag{1}\] ### 3D rigid-body dynamics in Hamiltonian form The canonical Hamiltonian formulation derives the equations of motion for a mechanical system using only the symplectic form and a Hamiltonian function, which maps the state of the system to its total (kinetic plus potential) energy [22]. This formulation has been used by several authors to learn unknown dynamics: the Hamiltonian structure (canonical symplectic form) is used as a physics prior and the unknown dynamics are uncovered by learning the Hamiltonian [23; 57; 48]. Consider a system with configuration space \(\mathbb{R}^{n}\) and a choice of \(n\) generalized coordinates that represent configuration. Let \(\mathbf{z}(t)\in\mathbb{R}^{2n}\) represent the vector of \(n\) generalized coordinates and their \(n\) conjugate momenta at time \(t\). 
Define the Hamiltonian function \(\mathcal{H}:\mathbb{R}^{2n}\mapsto\mathbb{R}\) such that \(\mathcal{H}(\mathbf{z})\) is the sum of the kinetic plus potential energy. The equations of motion [22] are then derived as \[\frac{d\mathbf{z}}{dt}=\Lambda_{\mathrm{can}}\nabla_{\mathbf{z}}\mathcal{H}(\mathbf{z}),\ \ \ \Lambda_{\mathrm{can}}=\begin{pmatrix}\mathbf{0}_{n}&\mathbf{I}_{n}\\ -\mathbf{I}_{n}&\mathbf{0}_{n}\end{pmatrix} \tag{2}\] where \(\mathbf{0}_{n}\in\mathbb{R}^{n\times n}\) is the matrix of all zeros and \(\Lambda_{\mathrm{can}}\) is the matrix representation of the _canonical symplectic form_. The Hamiltonian equations of motion for a freely rotating 3D rigid body evolve on the 6-dimensional space \(T^{*}\mathbf{SO}(3)\), the co-tangent bundle of \(\mathbf{SO}(3)\). However, because of rotational symmetry in the dynamics, i.e., the invariance of the dynamics of a freely rotating rigid body to the choice of inertial frame, the Hamiltonian formulation of the dynamics can be reduced using the Lie-Poisson Reduction Theorem [40] to the space \(\mathbb{R}^{3}\sim\mathfrak{so}^{*}(3)\), the Lie co-algebra of \(\mathbf{SO}(3)\). These reduced Hamiltonian dynamics are equivalent to (6), where body angular momentum is \(\mathbf{\Pi}(t)\in\mathfrak{so}^{*}(3)\), for \(t\geq 0\). The invariance can be seen by observing that the rotation matrix \(\mathbf{R}(t)\), which describes the orientation of the body at time \(t\), does not appear in (6). \(\mathbf{R}(t)\) is calculated from the solution of (6) using (7). The reduced Hamiltonian \(h:\mathfrak{so}^{*}(3)\mapsto\mathbb{R}\) for the freely rotating 3D rigid body is just the kinetic energy: \[h(\mathbf{\Pi})=\frac{1}{2}\mathbf{\Pi}\cdot\mathbf{J}^{-1}\,\mathbf{\Pi}. 
\tag{3}\] The reduced Hamiltonian formulation [40] is \[\frac{d\mathbf{\Pi}}{dt}=\Lambda_{\mathfrak{so}^{*}(3)}(\mathbf{\Pi})\nabla_{ \mathbf{\Pi}}h(\mathbf{\Pi}),\quad\Lambda_{\mathfrak{so}^{*}(3)}(\mathbf{\Pi})= \mathbf{\Pi}_{\times}, \tag{4}\] which can be seen to be equivalent to (6). The equations (4), called _Lie-Poisson equations_, generalize the canonical Hamiltonian formulation. The generalization allows for different symplectic forms, i.e., \(\Lambda_{\mathfrak{so}^{*}(3)}\) instead of \(\Lambda_{\mathrm{can}}\) in this case, each of which is only related to the latent space and symmetry. Our physics prior is the generalized symplectic form and learning the unknown dynamics means learning the reduced Hamiltonian. This is a generalization of the existing literature where dynamics of canonical Hamiltonian systems are learned with the canonical symplectic form as the physics prior [23; 16; 13; 48]. Using the generalized Hamiltonian formulation allows extension of the approach to a much larger class of systems than those described by Hamilton's canonical equations, including rotating and translating 3D rigid bodies, rigid bodies in a gravitational field, multi-body systems, and more. ## 4 Learning Hamiltonian dynamics on \(T^{*}\mathbf{SO}(3)\) In this section we outline our approach for learning and predicting rigid-body dynamics from image sequences. The multi-stage prediction pipeline maps individual images to an \(\mathbf{SO}(3)\) latent space where angular velocities are computed from latent pairs. Future latent states are computed using the generalized Hamiltonian equations of motion (4) and a learned representation of the reduced Hamiltonian (3). Finally, the predicted latent representations are mapped to images giving a predicted image sequence. ### Notation \(N\) denotes the number of image sequences in the dataset, and \(T+1\) is the length of each image sequence. 
Image sequences are written \(\mathbf{x}_{k}=\{x_{0}^{k},\ldots,x_{T}^{k}\}\), embedded sequences are written \(Z_{k}=\{z_{0}^{k},\ldots,z_{T}^{k}\}\), \(\mathbf{SO}(3)\) latent sequences of rotation matrices are written \(\mathbf{R}_{k}=\{R_{0}^{k},\ldots,R_{T}^{k}\}\) with \(R_{t}^{k}\in\mathbf{SO}(3)\), and quaternion latent sequences are written \(\mathbf{q}_{k}=\{q_{0}^{k},\ldots,q_{T}^{k}\}\) with \(q_{t}^{k}\in\mathcal{S}^{3}\). Each element \(\mathbf{x}_{t}^{k}\) represents the quantity \(\mathbf{x}\) at time step \(t\) for sequence \(k\) from the dataset, where \(k\in\{1,\ldots,N\}\). Quantities generated with the learned dynamics are denoted with a hat (e.g., \(\hat{q}\)). ### Embedding to an \(\mathbf{SO}(3)\) latent space We embed image observations of a rotating rigid body to an \(\mathbf{SO}(3)\) latent space using the composition of functions \(f\circ\pi\circ E_{\theta}:\mathcal{I}\mapsto\mathbf{SO}(3)\). The encoder network \(E_{\theta}:\mathcal{I}\mapsto\mathbb{R}^{6}\) is learned during training, the projection \(\pi:\mathbb{R}^{6}\mapsto\mathcal{S}^{2}\times\mathcal{S}^{2}\) is defined as \(\pi(z)=(u/\|u\|,\,v/\|v\|),\,\,\,u,v\in\mathbb{R}^{3}\) where \(z=(u,\,v)\), and the function \(f:\mathcal{S}^{2}\times\mathcal{S}^{2}\mapsto\mathbf{SO}(3)\) denotes the surjective and differentiable \(\mathcal{S}^{2}\times\mathcal{S}^{2}\) parameterization of \(\mathbf{SO}(3)\) (see Section 3.1) which constrains embedded representations to the \(\mathbf{SO}(3)\) manifold where we do reconstruction of the dynamics. ### Our approach In our method, we encode images into our latent space, predict future latent states using the learned Hamiltonian dynamics and kinematics, and decode latent states predicted using learned dynamics into images (see Figure 2). The learned dynamics are a function of the learned moment-of-inertia tensor, \(\mathbf{J}_{\psi}\). 
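To make the role of the latent dynamics concrete, here is a minimal NumPy sketch of integrating the reduced Hamiltonian dynamics (4) together with the quaternion kinematics (1), as the pipeline does. The inertia tensor, step size, and the explicit 1/2 factor in the quaternion kinematics (the standard convention) are our assumptions, not code from the paper.

```python
import numpy as np

def hat(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rigid_body_rhs(state, J_inv):
    """Time derivative of (q, Pi): quaternion kinematics plus Lie-Poisson dynamics."""
    q, Pi = state[:4], state[4:]
    Omega = J_inv @ Pi
    # Quaternion kinematics in the (vector, scalar) convention.
    Q = np.zeros((4, 4))
    Q[:3, :3] = -hat(Omega)
    Q[:3, 3] = Omega
    Q[3, :3] = -Omega
    dq = 0.5 * Q @ q
    dPi = np.cross(Pi, Omega)  # Euler's equations: dPi/dt = Pi x Omega
    return np.concatenate([dq, dPi])

def rk4_step(state, dt, J_inv):
    """One fourth-order Runge-Kutta step with quaternion renormalization."""
    k1 = rigid_body_rhs(state, J_inv)
    k2 = rigid_body_rhs(state + 0.5 * dt * k1, J_inv)
    k3 = rigid_body_rhs(state + 0.5 * dt * k2, J_inv)
    k4 = rigid_body_rhs(state + dt * k3, J_inv)
    new = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    new[:4] /= np.linalg.norm(new[:4])
    return new
```

The exact flow conserves both the energy \(h(\mathbf{\Pi})=\frac{1}{2}\mathbf{\Pi}\cdot\mathbf{J}^{-1}\mathbf{\Pi}\) and \(\|\mathbf{\Pi}\|\), which gives a convenient sanity check on any integrator used in the latent space.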
The tensor \(\mathbf{J}_{\psi}\) is parameterized using two learned vectors (diagonal and off-diagonal components) that produce a symmetric positive definite (SPD) tensor using Cholesky decomposition [36]. Training: At training time, a sequence of images is given as input to our model. The input sequence length is 10 images. Each image is encoded into an \(\mathbf{SO}(3)\) latent state, \(\mathbf{R}_{k}\) (see Figure 2(a)). We compute an angular momentum vector for each latent pair \((R_{t}^{k},R_{t+1}^{k})\) as \(\Pi_{t}^{k}=\mathbf{J}_{\psi}\Omega_{t}^{k}\), where each angular velocity \(\Omega_{t}^{k}\) is estimated using the algorithm proposed in [27] (see Appendix A.7). We construct the initial condition \(s_{0}^{k}=(q_{0}^{k},\Pi_{0}^{k})\in(\mathcal{S}^{3},\mathbb{R}^{3})\). A modified Shepperd's algorithm [39] is used to convert the rotation matrix \(R_{0}^{k}\) to the quaternion \(q_{0}^{k}\). Using the latent initial condition, the reduced Hamiltonian dynamics (4) and kinematic equations (1) are integrated forward in time using a Runge-Kutta fourth-order numerical solver (RK45) (with a normalization step [5]) to compute the sequence of latent orientations and angular momenta given by the learned dynamics. Both sequences of latent orientations (produced by the encoder and dynamics) are decoded into a sequence of images (see Figure 2(c) and (d)). The model also outputs the angular momentum estimates and latent orientations given by the encoder (\(\Pi_{ae}\) and \(R_{ae}\)) and learned dynamics (\(\Pi_{d}\) and \(R_{d}\)). Inference: At inference, the model is given two images as input. The images are encoded into our \(\mathbf{SO}(3)\) latent space as \(R_{0}^{k}\) and \(R_{1}^{k}\), respectively (see Figure 2(b)). Similar to training time, the latent states \(R_{0}^{k}\) and \(R_{1}^{k}\) are used to estimate the initial angular momentum, \(\Pi_{0}^{k}\). 
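The Cholesky parameterization of \(\mathbf{J}_{\psi}\) mentioned above can be sketched as follows; using `exp` to keep the diagonal positive is our assumption (any positive map such as softplus would do equally well).

```python
import numpy as np

def spd_from_cholesky_params(diag_params, offdiag_params):
    """Build an SPD 3x3 tensor J = L @ L.T from six unconstrained parameters."""
    L = np.zeros((3, 3))
    L[np.diag_indices(3)] = np.exp(diag_params)   # exp keeps the diagonal positive
    L[np.tril_indices(3, k=-1)] = offdiag_params  # strictly-lower entries are free
    return L @ L.T
```

Because `L` is lower triangular with a strictly positive diagonal, `L @ L.T` is symmetric positive definite for any parameter values, so gradient updates can never leave the space of valid inertia tensors.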
We construct the initial condition \(s_{0}^{k}=(q_{0}^{k},\Pi_{0}^{k})\in(\mathcal{S}^{3},\mathbb{R}^{3})\) and use the initial condition to integrate the dynamics (4) and kinematics equations (1) forward in time. The sequence of predicted latent orientations is decoded into a sequence of images (see Figure 2(d)). ### Loss functions In this section we describe each component of our loss function: the auto-encoder reconstruction loss \(\mathcal{L}_{\text{ae}}\), the dynamics-based reconstruction loss \(\mathcal{L}_{\text{dyn}}\), and latent losses \(\mathcal{L}_{\text{latent, R}}\) and \(\mathcal{L}_{\text{latent, \Pi}}\). The function \(\mathcal{L}_{\text{ae}}\) ensures the embedding to \(\mathbf{SO}(3)\) is sufficiently expressive to represent the entire image dataset, and \(\mathcal{L}_{\text{dyn}}\) ensures agreement between input image sequences and the predicted image sequences produced using the learned dynamics. Both \(\mathcal{L}_{\text{latent, R}}\) and \(\mathcal{L}_{\text{latent, \Pi}}\) ensure consistency between the encoded latent states given by the encoding pipeline and the predicted latent states given by the learned dynamics. 
For notational convenience we denote the embedding pipeline \(\mathcal{E}:\mathcal{I}\mapsto\mathcal{S}^{3}\) and the decoding pipeline \(\mathcal{D}:\mathcal{S}^{3}\mapsto\mathcal{I}\). Reconstruction losses: The auto-encoding reconstruction loss is the mean square error (MSE) between the ground-truth image sequence and the reconstructed image sequence: \[\mathcal{L}_{\text{ae}}=\frac{1}{NT}\sum_{k=1}^{N}\sum_{t=0}^{T-1}\big{\|}\,I_{t}^{k}-(\mathcal{D}\circ\mathcal{E})\big{(}I_{t}^{k}\big{)}\,\big{\|}_{2}^{2}\,.\] The prediction-based reconstruction loss function is the MSE between the ground-truth image sequence and the predicted image sequence: \[\mathcal{L}_{\text{dyn}}=\frac{1}{NT}\sum_{k=1}^{N}\sum_{t=1}^{T}\big{\|}\,I_{t}^{k}-\mathcal{D}\,\big{(}q_{d,t}^{k}\big{)}\,\big{\|}_{2}^{2}\,.\] Latent losses: One of the latent state losses is a distance metric on \(\mathbf{SO}(3)\) [27], defined as the MSE (using the Frobenius norm \(\|\cdot\|_{F}\)) between the \(3\times 3\) identity matrix and the right-difference of the encoded latent states and the latent states predicted using the learned dynamics: \[\mathcal{L}_{\text{latent, R}}=\frac{1}{NT}\sum_{k=1}^{N}\sum_{t=1}^{T}\big{\|}\,\mathbf{I}_{3}\,-\,\big{(}R_{\text{ae, t}}^{k}\big{)}^{T}\,R_{d,t}^{k}\big{\|}_{F}^{2}\,.\] The other latent state loss is computed as the MSE of the angular momenta estimated using the encoded latent states (see Figure 2) and the predicted angular momenta: \[\mathcal{L}_{\text{latent,\Pi}}\,=\,\frac{1}{NT}\sum_{k=1}^{N}\sum_{t=1}^{T}\big{\|}\Pi_{\text{ae, t}}^{k}\,-\,\Pi_{d,t}^{k}\big{\|}_{2}^{2}\,.\] ## 5 3D rotating rigid-body datasets To evaluate our model, we introduce six synthetic datasets of freely rotating objects. 
Previous efforts in learning dynamics from images [23; 48; 4; 55] consider only 2D planar systems (e.g., the simple pendulum, Acrobot, and cart-pole); existing datasets of freely rotating rigid bodies in 3D, such as SPEED [31; 47], SPEED+ [44], and URSO [46], contain random image-pose pairs rather than the sequential pairs needed for video prediction and dynamics extraction. Our datasets showcase the rich dynamical behaviors of 3D rotational dynamics through images and can be used for 3D dynamics learning tasks. Furthermore, we acknowledge the need for applying our method to application-relevant images such as satellites. Accordingly, we have created the following new datasets on which we empirically test the performance of our model:

* **Uniform mass density cube**: Multi-colored cube of uniform mass density
* **Uniform mass density prism**: Multi-colored rectangular prism with uniform mass density
* **Non-uniform mass density cube**: Multi-colored cube with non-uniform mass density
* **Non-uniform mass density prism**: Multi-colored prism with non-uniform mass density
* **Uniform density synthetic satellites**: Renderings of CALIPSO and CloudSat satellites with uniform mass density

The uniform mass density cube and prism datasets demonstrate baseline capabilities of our approach to do image prediction, while the non-uniform density prism and uniform density satellite datasets are used to demonstrate capabilities of our model in scenarios of more complicated dynamics, as explained in Section 1. For the uniform mass density cube dataset, inspired by Falorsi et al. [19], the angular momentum vector is constant for any initial condition. For every other dataset, including the uniform mass density prism, the full gamut of rigid rotational dynamics is possible, including stable spin as shown in Figure 1(b) and (c), wobble as shown in Figure 1(d), as well as unstable tumble as shown in Figure 1(a). 
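Initial conditions for these datasets are drawn uniformly on \(\mathbf{SO}(3)\times S^{2}\), with the momentum magnitude fixed, as described below. One way to sketch such sampling in NumPy is shown here; the route via random unit quaternions and the helper names are our choices, not the authors' code (the magnitude 50 follows the text).

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def sample_initial_condition(rng, momentum_scale=50.0):
    """Draw (R0, Pi0) with R0 uniform on SO(3) and Pi0 uniform in direction."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)        # normalized Gaussian -> uniform unit quaternion
    R0 = quat_to_rot(q)
    Pi0 = rng.normal(size=3)
    Pi0 *= momentum_scale / np.linalg.norm(Pi0)   # uniform direction on S^2
    return R0, Pi0
```

Normalizing a 4-dimensional Gaussian gives a uniform point on \(\mathcal{S}^{3}\), and the quaternion double cover then yields a rotation distributed uniformly (Haar measure) on \(\mathbf{SO}(3)\).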
The non-uniform mass density datasets validate the model's capability to predict a change in mass distribution that may not be visible from the exterior, e.g., as part of failure diagnostics on a satellite where there may be broken or shifted internal components. For each dataset, \(N=1000\) trajectories are created. Each trajectory consists of an initial condition \(\mathbf{x}_{0}=(\mathbf{R}_{0},\mathbf{\Pi}_{0})\) that is integrated forward in time using a Python-based Runge-Kutta solver for \(T=100\) timesteps with spacing \(\Delta t=10^{-3}\). Initial conditions are chosen such that \((\mathbf{R}_{0},\mathbf{\Pi}_{0})\sim\text{Uniform}\left(\mathbf{SO}(3)\times S^{2}\right)\) with \(\mathbf{\Pi}_{0}\) scaled to have \(\|\mathbf{\Pi}_{0}\|_{2}=50\). The orientations \(\hat{\mathbf{q}}\) from the integrated trajectories are passed to Blender [15] to render images of \(28\times 28\) pixels (as shown in Figure 1). At training time, each trajectory is snipped into sequences of length \(\tau=10\) using a sliding window, so that the model is trained on a batch of image sequences of length \(\tau\). ## 6 Results ### Image prediction A key contribution of this work is image prediction for freely rotating rigid bodies using the learned dynamics model. The model is capable of high-accuracy future prediction across six datasets. Figures 3 and 4 show the model's performance on the datasets for both short- and long-term predictions. The complete performance of our model on the cube and prism datasets is shown in Figure 5 in Appendix A.4. The model's performance on the datasets is indicative of its capabilities to predict dynamics and map them to image space. Specifically, the model's prediction performance on the non-uniform mass density datasets demonstrates its successful prediction when the mass distribution is visually ambiguous. This is particularly important because mass density (and thus rotational dynamics) is not something that can easily be inferred directly from images, as we have discussed in Section 1. Figure 3: Predicted sequences for the uniform mass density prism dataset given by our model. At prediction time, the model takes the first two images of a sequence, encodes them into the \(\mathbf{SO}(3)\) latent space, and estimates the body angular momentum (Appendix A.7). The model predicts future latent states (using the estimated body angular momentum and first latent state as initial conditions), then decodes the predicted latent states into images. The prediction results show that the model is capable of predicting into the future at least five-fold the time horizon used at training time. The model is compared to three baseline models: (1) an LSTM-baseline, (2) a Neural ODE [12]-baseline, and (3) the HGN [48] model. Recurrent neural networks like the LSTM-baseline provide a discrete dynamics model. Neural ODE can be combined with a multi-layer perceptron to model and predict continuous dynamics. HGN is a generative model with a Hamiltonian inductive bias. Architecture and training details for each baseline are given in Appendix A.5. The prediction performances of our model and the baselines are shown in Table 1. Our model outperforms the baseline models on the majority of the datasets with a more interpretable latent space, continuous dynamics, and fewer model parameters, the motivating principles for this work. ### Latent space analysis and interpretability Another contribution of this work is the interpretability of the latent space generated by our model from the image datasets. Black-box models with high-dimensional latent states make it very difficult to interpret and inspect the behavior of the latent space. Because our approach encodes all images into a latent space homeomorphic to \(\mathbf{SO}(3)\), we have convenient ways to interpret this low-dimensional space. The form of our latent space provides us a way of inspecting the behavior of our model that previous works lack. 
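One practical consequence of an \(\mathbf{SO}(3)\)-valued latent space is that latent states can be sanity-checked directly. The following diagnostic and SVD-based projection are our sketch, not part of the paper's code.

```python
import numpy as np

def so3_defect(R):
    """Orthogonality and determinant errors of a candidate rotation matrix."""
    ortho_err = np.linalg.norm(R.T @ R - np.eye(3))
    det_err = abs(np.linalg.det(R) - 1.0)
    return ortho_err, det_err

def project_to_so3(M):
    """Nearest rotation matrix in the Frobenius sense, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

A matrix produced by the \(\mathcal{S}^{2}\times\mathcal{S}^{2}\) map should report defects at machine precision; for an arbitrary \(3\times 3\) matrix, `project_to_so3` returns the closest valid rotation.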
In this sense, our approach provides a step towards producing new frameworks for interpreting deep learning models, analyzing failure modes, and using control for dynamic systems with significant structure from prior knowledge. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Ours} & \multicolumn{2}{c}{LSTM-baseline} & \multicolumn{2}{c}{NeuralODE-baseline} & \multicolumn{2}{c}{HGN} \\ & TRAIN & TEST & TRAIN & TEST & TRAIN & TEST & TRAIN & TEST \\ \hline Uniform Prism & **2.66 \(\pm\) 0.10** & **2.71 \(\pm\) 0.08** & 3.46 \(\pm\) 0.59 & 3.47 \(\pm\) 0.61 & 3.96 \(\pm\) 0.68 & 4.00 \(\pm\) 0.68 & 4.18 \(\pm\) 0.0 & 7.80 \(\pm\) 0.30 \\ Uniform Cube & **3.54 \(\pm\) 0.17** & **3.97 \(\pm\) 0.16** & 2.15 \(\pm\) 1.98 & 2.64 \(\pm\) 2.12 & 4.98 \(\pm\) 1.93 & 4.20 & 17.43 \(\pm\) 0.00 & 18.69 \(\pm\) 0.12 \\ Non-uniform Prism & **4.27 \(\pm\) 0.18** & **6.16 \(\pm\) 0.88** & 4.50 \(\pm\) 1.31 & **4.52 \(\pm\) 1.34** & 4.67 \(\pm\) 0.58 & 4.75 \(\pm\) 0.59 & 6.16 \(\pm\) 0.08 & 8.33 \(\pm\) 0.26 \\ Non-uniform Cube & **6.24 \(\pm\) 0.29** & **4.85 \(\pm\) 0.35** & **3.74 \(\pm\) 0.51** & 7.51 \(\pm\) 0.50 & 7.89 \(\pm\) 0.50 & 7.94 \(\pm\) 1.59 & 1.41 \(\pm\) 0.13 & 18.14 \(\pm\) 0.36 \\ CALIPSO & 0.79 \(\pm\) 0.53 & 0.87 \(\pm\) 0.50 & **0.62 \(\pm\) 0.21** & **0.65 \(\pm\) 0.22** & 6.69 \(\pm\) 0.26 & 0.71 \(\pm\) 0.27 & 1.18 \(\pm\) 0.02 & 1.34 \(\pm\) 0.05 \\ CloudSat & **0.64 \(\pm\) 0.45** & **0.65 \(\pm\) 0.29** & 0.89 \(\pm\) 0.36 & 0.93 \(\pm\) 0.43 & 0.65 \(\pm\) 0.22 & 0.66 \(\pm\) 0.25 & 1.48 \(\pm\) 0.04 & 1.66 \(\pm\) 0.11 \\ \hline Number of Parameters & \multicolumn{2}{c}{6} & \multicolumn{2}{c}{52400} & \multicolumn{2}{c}{11400} & \multicolumn{2}{c}{-} \\ \hline \hline \end{tabular} \end{table} Table 1: Average pixel mean square error over a 30-step prediction on the train and test data on six datasets. All values are multiplied by 1e+3. 
We evaluate our model and compare it to three baseline models: (1) a recurrent model (LSTM [26]), (2) NeuralODE ([12]), (3) HGN ([48]). Our model outperforms all baseline models in the prediction task across the majority of the datasets. The number of parameters for the dynamics model of each baseline is given in the last row of the table. Our model uses significantly fewer parameters to estimate the latent dynamics. Figure 4: Predicted sequences for the CALIPSO satellite [1] (top) and CloudSat satellite [2] (bottom) with uniform mass densities given by our model. The prediction results show that the model is capable of predicting into the future with images of more complex geometries and surface properties at least ten-fold the time horizon used at training time. ## 7 Conclusions ### Summary In this work, we have presented the first physics-informed deep learning framework for predicting image sequences of 3D rotating rigid bodies by embedding the images as measurements in the configuration space \(\mathbf{SO}(3)\) and propagating the Hamiltonian dynamics forward in time. We have evaluated our approach on new datasets of freely rotating 3D bodies with different inertial properties, and have demonstrated the ability to perform long-term image predictions. By enforcing the representation of the latent space to be \(\mathbf{SO}(3)\), this work provides the advantage of interpretability over black-box physics-informed approaches. The extra interpretability of our approach is a step towards placing additional trust in sophisticated deep learning models. This work provides a natural path to investigating how to incorporate--and evaluate the effect of--classical model-based control directly on trajectories in the latent space. This interpretability is essential to deploying ML algorithms in safety-critical environments. ### Limitations While this approach has shown significant promise, it is important to highlight that it has so far only been tested in an idealized setting. 
Future work can examine the effect of dynamic scenes with variable backgrounds, lighting conditions, and multiple bodies. Perhaps more limiting, this approach currently relies on the ability to train the model for each system being examined; however, future efforts can explore using transfer/few-shot learning between different 3D rigid-body systems. ### Potential Negative Societal Impacts While we do not believe this work directly facilitates injury to living beings, inconsistency between the predicted latent representation and ground truth data may lead to unexpected results if deployed in real-world environments. ### Future work Although our approach so far has been limited to embedding RGB images of rotating rigid bodies with configuration spaces in \(\mathbf{SO}(3)\), it is important to note that there are natural extensions to a wider variety of problems. For instance, this framework can be extended to embed different high-dimensional sensor measurements--such as point clouds--by only modifying the feature extraction layers of the autoencoder. Likewise, depending on the rigid-body system, the latent space can be chosen to reflect the appropriate configuration space, such as generic rigid bodies in \(\mathbf{SE}(3)\) or systems in more complicated spaces, such as the \(n\)-jointed robotic arm on a restricted subspace of \(\prod_{i=1}^{n}\mathbf{SO}(3)\).
2301.00318
Effect of Edge Roughness on resistance and switching voltage of Magnetic Tunnel Junctions
We investigate the impact of edge roughness on the electrical transport properties of magnetic tunnel junctions using the non-equilibrium Green's function formalism. We have modeled edge roughness as a stochastic variation in the cross-sectional profile of the magnetic tunnel junction, characterized by the stretched exponential decay of the correlation function. The stochastic variation in the shape and size changes the transverse energy mode profile and gives rise to variations in the resistance and switching voltage of the magnetic tunnel junction. We find that the variations are larger as the magnetic tunnel junction size is scaled down due to the quantum confinement effect. A model is proposed for the efficient calculation of edge roughness effects by approximating the cross-sectional geometry to a circle with the same cross-sectional area. Further improvement can be obtained by approximating the cross-sectional area to an ellipse with an aspect ratio determined by the first transverse eigenvalue corresponding to the 2D cross section. These results would be useful for the reliable design of spin transfer torque-magnetic random access memory (STT-MRAM) with ultra-small magnetic tunnel junctions.
Rachit R. Pandey, Sutapa Dutta, Heston A. Mendonca, Ashwin A. Tulapurkar
2023-01-01T01:29:00Z
http://arxiv.org/abs/2301.00318v1
# Effect of Edge Roughness on resistance and switching voltage of Magnetic Tunnel Junctions ###### Abstract We investigate the impact of edge roughness on the electrical transport properties of magnetic tunnel junctions using the non-equilibrium Green's function formalism. We have modeled edge roughness as a stochastic variation in the cross-sectional profile of the magnetic tunnel junction, characterized by the stretched exponential decay of the correlation function. The stochastic variation in the shape and size changes the transverse energy mode profile and gives rise to variations in the resistance and switching voltage of the magnetic tunnel junction. We find that the variations are larger as the magnetic tunnel junction size is scaled down due to the quantum confinement effect. A model is proposed for the efficient calculation of edge roughness effects by approximating the cross-sectional geometry to a circle with the same cross-sectional area. Further improvement can be obtained by approximating the cross-sectional area to an ellipse with an aspect ratio determined by the first transverse eigenvalue corresponding to the 2D cross section. These results would be useful for the reliable design of spin transfer torque-magnetic random access memory (STT-MRAM) with ultra-small magnetic tunnel junctions. Magnetic Tunnel Junction, spin transfer torque, circular edge roughness, non-equilibrium Green's function ## I Introduction A magnetic tunnel junction (MTJ) comprises two ferromagnetic layers (free layer and pinned layer) separated by a tunneling barrier. Binary information can be stored in MTJs corresponding to the parallel (P) and anti-parallel (AP) configurations of the magnetizations. The information can be read by measuring the resistance, which is low for the P and high for the AP configuration, respectively.
Spin transfer torque (STT), produced by applying large positive and negative voltages to the free layer with respect to the fixed layer, stabilizes the P and AP configurations respectively, and thus can be used for writing the memory. As ferromagnetic layers with perpendicular magnetic anisotropy (PMA) have a lower threshold switching voltage (\(V_{c}\)) with enhanced thermal stability, they are preferred over in-plane magnetized layers [1]. Reliability analysis of STT-MRAM (magnetic random access memory) in terms of write error, tunnel oxide breakdown, temperature variations, etc. has been carried out before [2, 3, 4, 5, 6, 7]. In this paper, we have investigated the effect of lithographic imperfections on the performance of the MTJ, which becomes more evident with technology scaling. Circular edge roughness (CER) is defined as the straying of a pattern from its expected circular shape and is used to characterize the unwanted sidewall roughness emerging during fabrication processes [8, 9, 10]. The threshold voltages and resistances of the MTJ have been calculated using the non-equilibrium Green's function (NEGF) method for 250 realizations of the sidewalls at fixed CER parameters. CER affects the area as well as the shape of the MTJ, which in turn changes the transverse mode energies of the electrons tunneling across the barrier and thus gives rise to variations in the resistance and threshold voltage. We have calculated these variations for a range of CER parameters. The NEGF calculation needs the transverse energy eigenvalues, which were obtained by solving the Schrödinger equation for a 2d potential well with a random boundary corresponding to each realization. ## II Simulation Methodology In the first step, the charge current and spin current for a given applied voltage across the MTJ are calculated using the 1d NEGF formalism, as a function of the transverse mode energy at a temperature of 300 K. The device Hamiltonian matrix is modeled using an effective mass tight-binding approach.
The transport of electrons across the device is assumed to be coherent. The effect of the contacts is taken into account as self-energy contributions to the Hamiltonian. Charge and spin currents are calculated from the energy-resolved electron correlation matrix [11, 12]. We used CoFeB as the ferromagnet for both the fixed and free layers, with the Fermi energy \(E_{F}\) = 2.25 eV and the exchange splitting \(\Delta\) = 2.15 eV. The barrier height from the Fermi level is taken as \(U_{B}\) = 0.76 eV. The effective masses of MgO (tunnelling barrier) and the FM are taken as 0.16 \(m_{e}\) and 0.38 \(m_{e}\), respectively, where \(m_{e}\) is the free electron mass. The thickness of the oxide layer is set to 0.9 nm. The charge and parallel spin current (spin current along the fixed layer direction) as a function of the transverse mode energy are tabulated for voltages ranging from -0.6 to 0.6 V for both P and AP configurations. In the second step, the transverse energy modes are found from the solution of the Schrödinger equation for a 2d infinite well with a boundary given by the cross-section of the MTJ. If the cross-section is a perfect circle, the eigenvalues of the Hamiltonian are known analytically. For an arbitrary cross-section, the eigenvalues can be found numerically using the finite difference method by discretizing the area into a square grid. In the third step, the charge current and spin current for each transverse mode are summed up to get the net charge and spin current for voltages ranging from -0.6 to 0.6 V for both P and AP configurations. The resistance-area (RA) product, calculated at 0.01 V for MTJs with elliptical cross-sections of different aspect ratios, is shown as a function of the corresponding areas in Fig. 1b. From this figure, we can see that as the area reduces, the RA product shows dependence on area as well as shape [13].
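As an illustration of the first step, here is a minimal coherent-transport sketch in the spirit of the 1d NEGF setup described above. It is our own toy version, not the paper's code: a single band with one effective mass for the whole chain (the paper uses different masses for MgO and the FM and includes spin), with the contacts entering as exact surface self-energies of semi-infinite leads.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
me = 9.1093837015e-31    # kg
q = 1.602176634e-19      # C

def transmission(E, U, m_eff, a):
    """Coherent transmission T(E) through a 1d effective-mass tight-binding
    chain with onsite potential profile U (J) and lattice spacing a (m).
    Identical semi-infinite leads are attached via surface self-energies."""
    t = hbar**2 / (2 * m_eff * a**2)          # hopping energy
    N = len(U)
    H = np.diag(U + 2 * t) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    ka = np.arccos(1 - E / (2 * t))           # lead dispersion E = 2t(1 - cos ka)
    sigma = -t * np.exp(1j * ka)              # exact lead surface self-energy
    S = np.zeros((N, N), complex)
    S[0, 0] = S[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(N) - H - S)  # retarded Green's function
    gamma = 2 * t * np.sin(ka)                # broadening of each lead
    return gamma**2 * abs(G[0, -1])**2        # T = Gamma_L * Gamma_R * |G_1N|^2

# A clean chain transmits perfectly; a 0.9 nm barrier 0.76 eV above E_F
# (the paper's barrier height) suppresses transmission by tunneling.
U = np.zeros(30)
t_clean = transmission(2.25 * q, U, 0.38 * me, 0.1e-9)   # ~1.0
U[12:21] = (2.25 + 0.76) * q
t_barrier = transmission(2.25 * q, U, 0.38 * me, 0.1e-9) # between 0 and 1
```

In the full calculation this scalar transmission is replaced by spin-resolved currents summed over the transverse modes of step two.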
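For the second step, the finite-difference eigenvalue computation on an arbitrary cross-section can be sketched as follows. This is our own minimal 5-point-Laplacian Dirichlet solver (function names and grid choices are ours); it can be validated against the analytic disc ground state \(E_{0}=\hbar^{2}j_{0,1}^{2}/(2m^{*}R^{2})\), with \(j_{0,1}\approx 2.4048\) the first zero of the Bessel function \(J_{0}\).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

hbar = 1.054571817e-34   # J*s
me = 9.1093837015e-31    # kg

def transverse_modes(mask, h, m_eff, k=6):
    """Lowest k eigenenergies (J) of a 2d infinite well whose interior is the
    True region of `mask`, via a 5-point finite-difference Laplacian with
    hard-wall (Dirichlet) boundaries; h is the grid spacing (m)."""
    idx = -np.ones(mask.shape, dtype=int)
    idx[mask] = np.arange(mask.sum())
    t = hbar**2 / (2 * m_eff * h**2)          # FD "hopping" energy
    H = lil_matrix((mask.sum(), mask.sum()))
    for (i, j), n in np.ndenumerate(idx):
        if n < 0:
            continue
        H[n, n] = 4 * t
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < mask.shape[0] and 0 <= jj < mask.shape[1] and idx[ii, jj] >= 0:
                H[n, idx[ii, jj]] = -t
        # neighbors outside the mask contribute nothing (infinite wall)
    return np.sort(eigsh(H.tocsr(), k=k, sigma=0, return_eigenvectors=False))

# Sanity check on a 6 nm disc against the analytic Bessel-zero result.
h, R, m_eff = 0.25e-9, 6e-9, 0.38 * me
g = 2 * int(R / h) + 9
yy, xx = np.mgrid[0:g, 0:g]
c = (g - 1) / 2
mask = (xx - c) ** 2 + (yy - c) ** 2 < (R / h) ** 2
E0 = transverse_modes(mask, h, m_eff, k=3)[0]
E0_disc = hbar**2 * 2.404826**2 / (2 * m_eff * R**2)
```

The same routine accepts any boolean mask, so it applies directly to the rough-edged cross-sections of Section III.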
The critical spin current can be calculated from the Gilbert damping (\(\alpha_{G}\)) and the energy barrier between the P and AP states (\(\Delta E\)) as \(I_{sc}=(4q\alpha_{G}/\hbar)\Delta E\). Further, the energy barrier is given by \(\Delta E=(1/2)\mu_{0}M_{s}At_{FM}H_{K}\), where \(M_{s}\), \(A\), \(t_{FM}\), \(H_{K}\) denote the saturation magnetization, cross-sectional area, free layer thickness and effective perpendicular anisotropy respectively. The critical voltage can be found by interpolating the spin current vs voltage data. If the radius of the MTJ is 10 nm, assuming \(\Delta E=40k_{B}T\) (T=300 K), \(\alpha_{G}\)=0.08, \(t_{FM}\)=2 nm and \(M_{s}=1.2\times 10^{6}A/m\), \(H_{K}\) comes out to be \(3.5\times 10^{5}A/m\). The critical voltage for P to AP and AP to P switching as a function of area, assuming a circular cross-section and the same \(H_{K}\), is shown by the magenta curve in Fig. 1c. Similar calculations for 8 nm and 6 nm radii are shown by the green and blue curves respectively. Fig. 1d shows the critical voltage (assuming the \(H_{K}\) of a 6 nm radius MTJ) for elliptical cross-sections of different aspect ratios as a function of the area. We can see that as the area reduces, the threshold voltage shows dependence on area as well as shape. For the incorporation of circular edge roughness into a circular cross-section of radius \(R_{0}\), we generate a random line segment of length \(2\pi R_{0}\) with autocorrelation function \(R(x)=\sigma^{2}e^{-(d/\xi)^{2\alpha}}\), where the chord length \(d\) is given by \(d=2R_{0}|\sin(x/(2R_{0}))|\). \(\xi\), \(\alpha\), and \(\sigma\) denote the correlation length, roughness parameter and standard deviation respectively [14, 15]. A realization of the random line segment is obtained as follows [16]: We numerically generate a white noise series with unit power spectral density (PSD) and take its Fourier transform. This is then multiplied by the PSD of the correlation function.
The inverse FT of the product gives us a random line segment. The random shape is constructed by taking \(R_{0}+x\) as the radii distribution for angles from 0 to \(2\pi\). The coefficient of variation (CV = standard deviation/mean) for the quantities to be analyzed is obtained from 250 samples. ## III Results and Discussions Variation in the area and shape of the MTJ cross-section due to the CER produces variation in the transverse energy mode profile. This in turn produces variation in the charge current and spin current flowing across the MTJ for a given applied voltage. The coefficient of variation of resistance and switching voltage as a function of \(\sigma\) and \(\xi\) for \(\alpha=0.5\) and an average radius of 6 nm, obtained from the detailed calculation, is shown as a 2D plot in Fig. 2. We can see that the variations become larger as \(\sigma\) and \(\xi\) increase. The CVs of different parameters at the centre of the 2D plot (\(\sigma=0.67nm\), \(\xi=15nm\)) are shown in Table I for different average radii of the cross-section under the "detailed calculation" column heading. We can see that the variations increase as the MTJ size is scaled down. To find out the influence of area variation, for each of the 250 samples, we mapped the random shape to a perfect circle of the same area and found the resistance and switching voltage (see Fig. 1c). The CV obtained from this procedure is shown in Table I under the "circle approximation" column heading, and it matches well with the values obtained from the detailed calculation. Fig. 1: (a) Schematic of MTJ without edge roughness. (b) RA product vs area for ellipses with different aspect ratios (AR=1 corresponds to a circle). (c) \(V_{c}\) vs area for a circle with an energy barrier of \(40k_{B}T\) for radii = 6 nm (blue), 8 nm (green) and 10 nm (magenta). (d) \(V_{c}\) vs area for ellipses with different aspect ratios for an energy barrier of \(40k_{B}T\) for radius = 6 nm. Fig. 3: (a) Schematic of MTJ with edge roughness.
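The \(H_{K}\) value quoted earlier can be reproduced by inverting the energy-barrier relation \(\Delta E=(1/2)\mu_{0}M_{s}At_{FM}H_{K}\). This is a small worked example with the paper's numbers; the function name is ours.

```python
import numpy as np

kB = 1.380649e-23        # J/K
q = 1.602176634e-19      # C
hbar = 1.054571817e-34   # J*s
mu0 = 4e-7 * np.pi       # vacuum permeability, H/m

def anisotropy_field(dE, Ms, radius, t_FM):
    """Invert dE = (1/2) mu0 Ms A t_FM H_K for H_K (circular cross-section)."""
    A = np.pi * radius**2
    return 2 * dE / (mu0 * Ms * A * t_FM)

# Paper's example: R = 10 nm, dE = 40 kBT at 300 K, Ms = 1.2e6 A/m, t_FM = 2 nm
dE = 40 * kB * 300
H_K = anisotropy_field(dE, 1.2e6, 10e-9, 2e-9)
# H_K comes out near 3.5e5 A/m, matching the value quoted in the text

I_sc = (4 * q * 0.08 / hbar) * dE   # critical spin current for alpha_G = 0.08
```

The critical voltage then follows, as described above, by interpolating the NEGF spin current vs voltage data at \(I_{sc}\).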
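The spectral-synthesis recipe described above can be sketched as follows. This is our own reading of the procedure: to give the output the target PSD, we scale the Fourier-transformed white noise by the square root of the PSD (the normalization and grid choices are ours).

```python
import numpy as np

def rough_edge(R0, sigma, xi, alpha, n_pts=512, rng=None):
    """Sample one rough-edge realization r(theta) = R0 + x(theta), where x has
    the stretched-exponential autocorrelation R(x) = sigma^2 exp(-(d/xi)^(2a))
    evaluated on chord lengths d = 2 R0 |sin(x / (2 R0))|."""
    rng = np.random.default_rng(rng)
    L = 2 * np.pi * R0                            # nominal perimeter
    s = np.linspace(0, L, n_pts, endpoint=False)  # arc-length grid
    d = 2 * R0 * np.abs(np.sin(s / (2 * R0)))     # chord lengths
    R = sigma**2 * np.exp(-(d / xi) ** (2 * alpha))
    psd = np.abs(np.fft.fft(R))                   # PSD = FT of the correlation
    noise = rng.standard_normal(n_pts)            # unit-PSD white noise
    x = np.fft.ifft(np.fft.fft(noise) * np.sqrt(psd)).real
    theta = 2 * np.pi * np.arange(n_pts) / n_pts
    return theta, R0 + x                          # radii over [0, 2*pi)

theta, r = rough_edge(6e-9, 0.67e-9, 15e-9, 0.5, rng=0)
```

Rasterizing \(r(\theta)\) onto a square grid yields the boolean mask consumed by the finite-difference eigenvalue step.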
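The same correlation function determines the CER-induced area variance, \(2\int_{0}^{L}(L-x)R(x)dx\), that enters the variance estimate of Eq. (1) below. A one-line numerical integral suffices (our own helper; the trend checks mirror the text's observation that the variance grows with \(\sigma\) and \(\xi\)).

```python
import numpy as np

def area_variance(R0, sigma, xi, alpha, n=4096):
    """Evaluate 2 * int_0^L (L - x) R(x) dx with L = 2*pi*R0 and R the
    stretched-exponential autocorrelation on chord lengths (trapezoid rule)."""
    L = 2 * np.pi * R0
    x = np.linspace(0.0, L, n)
    d = 2 * R0 * np.abs(np.sin(x / (2 * R0)))
    f = (L - x) * sigma**2 * np.exp(-(d / xi) ** (2 * alpha))
    return 2 * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# e.g. at the centre of the 2D plots (sigma = 0.67 nm, xi = 15 nm, alpha = 0.5):
v = area_variance(6e-9, 0.67e-9, 15e-9, 0.5)
# var(Q) is then approximated by (dQ/dA)^2 * v for a quantity Q of interest
```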
(b) Comparison of \(V_{c}\) (AP) for 20 trials obtained from the detailed calculation (blue), circle approximation (red), and ellipse approximation (green). Fig. 2: Coefficient of variation plots for: (a) Resistance (P) (b) Resistance (AP) (c) \(V_{c}\) (P) (d) \(V_{c}\) (AP). The circle approximation is expected to work well when the ratio \(\sigma/R_{0}\) is small. Further, for the approximation to work well, the minimum normalized correlation function \(e^{-(2R_{0}/\xi)^{2\alpha}}\) should be close to 1, i.e., \((2R_{0}/\xi)^{2\alpha}\) should be small. If area variation due to CER plays a dominant role, we can estimate the variance in a quantity Q as, \[var(Q)\approx(dQ/dA)^{2}[2\int_{0}^{L}(L-x)R(x)dx] \tag{1}\] where \(L=2\pi R_{0}\) is the average perimeter. The term in the square brackets in the above equation is the area variance. The CV of various parameters estimated with the above equation is given under the "estimated" column heading in Table I. We can see that the values estimated from the area variation are fairly close to the numerically calculated values. These equations imply that the area variance is proportional to \(\sigma^{2}\) and is an increasing function of \(\xi\), which is consistent with the trends seen in the 2d plots in Fig. 2 (the area variance saturates at large values of \(\xi/L\)). To see if the circle approximation can be further improved, we mapped a given random shape to an ellipse. This is done as follows: We first note down the area. We calculate numerically the ground state energy of the 2d infinite well with the boundary given by the random edge. We then compare this ground state energy with the tabulated ground state energies of ellipses with the same area and different aspect ratios. An aspect ratio is assigned to the random figure by interpolation. Using the tabulated data of \(V_{c}\) and resistance as a function of area for different aspect ratios (see Fig.
1), we can calculate the switching voltage and resistance of the random cross-section MTJ by interpolation. Fig. 3b shows \(V_{c}\) for the AP to P transition for 20 different realizations (out of 250). The blue bar corresponds to \(V_{c}\) calculated in the numerically "exact" way, i.e., obtaining all the transverse energy modes from the numerical solution of the 2d Schrödinger equation and summing up the transverse currents for each mode. The green bar corresponds to the calculation obtained by mapping the shape to an ellipse, which needs only the ground state energy calculation and is hence faster. However, for large values of \(\sigma/R_{0}\) and \(R_{0}/\xi\), the contribution from the non-elliptical shape variation should be taken into account. It should also be noted that the area variation arising from CER gives rise to variation in the thermal stability, as the energy barrier \(\Delta E\) depends on the area. ## IV Conclusion We have demonstrated that edge roughness gives rise to variance in the area and shape of a magnetic tunnel junction. This in turn produces variance in the resistance and switching voltage. The variance becomes larger as the MTJ size is scaled down. These results would be useful for designing reliable MRAM cells.
2310.11554
Properties and Deviations of Random Sums of Densely Dependent Random Variables
A classical problem of statistical inference is the valid specification of a model that can account for the statistical dependencies between observations when the true structure is dense, intractable, or unknown. To address this problem, a new variance identity is presented, which is closely related to the Moulton factor. This identity does not require the specification of an entire covariance structure and instead relies on the choice of two summary constants. Using this result, a weak law of large numbers is also established for additive statistics and common variance estimators under very general conditions of statistical dependence. Furthermore, this paper proves a sharper version of Hoeffding's inequality for symmetric and bounded random variables under these same conditions of statistical dependence. Put otherwise, it is shown that, under relatively mild conditions, finite sample inference is possible in common settings such as linear regression, and even when every outcome variable is statistically dependent with all others. All results are extended to estimating equations. Simulation experiments and an application to climate data are also provided.
Shane Sparkes, Lu Zhang
2023-10-17T19:53:31Z
http://arxiv.org/abs/2310.11554v1
# Properties and Deviations of Random Sums of Densely Dependent Random Variables ###### Abstract A classical problem of statistical inference is the valid specification of a model that can account for the statistical dependencies between observations when the true structure is dense, intractable, or unknown. To address this problem, a new variance identity is presented, which is closely related to the Moulton factor. This identity does not require the specification of an entire covariance structure and instead relies on the choice of two summary constants. Using this result, a weak law of large numbers is also established for additive statistics and common variance estimators under very general conditions of statistical dependence. Furthermore, this paper proves a sharper version of Hoeffding's inequality for symmetric and bounded random variables under these same conditions of statistical dependence. Put otherwise, it is shown that, under relatively mild conditions, finite sample inference is possible in common settings such as linear regression, and even when every outcome variable is statistically dependent with all others. All results are extended to estimating equations. Simulation experiments and an application to climate data are also provided. keywords: Dependent random variables, linear statistics, concentration inequalities, finite sample inference, estimating equations, variance identity, uniform convergence in probability Msc: [2020] Primary 62E99, Secondary 62F12 + Footnote †: journal: Journal of Multivariate Analysis ## 1 Introduction Popular methods for statistical inference model probabilistic dependencies as limited and schematic in nature. The latter notion is used to justify asymptotic normality, while the former reduces an unknowable picture to one that is tame and mathematically pliable. However, in the now everyday words of George Box, "... all models are wrong" [7]. 
In many research contexts, there is little reason to believe that the system of statistical dependencies governing the joint distribution of outcome variables behaves in accordance with a tractable sequence, or that it is sparse even conditionally. Sociological, climate, or clinical settings are but a few examples. Box of course went on to state that 'some are useful.' However, this final addendum, while by construction irrefutable, requires great caution when cited alongside models of statistical uncertainty. Invalid models for the expected values of outcome variables still possess salient interpretations as approximations. On the other hand, an invalid model for statistical dependence will furnish untrue statements about error, or statements in general that are exceptionally vulnerable to doubt. This undercuts the cogency of knowledge construction. This paper addresses this problem for additive statistics, which we now define. Let \(I=\{1,\ldots,n\}\) be an indexing set for a sample \(\zeta=\{Y_{i}\}_{i\in I}\) of random variables. Furthermore, let \(\{w_{i}\}_{i\in I}\) be any set of constants. We define \(S_{n}=\sum_{i=1}^{n}w_{i}Y_{i}\) as an additive statistic, although for simplicity, we will often set \(w_{i}=1\) without loss of generality (WLOG) since we can simply say \(Z_{i}=w_{i}Y_{i}\) and reason about \(\{Z_{i}\}_{i\in I}\). It is important to note that \(Y_{i}\) (\(Z_{i}\)) is also general and can be any measurable function of \(k\in\mathbb{N}\) random variables. Ultimately, we are interested in establishing some basic properties and behaviors of \(\mathbf{S_{n}}\), a vector of random sums, under very general but unfavorable conditions of dependence. We do so without any particular theory of how this looks, which makes these results widely applicable. Lots of work exists on the topic of dependence and additive statistics. The generalized least squares approach encapsulates common classes of estimators.
Ultimately, this method replaces \(\text{Var}(\mathbf{S_{n}})\) with a model \(\mathbf{\Sigma}\) in an attempt to induce the conditions of the Gauss-Markov theorem [1; 2]. Generalized linear models and the generalized estimating equation approach of Liang and Zeger [26] produce estimators related to this type, as do hierarchical linear models [13; 30]. The approach of Liang and Zeger, however, also makes use of cluster-robust standard errors in conjunction with specified covariance models [41; 42]. These strategies posit that a user-specified partition of the sample results in independent clusters, the variances of which can be identified. A thorough review of the theory behind cluster-robust variance estimation is available elsewhere [27]. Spatial and time series methods are also important in this universe. They adopt objects, such as variograms or auto-regressive weight matrices, to facilitate covariance estimation under the supposition of largely localized dependencies [3; 8; 21]. The details of these methods are beyond this paper's scope. In essence, they share in one or two pivotal facts, however: (1) they usually posit that the average number of dependencies in a sample is bounded by a constant and that the researcher has knowledge of a partition of \(\zeta\) that produces \(K(n)\rightarrow\infty\) independent clusters, or (2) they replace \(\text{Var}(\mathbf{S_{n}})\) with a blueprint that is salient, but simple and mathematically convenient. Overall, these strategies set a grand majority of the \(\binom{n}{2}\) covariance parameters to zero _a priori_ in protestation of real conditions. Inference also depends on mixing conditions or concepts such as \(m\)-dependence, which justify a central limit theorem for dependent random variables [5; 37]. Researchers have also utilized the concentration of measure phenomenon as a tool for inference [24]. Hoeffding's inequality is a classical result in this domain, as are others [4; 16].
Most formulations rely on the assumption of mutual independence [6; 19; 34]. Many, however, also allow for weak or local dependence conditions to exist [9; 14; 22]. For instance, Hoeffding's inequality applies to sums of negatively associated random variables and to additive statistics that can be decomposed into smaller independent sums [18; 36]. These results are invaluable for theoretically bounding the tail probabilities of random sums under a wider range of circumstances. Unfortunately, however, most of these inequalities still rely upon restricted dependency pictures that do not fit the concerns of this paper. The main contribution of this manuscript is to show that finite sample inference is possible for additive statistics of bounded random variables without a detailed dependency model, and even when _all_ measured outcome variables are statistically dependent and no central limit theorem applies. Essentially, this is done by showing that a sharper version of Hoeffding's inequality still applies to \(\mathbf{S_{n}}-\mathbf{ES_{n}}=\boldsymbol{\epsilon}\) when \(\boldsymbol{\epsilon}\) is densely dependent, insofar as each \(\epsilon_{i}\) behaves in accordance with some symmetric, unimodal, but not necessarily identical probability law. No other restrictions are necessarily placed on the marginal or joint probability functions. While still non-trivial, this contribution is useful since random errors of this type often surface in regression settings. Additionally, Hoeffding's inequality possesses a closed form that is accessible to working statisticians. The paper's secondary contribution is a novel variance identity that is useful for analyzing the properties of \(\mathbf{S_{n}}\) under very general and terrible conditions. Extending these results to estimating equations is this paper's tertiary contribution. 
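For reference, the classical independent-case Hoeffding bound that the paper sharpens reads \(P(|S_{n}-\mathrm{E}S_{n}|\geq t)\leq 2\exp(-2t^{2}/\sum_{i}(b_{i}-a_{i})^{2})\) for \(Y_{i}\in[a_{i},b_{i}]\). A quick Monte Carlo check of this classical form (our own toy setup, not the paper's sharper dependent-case result):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, t = 50, 100_000, 10.0

# Hoeffding: for independent Y_i in [a_i, b_i],
# P(|S_n - E S_n| >= t) <= 2 exp(-2 t^2 / sum_i (b_i - a_i)^2).
bound = 2 * np.exp(-2 * t**2 / (n * 2.0**2))          # here b_i - a_i = 2
S = rng.uniform(-1, 1, size=(reps, n)).sum(axis=1)    # E S_n = 0
empirical = np.mean(np.abs(S) >= t)                   # far below the bound
```

The paper's contribution is that a bound of this closed form survives dense dependence when the errors are symmetric and unimodal.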
Section 2 introduces some key definitions and the aforementioned variance identity, while Section 3 uses it to prove a weak law of large numbers (WLLN) for additive statistics under the circumstances identified, and to explore the behavior of cluster-robust variance estimators in these same settings. Section 4 returns to our main theme of statistical inference in the face of dense, unknown, and intractable dependency structures. It defines a new type of random variable and uses this definition to prove our main results. Following this, Section 5 makes use of the work of Jennrich [20], Yuan and Jennrich [40], and Hall [15] to extend the statements of the previous sections to estimating equations. The last part of this paper before the conclusion, Section 6, demonstrates the value of our approach with a set of simulation experiments that mimic some worst-case dependency scenarios with linear estimators. After this is accomplished, the association between global changes in temperature and carbon dioxide levels is estimated to demonstrate the utility of the approach. ## 2 A Novel Variance-Covariance Identity for Additive Statistics We now introduce some important definitions. Say \(\mathbf{Y}^{\top}=(Y_{1},\ldots,Y_{n})\) is a \(1\times n\) random vector s.t. \(\text{E}Y_{i}^{2}<\infty\) for \(\forall i\) and \(\mathbf{w}\in\mathbb{R}^{p\times n}\) is a matrix of constants. Furthermore, as is tradition, say \(\text{Cov}(Y_{i},Y_{j})=\sigma_{i,j}=\text{E}Y_{i}Y_{j}-\text{E}Y_{i}\text{E} Y_{j}\) and hence \(\text{Var}(Y_{i})=\sigma_{i,i}=\sigma_{i}^{2}\). Graphs are important for this exploration. A graph \(\text{G}=(\text{V},\text{L})\) is constituted by a set of nodes V and a set of lines, L, that connect them. It is undirected if \(\text{e}_{i,j}\in\text{L}\) implies that \(\text{e}_{j,i}\in\text{L}\); otherwise it is directed. Here, only undirected graphs will be of interest. Also, note that \(I=\{1,\ldots,n\}\) can now be seen as a node set.
The definition presented next is central and allows for the construction of the variance identity. **Definition 1** (Linear Dependency Graph).: Let \(\mathcal{L}=(I,\mathrm{L})\) be a graph with a node set \(I\) w.r.t. \(\{Y_{i}\}_{i\in I}\) and a set of edges \(L\) between them. Then \(\mathrm{e}_{i,j}\in\mathrm{L}\) if and only if \(\sigma_{i,j}\neq 0\). It is also relevant to know that the degree of a node is defined as the sum of its existent links. Denote this function as \(d(i)=\sum_{j\neq i}1_{\mathrm{e}_{i,j}\in L}\), where \(1_{\mathrm{e}_{i,j}\in L}\) is an indicator function, and similarly denote the mean degree as \(\mu_{n}=n^{-1}\sum_{i=1}^{n}d(i)\). In the context of this paper, \(\mu_{n}\) is equal to the mean number of random variables that a typical random variable is correlated with in the sample. Pertinently, each \(1_{\mathrm{e}_{i,j}}\) will be treated as a non-stochastic function, conditional on the realization of \(\{Y_{i}\}_{i\in I}\) as a sample of random variables, unless otherwise noted. A few more definitions are necessary. The \(\odot\) symbol will signify the Hadamard product, which is the component-by-component multiplication of two matrices. Letting \(\mathbf{w}_{s}\) denote the \(s\)th row of \(\mathbf{w}\), consider \(\bar{\sigma}_{r,t}=|L|^{-1}\sum_{i<j}^{|L|}w_{r,i}w_{t,j}\sigma_{i,j}\) w.r.t. the statistics \(\mathbf{w}_{r}\mathbf{Y}\) and \(\mathbf{w}_{t}\mathbf{Y}\). This value will be called an average non-zero covariance term, while \(\phi_{r,t}=\{n^{-1}\sum_{i=1}^{n}w_{r,i}w_{t,i}\sigma_{i}^{2}\}^{-1}\cdot\bar {\sigma}_{r,t}\) will be called an average correlation. Ultimately, three matrices will also be required for the identity: \(\mathbf{G}\), \(\mathbf{C}\), and \(\mathbf{V}\).
The first two matrices are defined from the previously specified quantities: \(\mathbf{C}^{p\times p}=(\bar{\sigma}_{i,j})\) and \(\mathbf{G}=\mathbf{1}+\mu_{n}\boldsymbol{\phi}\), where \(\mathbf{1}\) is a \(p\times p\) matrix of ones and \(\boldsymbol{\phi}^{p\times p}=(\phi_{i,j})\). Finally, \(\mathbf{V}\) is a diagonal matrix s.t. its diagonal is equal to the diagonal of \(\mathrm{Var}(\mathbf{Y})\). This last matrix is recognizable as \(\mathrm{Var}(\mathbf{Y})\) under the counterfactual assumption of mutual independence. **Proposition 1**.: _Let \(\mathbf{Y}=(Y_{1},\ldots,Y_{n})^{\top}\) be a \(n\times 1\) random vector s.t. \(EY_{i}^{2}<\infty\) for \(\forall i\) and \(\mathbf{w}\in\mathbb{R}^{p\times n}\) is a matrix of constants. Then \(\mathrm{Var}(\mathbf{w}\mathbf{Y})=\mathbf{w}\mathbf{V}\mathbf{w}^{\top} \odot\mathbf{G}=\mathbf{w}\mathbf{V}\mathbf{w}^{\top}+n\mu_{n}\mathbf{C}\)._ Proof.: Let \(s,t\) be arbitrary indexes from \(\{1,\ldots,p\}\). Then \(\mathbf{w}_{s}\mathbf{Y}=\sum_{i=1}^{n}w_{s,i}Y_{i}\) and \(\mathbf{w}_{t}\mathbf{Y}=\sum_{i=1}^{n}w_{t,i}Y_{i}\). It then follows that \(\mathrm{Cov}(\mathbf{w}_{s}\mathbf{Y},\mathbf{w}_{t}\mathbf{Y})=\sum_{i=1}^{n }w_{s,i}w_{t,i}\sigma_{i}^{2}+\sum_{i\neq j}w_{s,i}w_{t,j}\sigma_{i,j}\). From here, consider the linear dependency graph \(\mathcal{L}=(I,L)\) w.r.t. \(\{Y_{i}\}_{i\in I}\) as previously defined. Then: \[\sum_{i\neq j}w_{s,i}w_{t,j}\sigma_{i,j}=2\sum_{i<j}w_{s,i}w_{t,j}\sigma_{i,j} =2\cdot(\sum_{i<j}^{|L|}w_{s,i}w_{t,j}\sigma_{i,j}+0)=2|L|\cdot\bar{\sigma}_{s,t}=n\mu_{n}\cdot\bar{\sigma}_{s,t}\] The second equality follows from the definition of \(\mathcal{L}\), while the fourth follows from the Handshake lemma. Hence, \(\mathrm{Cov}(\mathbf{w}_{s}\mathbf{Y},\mathbf{w}_{t}\mathbf{Y})=(\mathbf{w} \mathbf{V}\mathbf{w}^{\top})_{s,t}+n\mu_{n}\mathbf{C}_{s,t}\).
It then follows that \(\mathrm{Var}(\mathbf{w}\mathbf{Y})=\mathbf{w}\mathbf{V}\mathbf{w}^{\top}+n\mu _{n}\mathbf{C}\) since \(s,t\) were arbitrary. Now, note that \(\mathrm{Cov}(\mathbf{w}_{s}\mathbf{Y},\mathbf{w}_{t}\mathbf{Y})=\sum_{i=1}^{n }w_{s,i}w_{t,i}\sigma_{i}^{2}+n\mu_{n}\cdot\bar{\sigma}_{s,t}=\{1+n\mu_{n} \cdot(\sum_{i=1}^{n}w_{s,i}w_{t,i}\sigma_{i}^{2})^{-1}\bar{\sigma}_{s,t}\} \sum_{i=1}^{n}w_{s,i}w_{t,i}\sigma_{i}^{2}=\{1+\mu_{n}\phi_{s,t}\}\sum_{i=1}^{n }w_{s,i}w_{t,i}\sigma_{i}^{2}\). Again, since \(s,t\) were arbitrary, it then follows that \(\mathrm{Var}(\mathbf{w}\mathbf{Y})=\mathbf{w}\mathbf{V}\mathbf{w}^{\top}\odot \mathbf{G}\). The utility of Proposition 1 is that it summarizes the impact of an unknowable and inestimable system of statistical dependencies on the variance of an additive statistic with two summary constants that are more defensibly specified or bounded. Although intuition might be lacking as to how some environment dynamically acts upon \(\{Y_{i}\}_{i\in I}\) to produce \(\mathrm{Var}(\mathbf{Y})\), this might not be the case for \(\mu_{n}\) or the diagonal elements of \(\boldsymbol{\phi}\). Prior beliefs or knowledge might exist pertaining to these values. Although this also suggests a possible Bayesian route where these values are conceptualized as draws from prior distributions to model uncertainty, this road is not pursued in this paper. Moreover, recall that a Moulton factor is an expression for the variance inflation caused by intra-cluster correlation [28]. They typically have the form \(\gamma_{s,s}=(\mathbf{w}\mathbf{V}\mathbf{w}^{\top})_{s,s}^{-1}\mathrm{Var}( \mathbf{w}\mathbf{Y})_{s,s}\). From Proposition 1, we can see that \(\gamma_{s,s}=\mathbf{G}_{s,s}\). However, Proposition 1 is more general since Moulton factors are often derived under specific modeling constraints. This correspondence can also be seen from rearranging an expression from Proposition 1.
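Proposition 1 is easy to verify numerically. The sketch below is our own construction: a covariance matrix with two off-diagonal entries set to zero (purely for the algebraic check), with both sides of the identity computed for \(\mathbf{w}=\mathbf{1}^{1\times n}\), so that \(\mathbf{C}\) reduces to the scalar \(\bar{\sigma}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# A covariance matrix with two structurally zero off-diagonal entries.
A = rng.standard_normal((5, 3))
Sigma = A @ A.T + 5 * np.eye(5)
Sigma[0, 3] = Sigma[3, 0] = 0.0      # sever two dependency-graph edges
Sigma[1, 4] = Sigma[4, 1] = 0.0

n = Sigma.shape[0]
iu = np.triu_indices(n, k=1)
edges = Sigma[iu] != 0               # linear dependency graph: e_ij iff sigma_ij != 0
L = edges.sum()                      # |L|
mu_n = 2 * L / n                     # mean degree (Handshake lemma)
sbar = Sigma[iu][edges].mean()       # average non-zero covariance

lhs = np.ones(n) @ Sigma @ np.ones(n)      # Var(S_n) for w = 1
rhs = np.trace(Sigma) + n * mu_n * sbar    # w V w^T + n mu_n C
```

The two quantities agree exactly (up to floating point), as the proof requires.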
For instance, say \(\mathbf{\Gamma}=(n^{-1}\cdot\mathbf{w}\mathbf{V}\mathbf{w}^{\top})^{-1}\mathbf{C}\). Then \(\mathrm{Var}(\mathbf{w}\mathbf{Y})=\mathbf{w}\mathbf{V}\mathbf{w}^{\top}\{ \mathbf{I}+\mu_{n}\mathbf{\Gamma}\}\), which is also recognizable as an even more general form of a Moulton style covariance factor [29]. We will prove three bounds on the variances of simpler additive statistics, i.e., for \(\mathbf{w}_{s}=\mathbf{1}^{1\times n}\). These bounds can be useful for proving asymptotic properties. The next propositions bound \(\phi\) under the conditions of fully connected graphs or sampling designs that are relatively uninformative with respect to the covariance structure. **Proposition 2**.: _Say \(S_{n}=\sum_{i=1}^{n}Y_{i}\) and again consider \(\mathcal{L}\). Then \(-\mu_{n}^{-1}\leq\phi\leq\mu_{n}^{-1}(n-1)\) when \(\mu_{n}>0\)._ Proof.: Observe \(\sum_{i\neq j}\sigma_{i,j}\). By Cauchy-Schwarz and the geometric-mean inequality, \(\sum_{i\neq j}\sigma_{i,j}\leq(n-1)\sum_{i=1}^{n}\sigma_{i}^{2}\). From here, we know that \(2\sum_{i<j}\sigma_{i,j}=n\mu_{n}\bar{\sigma}\). Then \(n\mu_{n}\bar{\sigma}\leq(n-1)\sum_{i=1}^{n}\sigma_{i}^{2}\), which implies \(\phi\leq\mu_{n}^{-1}(n-1)\). Furthermore, since variances are non-negative, \(1+\mu_{n}\phi\geq 0\implies\mu_{n}\phi\geq-1\). **Corollary 1**.: _For fully connected linear dependency graphs, \(-(n-1)^{-1}\leq\phi\leq 1\)._ Proof.: Immediate from Proposition 2 since \(\mu_{n}=n-1\). Provided that \(\mu_{n}(n)\rightarrow\infty\) as \(n\rightarrow\infty\), another informal corollary to Proposition 2 is that \(\phi\geq 0\). Philosophically, this is useful to know since it implies that any system that is heavy with statistical dependencies in this fashion almost necessitates a non-negative average correlation. From here, consider \(\mathbf{\Lambda}\), a vector of \(N\) indicator variables \(\delta_{i}\), such that \(\delta_{i}=1\) if and only if \(Y_{i}\) is sampled from a larger population \(P\).
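The bounds of Proposition 2 and Corollary 1 can be spot-checked on random covariance matrices (our own sketch; a Wishart-style matrix has all off-diagonals nonzero almost surely, so its linear dependency graph is fully connected and \(\mu_{n}=n-1\)).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
for _ in range(100):
    A = rng.standard_normal((n, n))
    Sigma = A @ A.T                      # a valid covariance matrix
    iu = np.triu_indices(n, k=1)
    mu_n = n - 1                         # fully connected graph (a.s.)
    sbar = Sigma[iu].mean()              # average (non-zero) covariance
    phi = sbar / (np.trace(Sigma) / n)   # average correlation, w = 1 case
    # Corollary 1: -(n-1)^{-1} <= phi <= 1 for fully connected graphs
    assert -1 / (n - 1) - 1e-9 <= phi <= 1 + 1e-9
```

The lower bound comes from \(\mathrm{Var}(S_{n})\geq 0\) and the upper bound from the proof above, so neither assertion can fail for a genuine covariance matrix.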
This next proposition also provides conditions s.t. \(\phi\leq 1\) without strictly requiring a fully connected graph. It is useful when \(P\) possesses a high number of dependent random variables and it is possible to execute a non-informative sampling mechanism s.t. \(\mu_{n}\) (now treated as random in a temporary abuse of notation solely for Proposition 3) has a probability distribution that places most probability on lower values. **Proposition 3**.: _Say \(S_{n}=\sum_{i=1}^{n}Y_{i}\). Suppose \(n^{-1}\sum_{i=1}^{n}\text{Var}(Y_{i}|\mathbf{\Lambda})=n^{-1}\sum_{i=1}^{n}\sigma_{i}^{2}\) and \(|L|^{-1}\sum_{i<j}^{|L|}\text{Cov}(Y_{i},Y_{j}|\mathbf{\Lambda})=|L|^{-1}\sum_{i<j}^{|L|}\sigma_{i,j}\), i.e., the sampling mechanism is uninformative on average with respect to the mean variance and individual covariances. Furthermore, denote \(n_{*}\) as the number of correlated random variables in the population \(P\) and say \(n_{*}>n\) and thus \(2^{-1}n(n-1)\) is the upper bound of the support of \(|L|\). Then \(\phi\leq 1\)._ Proof.: \(\text{Var}(S_{n}|\mathbf{\Lambda})=\sum_{i=1}^{n}\text{Var}(Y_{i}|\mathbf{\Lambda})+2\sum_{i<j}\text{Cov}(Y_{i},Y_{j}|\mathbf{\Lambda})=\sum_{i=1}^{n}\sigma_{i}^{2}+2\sum_{i<j}^{|L|}\sigma_{i,j}\), where only \(|L|\) is random. The derivation is similar to Proposition 2. We skip these steps and observe that \(n\mu_{n}|L|^{-1}\sum_{i<j}^{|L|}\sigma_{i,j}\leq(n-1)\sum_{i=1}^{n}\sigma_{i}^{2}\). Here, note that \(\mu_{n}\) is also random and that \(\mu_{n}=n-1\) when \(|L|=2^{-1}n(n-1)\). However, since \((n-1)\sum_{i=1}^{n}\sigma_{i}^{2}\) is a constant upper bound of the left-hand side and it is possible to sample \(n<n_{*}\) correlated random variables, it must be the case that \(n(n-1)|L|^{-1}\sum_{i<j}^{|L|}\sigma_{i,j}\leq(n-1)\sum_{i=1}^{n}\sigma_{i}^{2}\) and hence that \(\phi\leq 1\). This next proposition is less demanding in its premises and hence applicable in more contexts.
Furthermore, it will provide another route for establishing that \(\phi\leq 1\) when \(\sigma_{i}^{2}=\sigma^{2}\) for \(\forall i\), i.e., when the assumption of equal variances holds. Denote \(\eta=(n^{-1}\sum_{i}^{n}\sigma_{i}^{2})^{-1}\underset{i\in I}{\max}(\sigma_{i}^{2})\) as the ratio of maximum variance to mean variance to these ends. We will not prove it here, but it is obvious that \(\eta=O(1)\) when variances are finite. This fact also generalizes to the weighted case insofar as \(\mathbf{w}_{s}\) is not a trivial vector of zeroes. **Proposition 4**.: _Again observe \(S_{n}=\sum_{i=1}^{n}Y_{i}\). Then \(\text{Var}(S_{n})\leq\{1+\mu_{n}\eta\}\sum_{i=1}^{n}\sigma_{i}^{2}\). If \(\sigma_{i}^{2}=\sigma^{2}\) for \(\forall i\), then \(\phi\leq 1\) and \(\text{Var}(S_{n})\leq\{1+\mu_{n}\}\sum_{i=1}^{n}\sigma_{i}^{2}\)._ Proof.: Recall that \(\bar{\sigma}=|L|^{-1}\sum_{i<j}^{|L|}\sigma_{i,j}\). Hence, \(\bar{\sigma}\leq\underset{i<j}{\max}(\sigma_{i,j})=\sigma_{r,s}\), say. By Cauchy-Schwarz, \(\sigma_{r,s}\leq\sigma_{r}\sigma_{s}\leq\underset{i\in I}{\max}(\sigma_{i}^{2})\). The general result then follows since \(\phi\leq\eta\). Under the additional premise, \(\eta=1\), which implies the last stated bound. Pertinently, \(\text{Var}(S_{n})\leq\{1+\mu_{n}\}\sum_{i=1}^{n}\sigma_{i}^{2}\) is a tight inequality w.r.t. some dependency structures. When \(\zeta\) is a sample of independent \(Y_{i}\), \(\mu_{n}=0\) and \(\text{Var}(S_{n})=\sum_{i=1}^{n}\sigma_{i}^{2}\leq(1+0)\sum_{i=1}^{n}\sigma_{i}^{2}=(1+\mu_{n})\sum_{i=1}^{n}\sigma_{i}^{2}\). When \(\zeta\) is a collection of the same random variable \(n\) times, \(S_{n}=nY_{1}\) WLOG. Then \(\text{Var}(S_{n})=n^{2}\sigma_{1}^{2}\leq(1+n-1)n\sigma_{1}^{2}=(1+\mu_{n})\sum_{i=1}^{n}\sigma_{i}^{2}\) since \(\mu_{n}=n-1\). Here, it is also useful to comment that we can also define a dependency graph \(D=(I,L_{D})\) s.t. a link exists in this graph if and only if \(\text{Pr}(Y_{i},Y_{j})\neq\text{Pr}(Y_{i})\text{Pr}(Y_{j})\).
All the results of this section also hold when the corresponding values are defined w.r.t. this graph. We will call these values \(\mu_{D}\) and \(\phi_{D}\) respectively. When this is done, it is easy to show that \(\mu_{n}\phi_{n}=\mu_{D}\phi_{D}\). This can be established from the fact that \(\phi_{n}=|L|^{-1}|L_{D}|\cdot\phi_{D}\). ## 3 Asymptotic Properties of Additive Statistics We now show the utility of the previous set of results for the asymptotic analysis of additive statistics. Recall that one goal of this paper is to establish some basic but important statistical properties of additive estimators under 'apocalyptic' conditions, i.e., conditions such that independence is violated in such a way as to render the true dependency structure inconceivably checkered with non-nullified co-variation. To this end, we first establish some mild assumptions for the consistency of additive estimators and common variance estimators in this setting. Variance estimators are usually reserved for plug-in strategies with Wald statistics. Although we have no intention of justifying Wald-like hypothesis testing under our assumed conditions, establishing the consistency of variance estimators can still be useful for other purposes. Later, we see that they are still useful for investigating features of the unknown correlation structure, for example. A1. There exists a \(C_{*}\in\mathbb{R}^{+}\) s.t. \(|Y_{i}|\leq C_{*}\) for \(\forall i\in I\) A2. Denote a partition of \(I\) s.t. \(J_{k}\subseteq I\) for \(k\in\{1,2,\ldots K\}=\mathcal{K}\) and \(\bigcup_{k=1}^{K}J_{k}=I\), with \(J_{r}\cap J_{t}=\emptyset\) if \(r\neq t\). In accordance with this, say \(\zeta=\{Y_{i}\}_{i\in I}\), \(\zeta_{k}=\{Y_{j}\}_{j\in J_{k}}\), and hence \(\zeta=\bigcup_{k=1}^{K}\zeta_{k}\). Denote \(n_{k}=|\zeta_{k}|\), which means \(\sum_{k=1}^{K}n_{k}=n\). Then \(n_{r}^{-1}n_{t}=O(1)\) for \(\forall r,t\in\mathcal{K}\) A3. Let \(\mathcal{L}\) be an arbitrary linear dependency graph.
Then \(\mu_{n}=o(n)\). A1 establishes that we are working within a universe of bounded random variables. Since this condition is not strictly necessary for mean and variance estimation, we can instead adopt the common assumption that \(\mathds{E}Y_{i}^{2}<\infty\) (call it A1'), or the assumption that \(\mathds{E}Y_{i}^{4}<\infty\) for all \(i\) and that all referenced random variables are uniformly integrable. A2 simply stipulates that all cluster sizes have the same asymptotic order. This assumption is met in common research circumstances. We will not always need it. Note that A3 is very mild. For instance, it allows \(\mu_{n}(n)\to\infty\) as \(n\to\infty\). It only keeps \(\mu_{n}\) from _linearly_ scaling with sample size. Put otherwise, A3 is mild because it allows the typical random variable of a given sample to be correlated, on average, with a diverging number of others also collected--and in any imaginative way--as \(n\) becomes arbitrarily large. There is no constraint on how this occurs. **Proposition 5**.: _Suppose A1' and A3 hold for \(\mathbf{wY}\), where \(\mathbf{w}=(w_{i,j})\) with \(w_{i,j}=O(n^{-1})\). Then \(\mathbf{wY}\overset{p}{\to}E(\mathbf{wY})\) as \(n\to\infty\), where \(\overset{p}{\to}\) denotes convergence in probability._ Proof.: Let \(s\) be arbitrary and observe \(\mathbf{w}_{s}\mathbf{Y}=\sum_{i=1}^{n}w_{s,i}Y_{i}\). From Proposition 1, we know that \(\mathrm{Var}(\mathbf{w}_{s}\mathbf{Y})=\sum_{i=1}^{n}w_{s,i}^{2}\sigma_{i}^{2}+n\mu_{n}\bar{\sigma}_{s,s}\), where \(\bar{\sigma}_{s,s}=|L|^{-1}\sum_{i<j}w_{s,i}w_{s,j}\sigma_{i,j}\). Since \(\bar{\sigma}_{s,s}\leq[\underset{i\in I}{\max}(|w_{s,i}|)]^{2}\underset{i\in I}{\max}(\sigma_{i}^{2})\) and \(\underset{i\in I}{\max}(\sigma_{i}^{2})\leq C\in\mathbb{R}^{+}\) for \(\forall n\), it follows that \(n\mu_{n}\bar{\sigma}_{s,s}\leq n\mu_{n}[\underset{i\in I}{\max}(|w_{s,i}|)]^{2}\underset{i\in I}{\max}(\sigma_{i}^{2})\leq n\mu_{n}[\underset{i\in I}{\max}(|w_{s,i}|)]^{2}C\).
This implies that: \[\underset{n\to\infty}{\lim}n\mu_{n}\bar{\sigma}_{s,s}\leq\underset{n\to \infty}{\lim}n\mu_{n}[\underset{i\in I}{\max}(|w_{s,i}|)]^{2}C=0\] The last step follows since \(C\) is finite, while \(\mu_{n}=o(n)\) and \(\underset{i\in I}{\max}(|w_{s,i}|)^{2}=O(n^{-2})\). Since \(\sum_{i=1}^{n}w_{s,i}^{2}\sigma_{i}^{2}\to 0\) as \(n\to\infty\) as well, this implies that \(\mathrm{Var}(\mathbf{w}_{s}\mathbf{Y})\to 0\). A use of Chebyshev's inequality then implies that \(\mathbf{w}_{s}\mathbf{Y}\overset{p}{\to}\mathds{E}(\mathbf{w}_{s}\mathbf{Y})\). Because \(s\) was arbitrary, the conclusion is reached. Note that a quicker alternative proof could have used Proposition 4 since \(\eta=O(1)\). Therefore, an adequate sub-linear behavior w.r.t. \(\mu_{n}\) is sufficient for quadratic mean convergence. For instance, if \(\mu_{n}=n^{1/c},c>1\), this is still sufficient to establish a WLLN, although the rate of convergence will be sub-optimal, especially for small \(c\). It is also therefore apparent that weak dependence--at least as traditionally conceived--is not a necessary condition for the consistency of a large class of commonly employed statistics, including average loss functions for learning algorithms. Next, define \(\hat{e}_{i}=Y_{i}-\hat{\mathds{E}}Y_{i}\). Analyzing variance estimators that make use of squared residuals is easier under the simplifying assumption that the linear dependency graph of \(\{\hat{e}_{i}^{2}\}_{i\in I}\) is isomorphic to the linear dependency graph for \(\{\hat{e}_{i}\}_{i\in I}\). This is true for special cases, such as when the \(\hat{e}_{i}\) are normally distributed. Even if this does not hold exactly, it is arguably mild to assert that it is at least true that the mean degrees of these graphs have the same asymptotic order and thus can be interchangeably described by one \(\mu_{n}\). Moreover, we also use \(\mu_{n}\) for the limiting average degree, but context will make this clear. 
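Returning to Proposition 5, its weak law can be illustrated by simulation. The clustered data-generating process below is an assumption made purely for illustration: shared shocks within clusters of size roughly \(\sqrt{n}\) give \(\mu_{n}=O(\sqrt{n})=o(n)\), so A3 holds even though the number of correlated partners diverges.

```python
import math
import random

random.seed(1)

def sample_mean(n, m):
    # Shared-shock clusters of size m: Y_i = Z_cluster + eps_i, so each Y_i is
    # correlated with the other m - 1 members of its cluster (mu_n ~ m - 1).
    total, i = 0.0, 0
    while i < n:
        z = random.gauss(0, 1)
        for _ in range(min(m, n - i)):
            total += z + random.gauss(0, 1)
            i += 1
    return total / n

def mc_var_of_mean(n, reps=200):
    m = max(1, math.isqrt(n))   # cluster size ~ sqrt(n), so mu_n = o(n), i.e., A3
    means = [sample_mean(n, m) for _ in range(reps)]
    mbar = sum(means) / reps
    return sum((x - mbar) ** 2 for x in means) / reps

# The variance of the sample mean behaves like (m + 1) / n = O(n^{-1/2}) in this
# design, so the estimator is consistent despite a diverging mean degree.
v_small_n = mc_var_of_mean(100)     # analytically about 0.11
v_large_n = mc_var_of_mean(4900)    # analytically about 0.0145
assert v_large_n < v_small_n
```

As the text notes, the convergence rate is sub-optimal relative to the independent case, but consistency survives.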
For instance, for finite sample sizes, the linear dependency graph of \(\{\hat{e}_{i}^{2}\}_{i\in I}\) is conceivably fully connected. However, provided that \(\hat{\mathds{E}}Y_{i}\overset{p}{\to}\mathds{E}Y_{i}\) for \(\forall i\) as \(n\to\infty\) and thus \(\hat{e}_{i}\overset{p}{\to}\epsilon_{i}\) for \(\forall i\) as \(n\) becomes arbitrarily large, the limiting linear dependency graph of \(\{\hat{e}_{i}^{2}\}_{i\in I}\) is equal to the graph for \(\{\epsilon_{i}^{2}\}_{i\in I}\). Hence, when discussing asymptotic orders, it is apparent that \(\mu_{n}\) refers to the mean degree of the limiting object. Under the aforementioned simplifying assumption, it is also apropos to note that the limiting mean degree of the graph for \(\{Y_{i}\}_{i\in I}\) can also be represented by the same \(\mu_{n}\) as the two previously mentioned. From here, we will always implicitly assume that an arbitrary weight \(w\) is \(O(n^{-1})\). **Proposition 6**.: _Assume A1 and A3. Let \(\mathbf{B}=\mathbf{wY}=\boldsymbol{\beta}+\mathbf{w}\boldsymbol{\epsilon}\) s.t. \(E\boldsymbol{\epsilon}=\boldsymbol{0}\) and again let \(\mathbf{V}=\text{diag}\{\text{Var}(\mathbf{Y})\}\), where \(\text{diag}(\mathbf{A})\) constructs a diagonal matrix from the diagonal of \(\mathbf{A}\). Define \(\hat{e}_{i}=Y_{i}-\hat{\mathds{E}}Y_{i}\). Then for an arbitrary pair \((s,t)\), it is true that \((\mathbf{w}\hat{\mathbf{V}}\mathbf{w}^{\top})_{s,t}=\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}\overset{p}{\rightarrow}(\mathbf{w}\mathbf{V}\mathbf{w}^{\top})_{s,t}\) as \(n\rightarrow\infty\)._ Proof.: Let \((s,t)\) be arbitrary under our premises. Then by Proposition 1, \(\text{Var}[\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}]=\{1+\mu_{n}\phi_{s,t,n}\}\sum_{i=1}^{n}w_{s,i}^{2}w_{t,i}^{2}\text{Var}(\hat{e}_{i}^{2})\). Say \(\phi_{s,t,n}\leq C_{\phi}\) and \(\text{Var}(\hat{e}_{i}^{2})\leq M\) for \(\forall i\) since we know that they are \(O(1)\).
Then \(\text{Var}[\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}]\leq\{1+(n-1)C_{\phi}\}\cdot M\cdot\sum_{i=1}^{n}w_{s,i}^{2}w_{t,i}^{2}\). Note that \(\sum_{i=1}^{n}w_{s,i}^{2}w_{t,i}^{2}=O(n^{-3})\) while \((n-1)C_{\phi}\cdot M\) is obviously only \(O(n)\) since \(M\) is just some constant. Thus, it follows that \(\text{Var}[\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}]\to 0\) as \(n\rightarrow\infty\). Although this is sufficient, we observe that the variance converges to zero at the asymptotically equivalent rate of \(\max(n^{-3}\mu_{n},n^{-3})\), which can be considerably faster than what was shown since we automatically set \(\mu_{n}\) to its upper bound. Under the same premises, \(\hat{e}_{i}\overset{p}{\rightarrow}Y_{i}-\text{E}Y_{i}\) as \(n\rightarrow\infty\). Hence, \(\text{E}\hat{e}_{i}^{2}\rightarrow\mathbf{V}_{i,i}\) as well as \(n\) becomes arbitrarily large by the Portmanteau theorem since \(\hat{e}_{i}\) is bounded with probability one for \(\forall i\). This is sufficient for our conclusion. An important note to make in relation to Proposition 6 is that \(\text{Var}[\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}]\) is \(O(n^{-2})\) at worst under our terrible, but manageable conditions. This is good to know since it must converge at a much faster rate than \(\mathbf{B}\) if it is to be used as a plug-in estimator. To be exact, we require \(n^{2}\text{Var}[\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}]\to 0\) as a sufficient condition for Wald plug-in estimation. Here is an informal proof. Let \(\hat{V}=\sum_{i=1}^{n}w_{s,i}w_{t,i}\hat{e}_{i}^{2}\) and observe that \(W_{n}=\{\gamma_{s}(n)\hat{V}\}^{-1/2}(\mathbf{B}_{s}-\beta_{s})\) is a Wald-like statistic for some function \(\gamma_{s}(n)>0\) that is intended to adjust for missed variability. Typical practices set \(\gamma_{s}(n)=1\) _a priori_ in accordance with the assumption that the employed covariance model is well-specified.
For simplicity, we set \(\gamma_{s}(n)\) to \(\mu_{n}\) since we care only about asymptotic orders in this exploration and temporarily assume that \(\mu_{n}\) is non-zero and does not tend to zero. We know that \(\text{Var}(\mathbf{B}_{s})=O(n^{-1}\mu_{n})\). Thus, \(\{n^{-1}\mu_{n}\}^{-1/2}\) stabilizes the variance of \(\mathbf{B}_{s}\) in the sense that \(n\mu_{n}^{-1}\text{Var}(\mathbf{B}_{s})\) converges to a non-zero constant. Observe that \(W_{n}=\{n\hat{V}\}^{-1/2}[n^{-1}\mu_{n}]^{-1/2}(\mathbf{B}_{s}-\beta_{s})\). Since \(n^{2}\text{Var}(\hat{V})=O(n^{-1}\mu_{n})\), plug-in Wald statistics will always behave as intended under A3, at least insofar as asymptotic normality is not being considered. ### Some Considerations for Clustered Statistics Our next objective is to explore the properties of cluster-robust variance estimation under thick dependency conditions. As setup, enact a partition of \(\zeta\) into \(K\) groups, as in A2, s.t. \(\mathbf{B}=\mathbf{wY}=\sum_{k=1}^{K}\mathbf{w}_{k}\mathbf{Y}_{k}\). The basic cluster-robust estimator has the following form: \(\hat{\mathbf{C}}=\sum_{k=1}^{K}\mathbf{w}_{k}\hat{\mathbf{e}}_{k}\hat{\mathbf{e}}_{k}^{\top}\mathbf{w}_{k}^{\top}\), where each \(\hat{\mathbf{e}}_{k}\) is an \(n_{k}\times 1\) vector of residuals as previously defined. Classically, this approach has required that \(K\rightarrow\infty\) as \(n\rightarrow\infty\) and, in addition, for \(\max(n_{k})=n_{M}\) to be \(O(1)\). If the outcome variables of different groups are independent--or at least uncorrelated--then \(\hat{\mathbf{C}}\) is consistent for \(\mathrm{Var}(\mathbf{B})\) under these nice conditions [27]. Although this approach is incredibly useful, its problem is simple and undeniable. Outside of contrived examples, the specification of a partition that produces \(K\) independent or even uncorrelated clusters is an arduous task. Recall that \(\mathcal{L}_{n}\) is unknown and inestimable.
Hence, provided a dynamic and inter-dependent world, the chances that a user specifies a valid partition with partial and imperfect knowledge about \(\mathcal{L}_{n}\) are safely assumed to be small. This point is doubly poignant when \(n_{M}\) is also small. This is problematic because an invalid specification of the partition structure calls into question the consistency and utility of cluster-robust variance estimation. Here, we prove that--even if a partition is invalidly specified--cluster-robust variance estimation can still be consistent for its identified portion of the variance under several common scenarios. Overall, three cases are considered: (1) the case s.t. \(n_{M}\leq Q\in\mathbb{N}\), (2) the case s.t. \(\min_{k\in\mathcal{K}}(n_{k})=n_{m}\rightarrow\infty\) and \(\mathcal{L}_{k}\) is fully connected for \(\forall k\in\mathcal{K}\), and (3) the case s.t. \(n_{m}\rightarrow\infty,\mu_{n_{k}}=o(n_{k})\) for an arbitrary \(k\in\mathcal{K}\), and \(K=O(1)\). Although establishing the consistency of a particular \(\hat{\mathbf{C}}_{s}\) does not help us identify and estimate \(\text{Var}(\mathbf{B}_{s})\) in total, we show that the consistent estimation of portions of \(\text{Var}(\mathbf{B}_{s})\) can provide information on the magnitude of missed variability, which is why it is still important. Ultimately, this information can be used to inform a researcher's choice of a corrective factor in the style of Proposition 1. Gaining insight into the magnitude of \(\mu_{n}\phi_{n}\) is also important for additional reasons that become clear in Section 6. The next proposition provides a version of Proposition 1 for the sum of random vectors. It is helpful because it gives an expression for the bias induced by an invalid partition and helps clarify the problem. It is not strictly necessary for our analysis, but we show it for the sake of completeness.
Say \(\mathbf{T}_{k}=\mathbf{w}_{k}\mathbf{Y}_{k}=\mathbf{\mu}_{k}+\mathbf{w}_{k}\mathbf{\epsilon}_{k}\) and define a directed linear dependency graph \(\mathcal{L}_{\mathbf{T}}=(\mathcal{K},E_{\mathcal{L}_{\mathbf{T}}})\) s.t. a link exists from node \(r\) to \(t\) if and only if \(\mathrm{E}[(\mathbf{T}_{r}-\mathbf{\mu}_{r})(\mathbf{T}_{t}-\mathbf{\mu}_{t})^{\top}]\neq\mathbf{0}\). Define \(\bar{\sigma}_{\mathbf{T}}=|E_{\mathcal{L}_{\mathbf{T}}}|^{-1}\sum_{r\neq t}\mathrm{E}[(\mathbf{T}_{r}-\mathbf{\mu}_{r})(\mathbf{T}_{t}-\mathbf{\mu}_{t})^{\top}]\). Further, define \(\mathbf{\phi}_{\mathbf{T}}=\{\sum_{k=1}^{K}\mathrm{Var}(\mathbf{T}_{k})\}^{-1}K\bar{\sigma}_{\mathbf{T}}\). Finally, let \(\mu_{\mathbf{T}}\) denote the mean degree of \(\mathcal{L}_{\mathbf{T}}\). **Proposition 7**.: _Observe \(\mathbf{T}_{k}=\mathbf{w}_{k}\mathbf{Y}_{k}=\mathbf{\mu}_{k}+\mathbf{w}_{k}\mathbf{\epsilon}_{k}\) for \(\mathbf{B}=\sum_{k=1}^{K}\mathbf{T}_{k}\) and define a directed linear dependency graph \(\mathcal{L}_{\mathbf{T}}=(\mathcal{K},E_{\mathcal{L}_{\mathbf{T}}})\) as previously constructed. Then \(\mathrm{Var}(\mathbf{B})=\sum_{k=1}^{K}\mathrm{Var}(\mathbf{T}_{k})\{\mathbf{1}+\mu_{\mathbf{T}}\mathbf{\phi}_{\mathbf{T}}\}=\sum_{k=1}^{K}\mathrm{Var}(\mathbf{T}_{k})+K\mu_{\mathbf{T}}\bar{\sigma}_{\mathbf{T}}\)._ Proof.: Observe that \(\mathrm{Var}(\mathbf{B})=\mathrm{E}[\sum_{k=1}^{K}(\mathbf{T}_{k}-\mathbf{\mu}_{k})][\sum_{k=1}^{K}(\mathbf{T}_{k}-\mathbf{\mu}_{k})]^{\top}=\sum_{k=1}^{K}\mathrm{Var}(\mathbf{T}_{k})+|E_{\mathcal{L}_{\mathbf{T}}}|\bar{\sigma}_{\mathbf{T}}\). By the degree-sum formula, \(|E_{\mathcal{L}_{\mathbf{T}}}|=K\cdot\mu_{\mathbf{T}}\). Substitution and algebraic manipulation of \(\sum_{k=1}^{K}\mathrm{Var}(\mathbf{T}_{k})\) in a manner analogous to Proposition 1 yields the identity.
Therefore, it is then implied--provided the consistency of \(\mathbf{\hat{C}}\) and a multivariate central limit theorem holds--that \(\mathbf{\hat{C}}^{-1/2}(\mathbf{B}-\mathbf{\beta})\overset{d}{\to}\mathbf{N}(\mathbf{0},\mathbf{I}+\mu_{\mathbf{T}}\mathbf{\phi}_{\mathbf{T}})\), where \(\overset{d}{\to}\) denotes convergence in distribution. Although this identity provides a medium for representing bias, \(\mathbf{\phi}_{\mathbf{T}}\) is not an intuitive object in comparison to the other forms explored. Since most inferential settings truly require valid statements for the variances only, we retreat to this setting. For instance, \(\mathbf{B}_{s}=\sum_{k=1}^{K}\mathbf{w}_{s,k}\mathbf{Y}_{k}=\sum_{k=1}^{K}\sum_{j=1}^{n_{k}}w_{s,k,j}Y_{k,j}=\sum_{k=1}^{K}T_{s,k}\). We can thus resort to an iterated application of Proposition 1 and state \(\mathrm{Var}(\mathbf{B}_{s})=\{1+\mu_{K}\phi_{K}\}\sum_{k=1}^{K}\{1+\mu_{n_{k}}\phi_{s,n_{k}}\}\sum_{j=1}^{n_{k}}w_{s,k,j}^{2}\sigma_{k,j}^{2}\), where \((\mu_{K},\phi_{K})\) correspond to a linear dependency graph connected to \(\{T_{s,k}\}_{k\in\mathcal{K}}\) and the \((\mu_{n_{k}},\phi_{s,n_{k}})\) correspond to the cluster-specific dependency graphs for the outcome variables in each cluster \(\zeta_{k}\). However, more simply, we can settle for \(\mathrm{Var}(\mathbf{B}_{s})=\{1+\mu_{K}\phi_{K}\}\sum_{k=1}^{K}\mathrm{Var}(T_{s,k})\) to represent the bias of cluster-robust methods when consistent estimation of each \(\mathrm{Var}(T_{s,k})\) is possible. Some additional preliminaries help to frame this exploration. Pertinently, we provisionally assume that A2 applies to an arbitrary \(\mu_{n_{r}}\) and \(\mu_{n_{t}}\) for \(r,t\in\mathcal{K}\). This is not a critical assumption. However, it greatly simplifies notation. If one does not wish to apply it, the forthcoming statements hold by replacing \(\mu_{n_{k}}\) with \(\max_{k\in\mathcal{K}}(\mu_{n_{k}})\).
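Before proceeding, the scalar analogue of Proposition 7's identity can be sanity-checked numerically; the covariance matrix and two-cluster partition below are illustrative assumptions, not from the paper.

```python
# Scalar sketch of Proposition 7: partition n = 6 variables into K = 2 clusters
# and check Var(B) = sum_k Var(T_k) + K * mu_T * sigma_bar_T.
V = [[1.0, 0.3, 0.1, 0.2, 0.0, 0.0],
     [0.3, 1.0, 0.4, 0.0, 0.1, 0.0],
     [0.1, 0.4, 1.0, 0.0, 0.0, 0.2],
     [0.2, 0.0, 0.0, 1.0, 0.5, 0.1],
     [0.0, 0.1, 0.0, 0.5, 1.0, 0.3],
     [0.0, 0.0, 0.2, 0.1, 0.3, 1.0]]
n = 6
w = 1.0 / n
clusters = [[0, 1, 2], [3, 4, 5]]
K = len(clusters)

def cov_T(a, b):
    # Cov(T_a, T_b) for T_k = w * (sum of cluster k's variables).
    return sum(w * w * V[i][j] for i in clusters[a] for j in clusters[b])

var_B = sum(w * w * V[i][j] for i in range(n) for j in range(n))
sum_var_T = sum(cov_T(k, k) for k in range(K))
# Directed links between correlated clusters: both directions exist here, so
# |E| = 2, mu_T = |E| / K = 1, and sigma_bar_T is the mean link covariance.
links = [(a, b) for a in range(K) for b in range(K) if a != b and cov_T(a, b) != 0.0]
mu_T = len(links) / K
sigma_bar_T = sum(cov_T(a, b) for a, b in links) / len(links)
assert abs(var_B - (sum_var_T + K * mu_T * sigma_bar_T)) < 1e-12
```

The identity holds exactly for any partition; what changes with an invalid partition is how much of \(\mathrm{Var}(\mathbf{B})\) the within-cluster terms capture.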
Now, recall that \(\mathrm{Var}(\mathbf{B}_{s})=O\{n^{-1}\mathrm{max}(\mu_{n},1)\}\). From the above cluster expression and our working assumption, we also know that \(\mathrm{Var}(\mathbf{B}_{s})=O\{n^{-2}\mathrm{max}(\mu_{K},1)\cdot Kn_{k}\cdot\mathrm{max}(\mu_{n_{k}},1)\}\), which simplifies to \(O(n^{-1}\mathrm{max}(\mu_{K},1)\cdot\mathrm{max}(\mu_{n_{k}},1))\) since \(Kn_{k}=O(n)\) for all \(k\) under A2. This implies that \(\mathrm{max}(\mu_{n},1)\) and \(\mathrm{max}(\mu_{K},1)\cdot\mathrm{max}(\mu_{n_{k}},1)\) are equivalent in order. To progress, we note that \(\hat{\mathrm{Var}}(T_{s,k})=\mathbf{\hat{C}}_{s,k}=\sum_{j=1}^{n_{k}}w_{s,k,j}^{2}\hat{e}_{k,j}^{2}+\sum_{r\neq t}w_{s,k,r}w_{s,k,t}\hat{e}_{k,r}\hat{e}_{k,t}\) for an arbitrary cluster \(\zeta_{k}\). The usual machinery requires that \(\mathrm{Var}(\mathbf{\hat{C}}_{s,k})\to 0\) as \(n\to\infty\). Since \(\mathrm{Var}(\mathbf{\hat{C}}_{s,k})\leq 2\{\mathrm{Var}(\sum_{j=1}^{n_{k}}w_{s,k,j}^{2}\hat{e}_{k,j}^{2})+\mathrm{Var}(\sum_{r\neq t}w_{s,k,r}w_{s,k,t}\hat{e}_{k,r}\hat{e}_{k,t})\}\), it is sufficient to reason about the individual variances on the right-hand side. From a previous proposition, we know the form of \(\mathrm{Var}(\sum_{j=1}^{n_{k}}w_{s,k,j}^{2}\hat{e}_{k,j}^{2})\). Otherwise, \(\mathrm{Var}(\sum_{r\neq t}w_{s,k,r}w_{s,k,t}\hat{e}_{k,r}\hat{e}_{k,t})=\{1+\mu_{n_{k}(n_{k}-1)}\phi_{s,n_{k}(n_{k}-1)}\}\sum_{r\neq t}w_{s,k,r}^{2}w_{s,k,t}^{2}\mathrm{Var}(\hat{e}_{k,r}\hat{e}_{k,t})\). For finite samples, we can conservatively expect \(\mu_{n_{k}(n_{k}-1)}=n_{k}(n_{k}-1)-1\) in most circumstances. Moreover, for simplicity, we can assume that \(\mathrm{Var}(\hat{e}_{k,r}\hat{e}_{k,t})\leq M\) and \(\phi_{s,n_{k}(n_{k}-1)}\leq C_{\phi}\) WLOG in addition. Now, let \(\mathbf{\hat{C}}_{s}=\sum_{k=1}^{K}\mathbf{\hat{C}}_{s,k}\).
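A minimal sketch of computing \(\hat{\mathbf{C}}_{s}\) in the simplest case, a sample mean with equal weights, is given below. The shared-shock data-generating design is an assumption made for illustration; with positive within-cluster correlation, the cluster-robust estimate exceeds the naive independence-based one.

```python
import random

random.seed(2)

# Assumed design: K independent clusters of size m with a shared shock, so
# within-cluster covariance is positive while distinct clusters are uncorrelated.
K, m = 100, 4
clusters = []
for _ in range(K):
    z = random.gauss(0, 1)
    clusters.append([2 * z + random.gauss(0, 1) for _ in range(m)])

n = K * m
w = 1.0 / n                                  # equal weights: B is the sample mean
B = w * sum(sum(c) for c in clusters)
resid = [[y - B for y in c] for c in clusters]

# Basic cluster-robust estimator in scalar form: C_hat_s = sum_k (w' e_k)^2.
C_hat = sum((w * sum(e)) ** 2 for e in resid)
# Naive independence-based estimator: sum_i w^2 e_i^2.
naive = sum(w ** 2 * e ** 2 for c in resid for e in c)
# Positive within-cluster correlation inflates C_hat relative to naive.
assert C_hat > naive
```

Here the partition is correctly specified; the cases below concern what survives when it is not.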
We also require \(\mathrm{Var}(\mathbf{\hat{C}}_{s})\to 0\), at least as a typical sufficient condition to establish that \(\mathbf{\hat{C}}_{s}\) is a consistent estimator of the variance model that results from a theorized partition. However, now, we have: \(\mathrm{Var}(\mathbf{\hat{C}}_{s})=\{1+\mu_{K,s}\phi_{K,s}\}\sum_{k=1}^{K}\mathrm{Var}(\mathbf{\hat{C}}_{s,k})\). We must show that this expression tends to zero as \(n\to\infty\) to establish consistency. A small handful of cases will be considered. Essentially, we will need to reason about the asymptotic orders of \(\mu_{K},\mu_{n_{k}},K\), and \(n_{k}\). Note that we do not need to reason about \(\mu_{n_{k}(n_{k}-1)}\) since--outside of exceptional situations--it is safe to assume that it has an order of \((n_{k}-1)\mathrm{max}(\mu_{n_{k}},1)\). This is because--in the limit--each \(\hat{e}_{r}\hat{e}_{t}\) can still be expected to be correlated, even if marginally, with other products that share at least one index. If we hold one pair of subscripts for some \(\hat{e}_{r}\hat{e}_{t}\) constant, there are \(2\cdot(n_{k}-1)\) others that meet this criterion. Since \(\mu_{n_{k}(n_{k}-1)}\leq n_{k}(n_{k}-1)-1\), the remaining factor should have an order that is equivalent to \(\mathrm{max}(\mu_{n_{k}},1)\). Although this has not been rigorously substantiated, we can provisionally assume this to be the case for this exploration without much loss. This is also because we truly only need it for Case 3. _Case 1:_ \(n_{M}\leq Q\in\mathbb{N}\). Under this assumption, \(\mathrm{Var}(\mathbf{\hat{C}}_{s,k})=O(n^{-4})\). This follows from the fact that the estimator is a sum of a finite number of \(O(n^{-2})\) terms. _Case 2: \(n_{m}\rightarrow\infty,\mathcal{L}_{k}\) fully connected for \(\forall k\in\mathcal{K}\)._ For this case, \(\mathrm{Var}(\hat{\mathbf{C}}_{s,k})=O(n^{-4}n_{k}^{2}\mu_{n_{k}(n_{k}-1)})=O((n^{-1}n_{k})^{4})\) since the sum of weighted \(\hat{e}_{r}\hat{e}_{t}\) s.t.
\(r\neq t\) dominates the variance. Thus, we already know that a sufficient condition for consistency is that \(n_{k}=o(n)\). This is violated when \(K=O(1)\). Assuming that \(n_{k}=o(n)\) implies that \(\mathrm{Var}(\hat{\mathbf{C}}_{s})=O((n^{-1}n_{M})^{4}\cdot K\cdot\max(\mu_{K,*},1))\). Thus, a sufficient condition for consistency is that \(n_{M}^{3}\mathrm{max}(\mu_{K,*},1)=o(n^{3})\). This condition is fulfilled since \(n\) has an asymptotic order that is equivalent to \(Kn_{M}\). Utilization of a Wald-like statistic, however, requires that \(n_{M}^{3}\mathrm{max}(\mu_{K,*},1)=o(n)\), which is obviously non-trivial. These statements are easier to grasp when cluster sizes are equal as a special case. Then \(n=Kn_{k}\) for all \(k\) and the general expression for the order of \(\mathrm{Var}(\hat{\mathbf{C}}_{s})\) is \(O\{K^{-3}\mathrm{max}(\mu_{K,*},1)\}\). Since this implies \(\mathrm{Var}(\hat{\mathbf{C}}_{s})\leq C_{*}K^{-2}\) for some \(C_{*}\in\mathbb{R}^{+}\) even when all clusters are correlated, it is a consistent estimator of its portion of the variance. Again, \(n^{2}K^{-3}\mathrm{max}(\mu_{K,*},1)\) converging to zero is sufficient for Wald usage. Using substitution, we can translate this expression to \(K^{-1}n_{k}^{2}\mathrm{max}(\mu_{K,*},1)\). This means that one sufficient condition for Wald usage provided this specific setup is that \(\mathrm{max}(\mu_{K,*},1)\) is \(O(1)\) and that \(K^{-1}n_{k}^{2}\to 0\), which is more restrictive and less forgiving of invalid specifications. For an example of when this becomes problematic, consider a social network analysis context s.t. the total sample size of possible relationships in a directed network is \(N=n(n-1)\), where \(n\) is the number of actors sampled. Researchers often choose to cluster on actor identity. Then \(K=n\) and \(n_{k}=n-1\). Obviously, then, \(K^{-1}n_{k}^{2}=O(n)\), which diverges. As a result, the denominator of the Wald statistic would fail to converge to a constant.
It would remain random in behavior. _Case 3: \(n_{m}\rightarrow\infty,\mu_{n_{k}}=o(n_{k})\), \(K=O(1)\)._ This case necessitates that \(n_{k}=O(n)\). Under the condition that \(\mu_{n_{k}(n_{k}-1)}\) has the same asymptotic order as \((n_{k}-1)\mathrm{max}(\mu_{n_{k}},1)\), it is then implied that \(\mathrm{Var}(\hat{\mathbf{C}}_{s,k})=O\{n^{-1}\mathrm{max}(\mu_{n_{k}},1)\}\) and \(\mathrm{Var}(\hat{\mathbf{C}}_{s})=O\{n^{-1}\mathrm{max}(\mu_{n_{k}},1)\}\). Therefore, although \(\hat{\mathbf{C}}_{s}\) is consistent, it will converge at a rate that also prevents its straightforward application for Wald statistics. This is because \(n^{2}\mathrm{Var}(\hat{\mathbf{C}}_{s})\) does not converge to zero. To see this, first consider the sub-case s.t. \(\mathrm{max}(\mu_{n_{k}},1)=\mu_{n}\). When this is true, it is also true that \(\mu_{n_{k}}=O(\mu_{n})\). As a consequence, \(n^{2}\mathrm{Var}(\hat{\mathbf{C}}_{s})=O(n\mu_{n})\rightarrow\infty\) and \(\hat{\mathbf{C}}_{s}\) does not converge to a constant when used to construct a test statistic; it remains random. This situation obviously does not change for the remaining sub-case. #### 3.1.1 A Problem of Choice Let us recall our central problem to re-frame the last investigation. We have an additive estimator \(\mathbf{w}_{s}\mathbf{Y}\). However, although we believe that it is consistent, we have no access to \(\mathcal{L}\) or \(D\) and thus have limited, imperfect--or even incorrect--information pertaining to how to model \(\mathrm{Var}(\mathbf{w}_{s}\mathbf{Y})\). There are two important situations to now consider: the situation s.t. it is believed a central limit theorem holds and the one where it is believed that one does not. We temporarily consider the former.
If the researcher believes that a central limit theorem holds, then we have two non-mutually exclusive choices for Wald-like confidence sets: choose a defensible correction via Proposition 1 or use cluster-robust variance estimation under the auspices of a user-specified partition. Hitherto, the primary strategy has been to use \(\hat{\mathbf{C}}_{s}\). From the above analysis, we know that Case 1 estimators are the most robust against invalid partition specifications. However, since they require small cluster sizes, even if asymptotic normality holds, a lot of variability is likely to be missed and a correction will still need to be applied. Estimators in the spirit of Case 2 are also feasible choices, but their 'safe' Wald-like use is predicated upon restrictive conditions. Put shortly, under A2, we require \(n_{k}\) to be \(o(n^{1/3})\) when the number of correlated clusters is asymptotically bounded. If \(\mu_{K,*}\) diverges with sample size, then each \(n_{k}\) must grow at an even slower, but ultimately unknown rate. Still, even if these estimators can be validly used for plug-in Wald statistics, there is no guarantee that further correction is unnecessary. Asymptotically expanding cluster sizes still does not guarantee that \(\mu_{K}=0\). Recall, however: the only explored case s.t. \(\hat{\mathbf{C}}_{s}\) was inconsistent for its portion of the variance was Case 2 when \(K=O(1)\). Otherwise, it is a consistent estimator of the target quantity identified by the theoretical variance model. The rate of convergence might limit its direct use as a plug-in estimator for confidence sets, but it is important to note that it can still be used to investigate the variance structure. Recall that Proposition 2 (in a circumstance of copious dependencies) almost guarantees that \(\phi\) is non-negative.
Conditioning on this supposition, note that \(\mathbf{w}_{s}\mathbf{Y}=\sum_{k=1}^{K_{1}}T_{s,k}=\sum_{k=1}^{K_{2}}T_{s,k,*}\) for any two partitions into \(K_{1}\) and \(K_{2}\) groups. Proposition 1 then allows us to state that \(\{1+\mu_{K_{1}}\phi_{K_{1}}\}\sum_{k=1}^{K_{1}}\mathrm{Var}(T_{s,k})=\{1+\mu_{K_{2}}\phi_{K_{2}}\}\sum_{k=1}^{K_{2}}\mathrm{Var}(T_{s,k,*})\). It then follows WLOG that \([\sum_{k=1}^{K_{2}}\mathrm{Var}(T_{s,k,*})]^{-1}\sum_{k=1}^{K_{1}}\mathrm{Var}(T_{s,k})<1\) implies that \(\mu_{K_{2}}\phi_{K_{2}}<\mu_{K_{1}}\phi_{K_{1}}\). The latter inequality, of course, implies that the \(K_{2}\) partition possesses _less_ missed variability and is more robust. Unsurprisingly, then, one should always aim to choose the partition that results in the _highest_ estimated variance. For ample sample sizes, one can use differences such as \(\hat{\mathbf{C}}_{s,2}-\hat{\mathbf{C}}_{s,1}\) for this purpose. Nevertheless, this strategy does not guarantee a correct choice of partition--and still--a correction might be required. One strategy that synthesizes both approaches is to use various theories of dependence and hence partitions to compare \(\hat{\mathbf{C}}_{s}\) estimators from Cases 1, 2, and 3. If \(n\) is large enough, this should--in the minimum--give the researcher at least imperfect snapshots of the magnitudes of portions of the missed variance. She can then condition on this new (incomplete) knowledge and use it to inform a choice of correction via Proposition 1 after making use of the \(\hat{\mathbf{C}}_{s}\) estimator that yields the highest estimate. If it is believed that a \(\hat{\mathbf{C}}_{s}\) estimator from Case 2 possesses a poor rate of convergence, a Case 1 or Proposition 6 type estimator is more appropriate since they possess a better rate of convergence and can be 'treated' as constants more safely for ample sample sizes.
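The selection rule just described can be sketched as follows. The data-generating design is again an illustrative assumption: when the hypothesized partition matches the true dependence structure, the cluster-robust estimate is larger, so the rule of preferring the partition with the highest estimated variance selects it.

```python
import random

random.seed(3)

# Assumed shared-shock design: true clusters of size m = 4.
K, m = 100, 4
y = []
for _ in range(K):
    z = random.gauss(0, 1)
    y += [2 * z + random.gauss(0, 1) for _ in range(m)]

n = len(y)
w = 1.0 / n
B = w * sum(y)
e = [v - B for v in y]

def c_hat(partition):
    # Cluster-robust estimate under a hypothesized partition (lists of indices).
    return sum((w * sum(e[i] for i in group)) ** 2 for group in partition)

true_part = [list(range(k * m, (k + 1) * m)) for k in range(K)]
perm = list(range(n))
random.shuffle(perm)
wrong_part = [perm[k * m:(k + 1) * m] for k in range(K)]

# The partition aligned with the real dependence yields the larger estimate,
# so the "prefer the higher estimated variance" rule picks it.
assert c_hat(true_part) > c_hat(wrong_part)
```

As the text cautions, this comparison informs but does not guarantee a correct choice, and a correction may still be required.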
The information acquired from the comparisons of cluster-robust estimators can still be factored into the choice of a deterministic correction. For the alternative situation--the one s.t. dependencies are so thick that a central limit theorem fails to hold--the variance estimators can still be used for exploratory ends insofar as at least A3 is true.

## 4 Finite Sample Inference with Dense Dependence

Recall from Section 2: for any additive statistic, \(\mu_{D}\phi_{D}=\mu_{n}\phi_{n}\) and \(\phi_{D}=|L_{D}|^{-1}|L|\cdot\phi_{n}\). Hence, if \(\mu_{n}=o(\mu_{D})\), \(\phi_{D}\to 0\). This offers a new perspective on a known possibility. Statistical dependencies can diverge in number, but \(\mu_{n}\) can be \(o(n)\) or even bounded. When the latter is stipulated, this simply means that \(\mu_{D}(n)\) and \(\phi_{D}(n)\) must be two such functions s.t. this requirement is met. We do not otherwise need to know the details of these functions. Their importance is that they inform us--even in the presence of dense statistical dependencies that prevent convergence to normality--that we can still reason about mean convergence for linear statistics and explore strategies for inference. This is what we accomplish here. We will be working under the supposition that \(\mu_{D}\to\infty\) or that \(\mu_{D}\) is even quite possibly \(O(n)\). We will _prefer_ that \(\mu_{n}=o(n)\), but we will see that this is not a strict requirement. Provided this setup, we show here that comfortably bounding \(\mu_{n}\phi_{n}\) for a given sample is feasibly sufficient for constructing cogent confidence sets through the use of Hoeffding's or Bernstein's inequalities. To do this, we first introduce a type of random variable. The reason we are discussing these variables is that they permit us to extend a handful of known concentration inequalities to circumstances s.t. \(\mu_{D}=O(n)\). 
### \(\mathcal{U}\) Random Variables

Defining this class of random variable requires familiarity with the average functional value from basic analysis. Say \(R_{i}=\int_{\mathbb{R}}1_{y_{i}\in\mathcal{S}_{i}}dy_{i}\), where \(\mathcal{S}_{i}\) is the support of the cumulative distribution function (CDF) of \(Y_{i}\), say \(F(y_{i})\). For counting measures, \(R_{i}=|\mathcal{S}_{i}|\). When \(Y=g(X_{1},\ldots,X_{k})\) for some function \(g\), we can generalize this WLOG to \(R_{\mathbf{x}}=\int_{\mathbb{R}^{k}}1_{(x_{1},\ldots,x_{k})\in\mathcal{S}^{k}}dx_{ 1}\ldots dx_{k}\), where \(\mathcal{S}^{k}\subseteq\mathcal{S}_{1}\times\ldots\times\mathcal{S}_{k}\). Given this setup, and temporarily abandoning subscripts for readability, the average functional value is then \(\mathrm{Av}(Y)=R^{-1}\int_{\mathcal{S}}ydy\). Although we can also define \(\mathrm{Av}_{\mathbf{x}}(Y)=R_{\mathbf{x}}^{-1}\int_{\mathcal{S}^{k}}g(x_{1}, \ldots,x_{k})dx_{1}\ldots dx_{k}\), the former is mostly the focus here for exposition. A random variable \(Y\) is in the \(\mathcal{U}\) class if and only if \(\mathrm{E}Y=\mathrm{Av}(Y)\). Similarly, it is said to be in the \(\mathcal{U}\) class w.r.t. \(\mathbf{X}\) if \(\mathrm{E}Y=\mathrm{Av}_{\mathbf{x}}(Y)\). Finally, a random variable is said to be _regular_ if its associated support is a single interval of real numbers or a set of integers \(\{m,m+1,\ldots,M-1,M\}\) s.t. no integer is missing between \(m\) and \(M\). In this case, it is easy to verify that \(\mathrm{Av}(Y)=2^{-1}(M+m)\) and membership in the \(\mathcal{U}\) class implies that \(\mathrm{E}Y=2^{-1}(M+m)\). A set of basic properties for this class is proven in the supplementary materials. Here, it is sufficient to note only a small handful of them. First, if \(Y\) is a regular random variable s.t. \(\mathrm{E}Y=0\), it is a member of the \(\mathcal{U}\) class if and only if \(m=-M\), i.e., the CDF of \(Y\) is defined on symmetric support. 
Moreover, a regular and continuous random variable \(Y\) is in the \(\mathcal{U}\) class if and only if \(\int_{\mathcal{S}}F(y)dy=\int_{\mathcal{S}}S(y)dy\), where \(S(y)=1-F(y)\), and hence equivalently if \(M-\mathrm{E}Y=\mathrm{E}Y-m\). Pertinently, although random variables with symmetric probability distributions are \(\mathcal{U}\) random variables, they are only a special case. \(\mathcal{U}\) random variables can be asymmetric. Lastly, it can be easily shown that for any constant \(c\), if \(Y\in\mathcal{U}\), i.e., it is in the class of \(\mathcal{U}\) random variables, then \(cY\in\mathcal{U}\) and \(Y+c\in\mathcal{U}\). These variables are quite common. For instance, we have already established that all regular and symmetric bounded random variables are in this group. If \(Y\) has a density \(f(y)\propto\sigma_{Y}\) and \(\sigma_{Y}\to 0\) as \(n\to\infty\), then \(Y\) behaves more and more like a \(\mathcal{U}\) random variable as \(n\) becomes arbitrarily large. We will also see that the additive error variables associated with density or mass approximation are also members of the class when the approximating function is correct on average. More important for our purposes, however, is the connection between \(\mathcal{U}\) status and the common linear model: \(\mathbf{Y}=\mathbf{x}\boldsymbol{\beta}+\boldsymbol{\epsilon}\). Two conditions are ubiquitously supposed for its usage: (1) \(\operatorname{E}(\boldsymbol{\epsilon}|\mathbf{x})=0\), and (2) \(\epsilon_{i}\sim N(0,\sigma_{i}^{2})\) for \(\forall i\). The latter assumption, however, is, strictly speaking, false. In reality, no observable error distribution for a \(Y_{i}\) on finite support could ever truly be normally distributed. This common assumption is best understood as a mathematically convenient statement that results in (hopefully) negligible error. 
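The defining identity \(\mathrm{E}Y=\mathrm{Av}(Y)\) is easy to check numerically, and the toy distribution below (an arbitrary choice, not taken from the text) also illustrates that \(\mathcal{U}\) membership does not require symmetry:

```python
# regular integer support {0, 1, 2, 3}: Av(Y) = (m + M)/2 = 1.5
support = [0, 1, 2, 3]
p = [0.1, 0.5, 0.2, 0.2]                       # asymmetric probabilities
av = sum(support) / len(support)               # functional average (counting measure)
ey = sum(s_ * q for s_, q in zip(support, p))  # expectation under p
# ey equals av, so Y is a U random variable despite the asymmetric pmf
```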
For instance, we could replace the traditional assumption with the assertion that each \(\epsilon_{i}\) possesses a normal distribution that has been symmetrically truncated at a value that is extreme enough to leave out negligibly small probabilities. Such an assertion would be empirically isomorphic to the traditional assumption--and importantly--it would result in each \(\epsilon_{i}\) being a \(\mathcal{U}\) random variable. Therefore, it is safe to work in a universe s.t. \(\mathbf{w}_{s}\mathbf{Y}-\beta_{s}=\sum_{i=1}^{n}w_{s,i}\epsilon_{i}\) is a sum of \(\mathcal{U}\) functions. Positing that \(\{\epsilon_{i}\}_{i\in I}\) is a set of regular and symmetric \(\mathcal{U}\) random variables is also less restrictive than assuming truncated normality since many more distributions meet these criteria. In the fixed regression setting, \(\mathcal{U}\) status can be feasibly verified using the same plot that is used to check for violations of linearity. Namely, we would look at the residual versus fitted plot to confirm that the scatter of points is mean-zero at any location of the graph, and that they are randomly dispersed between two points that are roughly equidistant from the horizontal axis. Under the null assumption of a well-specified model, this is sufficient. More generally, however, when the \(Y_{i}\) are continuous and identically distributed, we could verify that the behavior of the empirical CDF matches the sum-symmetric behavior described in the previous paragraph. Otherwise, for the discrete case, the empirical probability mass function could be plotted against the observed values to see if it demonstrates a sum-symmetric reallocation of mass from the area around the mode of the distribution to the tails.

### Finite Sample Inference for \(\mathcal{U}\) Random Variables

This section accomplishes two primary objectives. One, it introduces and discusses an additional regularity condition that is required for further theoretical work. 
Two, it utilizes this condition to extend the Hoeffding and Bernstein inequalities to sums of regular and symmetric \(\mathcal{U}\) random variables under dense dependence. For this section, \(\mathbf{w}\) is any \(1\times n\) vector of constants. Moreover, \(\operatorname{Av}_{*}(\cdot)\) indicates a functional average taken over a Cartesian product of sets, i.e., over \(\mathcal{S}_{*}^{k}=\mathcal{S}_{1}\times\ldots\times\mathcal{S}_{k}\). Since the random variables being considered are bounded, it is helpful to know that all referenced objects exist for finite \(n\).

**A5.** Let \(\boldsymbol{\epsilon}\) be an \(n\times 1\) vector of bounded random variables s.t. \(\operatorname{E}\boldsymbol{\epsilon}=0\). For all \(s>0\), it is then true that \(\max\left(\operatorname{E}\left(\exp\{s\cdot\mathbf{w}\boldsymbol{\epsilon} \}\right),\operatorname{E}\left(\exp\{s\cdot(-\mathbf{w}\boldsymbol{\epsilon} )\}\right)\right)\leq\operatorname{Av}_{*}\left(\exp\{s\cdot\mathbf{w} \boldsymbol{\epsilon}\}\right)\).

This new condition places an implicit bound on the behavior of the distributions of the marginal random variables and their joint distribution simultaneously. This differs from traditional approaches, which mostly place restrictions on the dependency structure. Put succinctly, for A5 to be true, the allocation of mass or density has to be somewhat balanced for the marginal distributions, and, in addition, the joint distribution has to be biased away from elements of the joint support that produce larger values of the target statistic. Mutual independence alone does not guarantee A5. To see this, assume that \(\operatorname{E}(\exp\{sw_{i}\epsilon_{i}\})>\operatorname{Av}(\exp\{sw_{i} \epsilon_{i}\})\) for \(\forall i\). 
Then \(\operatorname{E}\left(\exp\{s\cdot\mathbf{w}\boldsymbol{\epsilon}\}\right)= \prod_{i}^{n}\operatorname{E}(\exp\{sw_{i}\epsilon_{i}\})>\prod_{i}^{n} \operatorname{Av}(\exp\{sw_{i}\epsilon_{i}\})=\operatorname{Av}_{*}\left(\exp \{s\cdot\mathbf{w}\boldsymbol{\epsilon}\}\right)\) and A5 is false. Pertinently, this does not make A5 more restrictive than mutual independence in general. It simply informs us that the conditions that would supply it are somewhat complex when there are no constraints placed on the marginal distributions. The informal description in the previous paragraph hints that unimodal and symmetric marginal distributions prevent this quandary from occurring under mutual independence. This indeed turns out to be true. Before showing this, we prove a pivotal lemma. Note that the \(\implies\) symbol is readable as 'implies.' **Lemma 1**.: _Let \(Z\in\mathcal{U}\) be regular and continuous s.t. \(\operatorname{E}Z=0\) and \(\max(\mathcal{S})=M\). Let \(s>0\) and \(w\in\mathbb{R}\) be arbitrary constants. Then \(\operatorname{Av}(\exp\{swZ\})\leq\exp\{24^{-1}s^{2}w^{2}R^{2}\}\)._ Proof.: We note that \(\int_{S}z^{k}dz=0\) for all odd integers \(k\) when \(Z\) is a continuous and regular \(\mathcal{U}\) random variable s.t. \(\operatorname{E}Z=0\). This is because \(0=2^{-1}(M+m)\), which implies that \(m=-M\). Hence \(\int_{S}z^{k}dz=(k+1)^{-1}\{M^{k+1}-(-M)^{k+1}\}=0\) since \(k+1\) is even. This implies that \(\operatorname{Av}(Z^{k})=0\) for all odd integers \(k\). Additionally, when \(k\) is even, observe that \(\int_{S}z^{k}dz=(k+1)^{-1}2M^{k+1}\) and hence \(\operatorname{Av}(Z^{k})=(k+1)^{-1}M^{k}\). Let \(2\mathbb{N}_{0}=\{2k\,|\,k\in\mathbb{N}\cup\{0\}\}\). 
From here, for any constants \(s>0\) and \(w\in\mathbb{R}\): \[\exp\{swZ\}=\sum_{i=0}^{\infty}\{i!\}^{-1}s^{i}w^{i}Z^{i}\implies\] \[\mathrm{Av}(\exp\{swZ\})=\sum_{i=0}^{\infty}\{i!\}^{-1}s^{i}w^{i}\mathrm{Av}(Z^{i })=\sum_{i\in 2\mathbb{N}_{0}}\{i!\}^{-1}\{i+1\}^{-1}s^{i}w^{i}M^{i}=\sum_{i=0}^{ \infty}\{(2i)!\}^{-1}\{2i+1\}^{-1}w^{2i}s^{2i}M^{2i}\] Now, we compare two sequences: \(S_{1}=(2k)!\cdot(2k+1)\) and \(S_{2}=k!\cdot 6^{k}\) for \(k\in\mathbb{N}\cup\{0\}\). We will prove that \(S_{1}\geq S_{2}\) for all \(k\in\mathbb{N}\cup\{0\}\) by induction. Since \(S_{1}(0)=S_{2}(0)=1\) and \(2\cdot(2k+1)\geq 6\) for \(k\geq 1\), the base cases are established. Now, suppose \((2n)!\cdot(2n+1)\geq n!\cdot 6^{n}\) for \(n\geq 1\). \[(2n)!\cdot(2n+1) \geq n!\cdot 6^{n}\] \[2\cdot(2n+1) \geq 6\implies\] \[2\cdot(2n+1)\cdot(2n)!\cdot(2n+1) \geq n!\cdot 6^{n+1}\implies\] \[2\cdot(n+1)\cdot(2n+1)\cdot(2n)!\cdot(2n+1) \geq(n+1)\cdot n!\cdot 6^{n+1}\implies\] \[\{2(n+1)\}!\cdot(2n+1) \geq(n+1)!\cdot 6^{n+1}\implies\] \[\{2(n+1)\}!\cdot\{2(n+1)+1\} \geq(n+1)!\cdot 6^{n+1}\] Therefore, \(S_{1}^{-1}=\{(2k)!\cdot(2k+1)\}^{-1}\leq S_{2}^{-1}=\{k!\cdot 6^{k}\}^{-1}\) for all \(k\in\mathbb{N}\cup\{0\}\). Hence: \[\mathrm{Av}(\exp\{swZ\})=\sum_{i=0}^{\infty}\{(2i)!\}^{-1}\{2i+1\}^{-1}w^{2i}s^{ 2i}M^{2i}\leq\sum_{i=0}^{\infty}\{i!\}^{-1}\{6^{i}\}^{-1}w^{2i}s^{2i}M^{2i}= \sum_{i=0}^{\infty}\{i!\}^{-1}\{6^{-1}s^{2}w^{2}M^{2}\}^{i}=\exp\{6^{-1}s^{2}w^ {2}M^{2}\}\] Since \(M=2^{-1}R\) for mean-zero regular \(\mathcal{U}\) variables, \(\mathrm{Av}(\exp\{swZ\})\leq\exp\{24^{-1}s^{2}w^{2}R^{2}\}\). **Proposition 8**.: _Suppose \(Z\in\mathcal{U}\) is a regular and continuous random variable s.t. \(EZ=0\). If \(EZ^{k}\leq\{2^{k}\cdot(k+1)\}^{-1}R^{k}\) for all even integers \(k\) and \(EZ^{k}\leq 0\) for odd integers \(k\), then \(E(exp\{sZ\})\leq Av(exp\{sZ\})\) for any constant \(s>0\). 
If \(Z\) has a symmetric probability distribution, \(s\) can be any constant._ Proof.: From Lemma 1, we know that \(\mathrm{Av}(\exp\{sZ\})=\sum_{i\in 2\mathbb{N}_{0}}\{i!\}^{-1}\{2^{i}\cdot(i+1)\}^ {-1}s^{i}R^{i}\). Therefore: \[\mathrm{E}(\exp\{sZ\})=\sum_{i=0}^{\infty}\{i!\}^{-1}s^{i}\mathrm{E}(Z^{i}) \leq\sum_{i\in 2\mathbb{N}_{0}}\{i!\}^{-1}s^{i}\mathrm{E}Z^{i}\leq\sum_{i\in 2 \mathbb{N}_{0}}\{i!\}^{-1}\{2^{i}\cdot(i+1)\}^{-1}s^{i}R^{i}=\mathrm{Av}(\exp \{sZ\})\] The expansion of the proposition to the case of symmetric probability distributions is quick. Under this case, all odd moments are also zero. Therefore, the first inequality of the latter sequence of logic becomes a strict equality. Since only even indexes are left, \(s\) can be any constant. **Lemma 2**.: _If \(Z\) is a symmetric, regular, and unimodal random variable s.t. \(EZ=0\) and \(\max(\mathcal{S})=M\), then \(EZ^{k}\leq Av(Z^{k})\) for all even integers \(k\) when \(Z\) is continuous. If \(Z\) is discrete and symmetric with support \(\mathcal{S}=\{-c_{M},-c_{M-1},\ldots,0,\ldots,c_{M-1},c_{M}\}\) w.r.t. constants \(c_{1}<c_{2}<\cdots<c_{M-1}<c_{M}\), then \(EZ^{k}\leq\{|\mathcal{S}|-1\}^{-1}|\mathcal{S}|\cdot Av(Z^{k})\)._ Proof.: We provide a proof for the continuous case first. By definition, \(f(z)\) is increasing on \([-M,0]\) and decreasing on \([0,M]\) if \(Z\) is unimodal and symmetric. However, \(z^{k}\) is decreasing on \([-M,0]\) and increasing on \([0,M]\) when \(k\) is an even integer. WLOG, we reason w.r.t. \([-M,0]\subset\mathcal{S}\). By Chebyshev's integral inequality, it is then true that \(\int_{-M}^{0}z^{k}f(z)dz\leq M^{-1}\int_{-M}^{0}z^{k}dz\cdot\int_{-M}^{0}f(z)dz\) since \(z^{k}\) and \(f(z)\) have opposite monotonicities on \([-M,0]\). Furthermore, since \(\int_{-M}^{0}f(z)dz=2^{-1}\) and \((-c)^{k}f(-c)=c^{k}f(c)\) for \(\forall c\in\mathcal{S}\) when \(k\) is an even integer and \(f(z)\) is symmetric, \(2\cdot\int_{-M}^{0}z^{k}f(z)dz=\mathrm{EZ}^{k}\). 
It is also obviously the case that \(2\cdot\int_{-M}^{0}z^{k}dz=\int_{-M}^{M}z^{k}dz\) since \(k\) is an even integer. Therefore, \(\int_{-M}^{0}z^{k}f(z)dz\leq M^{-1}\int_{-M}^{0}z^{k}dz\cdot\int_{-M}^{0}f(z)dz\) implies that \(\mathrm{EZ}^{k}\leq(2M)^{-1}\cdot\int_{-M}^{M}z^{k}dz=\mathrm{Av}(Z^{k})\). Now, for the discrete case, we will say that \(\mathcal{S}=\{-M,-(M-1),\ldots,M-1,M\}\) WLOG and that \(\sum_{z=-M}^{-1}f(z)=\sum_{z=1}^{M}f(z)\) and therefore that \(2\cdot\sum_{z=-M}^{-1}f(z)=1-f(0)\). Once again, we also note that \(f(z)\) is increasing on \(\{-M,\ldots,-1\}\) and \(z^{k}\) is decreasing on this same set for even integers \(k\). Thus, by the Chebyshev sum inequality, \(\sum_{z=-M}^{-1}z^{k}f(z)\leq M^{-1}\sum_{z=-M}^{-1}z^{k}\cdot\sum_{z=-M}^{-1}f(z) \implies 2\cdot\sum_{z=-M}^{-1}z^{k}f(z)\leq M^{-1}\cdot 2\cdot\sum_{z=-M}^{-1}z^{k} \cdot\{2^{-1}-2^{-1}f(0)\}\). Since \(z\cdot f(z)=0\) and \(z^{k}=0\) when \(z=0\), it is then implied that \(\mathrm{EZ}^{k}\leq(2M)^{-1}\sum_{\mathcal{S}}z^{k}\cdot\{1-f(0)\}\leq(2M)^{-1} \sum_{\mathcal{S}}z^{k}=\{|\mathcal{S}|-1\}^{-1}|\mathcal{S}|\cdot\mathrm{Av}(Z^{k})\). The last sequence of logic is true because \(|\mathcal{S}|=2M+1\) and \(f(0)>0\). **Corollary 2**.: _Suppose \(Z\) is a regular, symmetric \(\mathcal{U}\) random variable s.t. \(EZ=0\) and \(\text{max}(\mathcal{S})=M\). If \(Z\) is continuous, then \(E(\text{exp}\{sZ\})\leq\text{Av}(\text{exp}\{sZ\})\) for any \(s\in\mathbb{R}\). If \(Z\) is discrete as defined in Lemma 2, then \(E(\text{exp}\{sZ\})\leq\{2M\}^{-1}\{2M+1\}\cdot\{\text{Av}(\text{exp}\{sZ\})- \{2M+1\}^{-1}\}\) for any \(s\in\mathbb{R}\)._ Proof.: Let \(s\) be arbitrary. For the continuous case, Lemma 2 and Proposition 8 imply the result. For the discrete case, we again note that \(\text{Av}(Z^{k})=0\) when \(k\) is an odd integer because \(\sum_{z=-M}^{-1}z^{k}=-\sum_{z=1}^{M}z^{k}\) and \(z^{k}=0\) when \(z=0\). 
From the premises and Lemma 2, we know that \(\text{E}(\text{exp}\{sZ\})\leq 1+(2M)^{-1}(2M+1)\cdot\sum_{i=1}^{\infty}\{i! \}^{-1}s^{i}\text{Av}(Z^{i})\). Then: \[\text{E}(\text{exp}\{sZ\})\leq 1+(2M)^{-1}(2M+1)\cdot\sum_{i=1}^{\infty}\{i! \}^{-1}s^{i}\text{Av}(Z^{i})=1+(2M)^{-1}(2M+1)\cdot\{\text{Av}(\text{exp}\{sZ \})-1\}\] The statement of interest arrives via basic algebra and is therefore omitted. Proposition 8 and the subsequent statements establish that symmetric and regular \(\mathcal{U}\) random variables--precisely the type of variable that often arises in linear regression settings, as aforementioned--are good candidates for A5. Again, this does not mean that other types of marginal distributions that are not strictly \(\mathcal{U}\), regular, or symmetric are not. It is possible to imagine how different dynamics between marginal and joint distributions could still deliver A5. However, a class of variables that can serve as a basic case under mutual independence provides an anchor. Constraints on the joint distribution are next. From here, the use of A5 can be justified via recourse to another set of identities concerning \(\mathcal{U}\) variables. Unfortunately--in the continuous case--these identities require the existence of a high-dimensional density. Again say \(Z\) is an arbitrary bounded random variable with mass or density function \(f(z)\) and let \(L=f(z_{1},\ldots,z_{n})\) denote a joint mass function or density w.r.t. \(Y=g(Z_{1},\ldots,Z_{n})\) for some function \(g\). The two identities are: \[\text{Av}(Z)=\text{E}Z+R^{-1}\sigma_{Z,f^{-1}(Z)} \tag{1}\] \[\text{Av}\{g(Z_{1},\ldots,Z_{n})\}=\text{E}Y+R_{\mathbf{z}}^{-1}\sigma_{g(Z_ {1},\ldots,Z_{n}),L^{-1}} \tag{2}\] For random variables on finite support, inverse mass functions--both joint and marginal--will always exist for finite \(n\). Remember: the support of a marginal or joint distribution is the closure of the set of values s.t. \(f(z)>0\) and \(L>0\) respectively. 
Hence, \(L^{-1}\) and \(f^{-1}(z)\) are well-defined for mass functions. They are also well-defined for marginal and joint densities of bounded random variables if the basic objects exist: a precondition that requires the absolute continuity of the cumulative probability functions. Immediately, we can see that a variable \(Z\) is in the \(\mathcal{U}\) class if and only if it is uncorrelated with its own inverse mass or density. Just the same w.r.t. Eq. (2), we can see that \(\text{E}Y\leq\text{Av}\{g(Z_{1},\ldots,Z_{n})\}\) if and only if \(\sigma_{g(Z_{1},\ldots,Z_{n}),L^{-1}}\geq 0\). This of course means that as \(g(Z_{1},\ldots,Z_{n})\) takes larger values, \(L^{-1}\) tends to take larger ones as well. Informally, this indicates that \(L\) tends to take _smaller_ values as \(g(Z_{1},\ldots,Z_{n})\) takes larger ones. Hence, at least in a linear sense, the joint mass function or density tends to place smaller amounts of probability or density on the \((z_{1},\ldots,z_{n})\in\mathcal{S}^{n}\) that produce large values of \(Y\). In other words, if the joint density or mass function displays any behavior akin to the concentration of measure and the marginal distributions behave in the manner previously stated, one can expect \(\text{E}Y\leq\text{Av}\{g(Z_{1},\ldots,Z_{n})\}\) to hold as a condition. Recall that a probability measure is said to concentrate if it places most of its measurement on a subset of values in the support that neighbor the expected value of the random variable. If the values in an arbitrarily small neighborhood of the expected value continue to accumulate measure with sample size, the random variable converges in probability to a constant. Vitally, this suggests that A5 holds when--together with the aforementioned informal restrictions on the marginal distributions--A3 is true (1) and it is _also_ the case that \(L\) exists (2) and \(\mathcal{S}^{n}\) remains rectangular for arbitrarily large (but finite) samples (3). 
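In the discrete (counting-measure) case, Eq. (1) holds exactly with \(R=|\mathcal{S}|\), since \(\mathrm{E}[Z/f(Z)]=\sum_{z\in\mathcal{S}}z\) and \(\mathrm{E}[1/f(Z)]=|\mathcal{S}|\). The pmf below is an arbitrary illustration, not one from the text:

```python
# an asymmetric pmf on the regular support {0, 1, 2, 3}
support = [0, 1, 2, 3]
p = [0.4, 0.3, 0.2, 0.1]
R = len(support)                             # counting-measure analogue of the range
ez = sum(z * q for z, q in zip(support, p))  # E Z
av = sum(support) / R                        # Av(Z)
# Cov(Z, 1/f(Z)) = E[Z / f(Z)] - E Z * E[1 / f(Z)]
e_z_over_f = sum(z * q * (1 / q) for z, q in zip(support, p))
e_one_over_f = sum(q * (1 / q) for q in p)
cov = e_z_over_f - ez * e_one_over_f
# Eq. (1): Av(Z) = E Z + R^{-1} * Cov(Z, f^{-1}(Z)) holds identically
```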
Consider the first property again: \(\mu_{n}=o(n)\). When the variances of the \(\epsilon_{i}\) are finite, \(\mu_{n}=o(n)\) implies that \(\mathbf{w}_{n}\boldsymbol{\epsilon}\) converges in probability to zero. Hence, it will be the case that \(\sum_{i=1}^{n}w_{i}\epsilon_{i}=g(\epsilon_{1},\ldots,\epsilon_{n})\approx 0\) for sufficiently sized \(n\) with high probability, which implies that the \((e_{1},\ldots,e_{n})\in\mathcal{S}^{n}\) that place \(\mathbf{w}_{n}\boldsymbol{\epsilon}\) farther from zero will be afforded smaller density, at least overall in some sense, as \(n\) gets larger. This, in turn, makes it more likely that \(\text{E}(\text{exp}(s\mathbf{w}\boldsymbol{\epsilon}))\) is bounded by the target value. Again, the second property (and bounded supports) allow for Eq. (2) and the referenced objects to exist. The third property is important to explore in more detail. Essentially, it states that the mutual dependence between the \(\epsilon_{i}\) does not remove all density from some \((e_{1},\ldots,e_{n})\in\mathcal{S}^{n}_{*}\), even as \(n\) gets large. Importantly, this does _not_ mean that each \((e_{1},\ldots,e_{n})\in\mathcal{S}^{n}_{*}\) needs to be afforded a non-negligible density for all sample sizes. The density can become arbitrarily small. It simply needs to be non-zero for all \(n\) considered. If this holds, then \(\mathcal{S}^{n}=\mathcal{S}^{n}_{*}\) and thus \(\operatorname{Av}(\exp\{s\mathbf{w}\boldsymbol{\epsilon}\})=\operatorname{Av}_{*}(\exp\{s\mathbf{w}\boldsymbol{\epsilon}\})\). Importantly, we do not need the last two of these properties to hold in the true limit. We only need them to hold for sufficiently sized, but ultimately finite \(n\). For clarity, we explore yet another characterization. Define a function \(\eta(z)=f(z)-h(z)\), where \(f(z)\) and \(h(z)\) are two densities defined on the same support WLOG. Then it is apparent that \(\int_{\mathcal{S}}\eta(z)dz=0\). 
Now rearrange and multiply by \(z\): \(z\cdot f(z)=z\cdot h(z)+z\cdot\eta(z)\). Integrating across, we arrive at \(\operatorname{E}\!Z=\operatorname{E}_{h}\!Z+\int_{\mathcal{S}}z\cdot\eta(z)dz\), where \(\operatorname{E}_{h}\) indicates an expectation taken w.r.t. \(h(z)\). Another set of identities results when \(h(z)=R^{-1}\), the uniform density. Then the above becomes \(\operatorname{E}\!Z=\operatorname{Av}(Z)+\int_{\mathcal{S}}z\cdot\eta(z)dz\) and it follows that \(\int_{\mathcal{S}}z\cdot\eta(z)dz=-R^{-1}\sigma_{Z,f^{-1}(Z)}\). Just the same, \(\int_{\mathcal{S}}g(z_{1},\ldots,z_{n})\cdot\eta(z_{1},\ldots,z_{n})dz_{1} \cdots dz_{n}=-R_{\mathbf{z}}^{-1}\sigma_{g(Z_{1},\ldots,Z_{n}),L^{-1}}\). Therefore, \(\mathcal{U}\) status is directly related to \(\eta(\mathbf{z})\) and \(g(\mathbf{z})\) being orthogonal. In the supplementary materials, it is also proven that \(\eta(Z)\in\mathcal{U}\) is a sufficient condition for \(Z\in\mathcal{U}\) when \(h(z)=R^{-1}\). Although this condition seems abstract, it simply means that \(\operatorname{E}\!\left\{\eta(Z)\right\}=0\), i.e., that the expected value of the 'approximating' density matches the expected value of the true one. If \(\operatorname{E}\!L\to R_{\mathbf{z}}^{-1}\) as \(n\) grows, this is also sufficient for \(\operatorname{E}\!\left\{g(Z_{1},\ldots,Z_{n})\right\}\to\operatorname{Av}\! \left\{g(Z_{1},\ldots,Z_{n})\right\}\) and hence A5 when \(\mathcal{S}^{n}_{*}=\mathcal{S}^{n}\).

### Main Results

Before using Lemma 1 to derive a sharper version of Hoeffding's inequality, we first extend classical results. A sub-\(\mathcal{U}\) random variable is one such that \(\operatorname{Av}(Z)\leq\operatorname{E}Z\). **Lemma 3** (Extension of Hoeffding's lemma).: _Let \(Z\) be a random variable defined on bounded support \(\mathcal{S}\) with minimum \(m\) and maximum \(M\). Then for any \(s>0\), \(Av\left(exp\{sZ\}\right)\leq exp\{sAv(Z)+8^{-1}s^{2}R^{2}\}\). 
If \(\operatorname{E}\!Z=0\) and \(Z\) is sub-\(\mathcal{U}\), \(Av\left(exp\{sZ\}\right)\leq exp\{8^{-1}s^{2}R^{2}\}\)._ Proof.: This proof follows the logic of the proof for Hoeffding's lemma since \(Av(\cdot)\) is a linear operator with the monotonic property over inequalities. Since \(exp\{sz\}\) is a convex function of \(z\), for all \(z\in\mathcal{S}\): \[exp\{sz\}\leq R^{-1}\left((M-z)exp\{sm\}+(z-m)exp\{sM\}\right)\] Then, denoting \(Av(Z)=Av\): \[Av\left(exp\{sZ\}\right)\leq R^{-1}\left((M-Av)exp\{sm\}+(Av-m)exp\{sM\}\right)\] Now, specify a function \(g(x)=R^{-1}mx+log\left((M-Av)+(Av-m)exp\{x\}\right)-log(R)\). Then it is easily demonstrable that \(g(0)=0,g^{\prime}(0)=R^{-1}Av\) and \(g^{\prime\prime}(x)\leq 4^{-1}\) for \(\forall x\). Hence, utilizing a Taylor expansion around \(0\), since \(g(x)=g(0)+xg^{\prime}(0)+2^{-1}x^{2}g^{\prime\prime}(x_{*})\) for some \(x_{*}\) that is between \(0\) and \(x\), it is true that \(g(x)\leq R^{-1}xAv+8^{-1}x^{2}\). This implies that \(g(sR)\leq sAv+8^{-1}s^{2}R^{2}\). Hence, \(Av\left(exp\{sZ\}\right)\leq exp\{sAv+8^{-1}s^{2}R^{2}\}\). Now, since \(s>0\), when \(\operatorname{E}\!Z=0\) and \(Z\) is sub-\(\mathcal{U}\), it is true that \(Av(Z)\leq 0\). Therefore, \(Av\left(exp\{sZ\}\right)\leq exp\{8^{-1}s^{2}R^{2}\}\). **Theorem 1** (Extension of Hoeffding's Inequality).: _Let \(\epsilon_{1},\ldots,\epsilon_{n}\) be defined on bounded supports \(\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\) such that for an arbitrary \(i\in\{1,\ldots,n\}\), \(m_{i}\leq\epsilon_{i}\leq M_{i}\) with probability one and \(E\epsilon_{i}=0\). Let \(\mathbf{w}\) be a vector of constants s.t. \(S_{n}=\mathbf{w}\mathbf{\epsilon}=\sum_{i=1}^{n}w_{i}\epsilon_{i}\). Suppose A5 for some fixed \(n\) and let \(\tau>0\) be arbitrary. 
Then \(Av(\epsilon_{i})\leq 0\) for \(\forall i\) and \(\mathbf{w}>\mathbf{0}\)\(\implies\)\(Pr(|S_{n}|>\tau)\leq 2exp\{-(\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2})^{-1}2\tau^{2}\}\)._ _Furthermore, if there exists some \(N\in\mathbb{N}\) such that for \(\forall n>N\) it is true that the above conditions hold, then the stated inequality is true for all \(n>N\) provided all objects exist._ Proof.: Again, note that for any \(s>0\), all integrals exist--including those of A5--since each \(\epsilon_{i}\) is a bounded random variable. This can easily be shown with an extended application of Fubini's theorem. Let \(\tau\) and \(s\) be arbitrary real numbers such that \(\tau>0\) and \(s>0\). We will proceed with the statement \(Pr(S_{n}>\tau)\). Then by Markov's inequality and A5: \[\Pr\left(\exp\{sS_{n}\}>\exp\{s\tau\}\right)\leq\operatorname{E}\left(\exp\{sS_ {n}\}\right)\exp\{-s\tau\}\leq\operatorname{Av}_{*}\left(\exp\{s(\sum_{i=1}^{n}w_ {i}\epsilon_{i})\}\right)\exp\{-s\tau\}\] However, \(\mathrm{Av}_{*}\left(\exp\{s(\sum_{i=1}^{n}w_{i}\epsilon_{i})\}\right)\exp[-s\tau]= \prod_{i=1}^{n}\mathrm{Av}\left(\exp\{sw_{i}\epsilon_{i}\}\right)\exp[-s\tau]\). By Lemma 3, then, since each \(sw_{i}>0\) and our premise asserts that \(\epsilon_{i}\) is sub-\(\mathcal{U}\) for \(\forall i\): \[\prod_{i=1}^{n}\mathrm{Av}\left(\exp\{sw_{i}\epsilon_{i}\}\right)\exp[-s\tau] \leq\prod_{i=1}^{n}\exp\{8^{-1}s^{2}w_{i}^{2}R_{i}^{2}\}\mathrm{exp}[-s\tau]= \exp[8^{-1}s^{2}\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2}-s\tau]\] From here, we proceed by finding the minimum of \(\exp[8^{-1}s^{2}\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2}-s\tau]\) for \(s\in\mathbb{R}^{+}\). It is easy to verify that this function is minimized at \(s=(\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2})^{-1}4\tau\). Thus, \(\Pr(S_{n}>\tau)\leq\exp\{-(\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2})^{-1}2\tau^{2}\}\). This proves one direction of the main statement. For the other, one is required to proceed in an analogous fashion for \(\Pr(-S_{n}>\tau)\). 
One simply uses A5 and Lemma 3 again. For this reason, the steps are omitted. Hence \(\Pr(S_{n}>\tau)+\Pr(S_{n}<-\tau)\leq 2exp\{-(\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2})^{- 1}2\tau^{2}\}\implies\Pr(|S_{n}|>\tau)\leq 2exp\{-(\sum_{i=1}^{n}w_{i }^{2}R_{i}^{2})^{-1}2\tau^{2}\}\). The asymptotic statement of the theorem can be achieved by setting some \(N\in\mathbb{N}\) such that A5 holds for all natural numbers greater than \(N\) and then letting \(n\in\mathbb{N}\) be arbitrary such that \(n>N\). Then the proofs proceed exactly as before. This theorem is very useful for the construction of valid confidence intervals under almost arbitrary conditions of probabilistic dependence, and without any need to specify or completely understand the latent system of dependencies, or even many features of the marginal or joint probability distributions. Again, \(\mu_{D}\) could be \(O(n)\). All that is required in terms of dependence is A5. **Example 1**.: Say \(\{Y_{i}\}_{i\in\zeta}\) is an identically distributed sample of sub-\(\mathcal{U}\) random variables s.t. \(\mathrm{E}Y_{i}=\mu\). Also say \(m,M\) are the minimum and maximum of the associated support, respectively. Furthermore, say \(\mu_{D}=n-1\), but A5 is fulfilled. Then \([\tilde{Y}-R\cdot\sqrt{(2n)^{-1}\mathrm{log}(2/\alpha)},\tilde{Y}+R\cdot\sqrt {(2n)^{-1}\mathrm{log}(2/\alpha)}]\) is at least a \(1-\alpha\) confidence set for \(\mu\) and \(\alpha\in(0,1)\). If \(m\geq 0\) and \(m,M\) are unknown, then \(\{\mu\mid\left(1+\sqrt{2n^{-1}log(\alpha^{-1}2)}\right)^{-1}\tilde{Y}\leq\mu \leq\left(1-\sqrt{2n^{-1}log(\alpha^{-1}2)}\right)^{-1}\tilde{Y}\}\) is also at least a \(1-\alpha\) confidence set when \(n>2log(\alpha^{-1}2)\). This next lemma allows us to extend Bernstein's inequality, which can sometimes provide much sharper confidence sets. It is also an adaptation of a classic result. **Lemma 4**.: _Let \(Z\) be a random variable such that \(|Z|\leq M\) almost surely, \(EZ=0\), and \(Z\) is sub-\(\mathcal{U}\). 
Then for any \(s>0\), \(Av(exp\{sZ\})\leq exp\{M^{-2}Av(Z^{2})(exp\{sM\}-1-sM)\}\)._ Proof.: \[\exp\{sZ\}=1+sZ+\sum_{k=2}^{\infty}(k!)^{-1}s^{k}Z^{k}\leq 1+sZ+\sum_{k=2}^{ \infty}(k!)^{-1}s^{k}|Z|^{k}=1+sZ+\sum_{k=2}^{\infty}(k!)^{-1}s^{k}Z^{2}|Z|^{k -2}\] However, the last expression is less than or equal to: \[1+sZ+\sum_{k=2}^{\infty}(k!)^{-1}s^{k}Z^{2}M^{k-2}=1+sZ+M^{-2}Z^{2}\sum_{k=2}^ {\infty}(k!)^{-1}s^{k}M^{k}=1+sZ+M^{-2}Z^{2}(exp\{sM\}-1-sM)\] Hence, \(\mathrm{Av}(\exp\{sZ\})\leq 1+M^{-2}\mathrm{Av}(Z^{2})(exp\{sM\}-1-sM)\leq \exp\{M^{-2}\mathrm{Av}(Z^{2})(exp\{sM\}-1-sM)\}\). The first of the previous inequalities follows from the fact that \(\mathrm{Av}(Z)\leq 0\). The second follows from the fact that \(1+x\leq exp\{x\}\) for \(\forall x\in\mathbb{R}\). **Theorem 2**.: _Let \(\epsilon_{1},\ldots,\epsilon_{n}\) be defined on bounded supports \(\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\) and \(\mathbf{w}\) be a vector of positive constants s.t. \(S_{n}=\mathbf{w}\boldsymbol{\epsilon}=w\cdot\sum_{i=1}^{n}\epsilon_{i}\) for \(w>0\). Moreover, for an arbitrary \(i\in\{1,\ldots,n\}\), say \(|\epsilon_{i}|\leq M\) with probability one and \(E\epsilon_{i}=0\). Suppose A5 for some fixed \(n\) and let \(\tau>0\) be arbitrary. Finally, define a function \(h(u)=(u+1)log(u+1)-u\). Then \(\Pr(|S_{n}|>\tau)\leq 2exp\left(-M^{-2}\sum_{i=1}^{n}Av(\epsilon_{i}^{2}) \cdot h\left(\{w\cdot\sum_{i=1}^{n}Av(\epsilon_{i}^{2})\}^{-1}\tau M\right)\right)\) when \(\epsilon_{i}\) is sub-\(\mathcal{U}\) for \(\forall i\)._ Proof.: The proof is almost identical to that of Theorem 1. Thus, we provide only a sketch. Let \(\tau>0\) and \(s>0\) be arbitrary. Then \(\Pr\left(\exp\{sS_{n}\}>\exp\{s\tau\}\right)\leq\prod_{i=1}^{n}\exp\{M^{-2} \mathrm{Av}(\epsilon_{i}^{2})\cdot(\exp\{swM\}-1-swM)\}\mathrm{exp}[-s\tau]= \exp\{M^{-2}\sum_{i=1}^{n}\mathrm{Av}(\epsilon_{i}^{2})\cdot(exp\{swM\}-1-swM) -s\tau\}\). These statements follow from A5 and Lemma 4. 
Now, we proceed by minimizing the last expression with respect to \(s\). It is easy to observe that the minimum is achieved at \(s=(wM)^{-1}log\left(\{w\cdot\sum_{i=1}^{n}\mathrm{Av}(\epsilon_{i}^{2})\}^{-1} \tau M+1\right)\). Algebraic rearrangement achieves one side of the bound. Parallel logic, as in Theorem 1, achieves the other. **Corollary 3**.: _(Extension of Bernstein's Inequality.) Suppose the same setup as Theorem 2. Then for an arbitrary \(\tau>0\):_ \[Pr(|S_{n}|>\tau)\leq 2exp\left(-[2w^{2}\sum_{i=1}^{n}Av(\epsilon_{i}^{2})+3^{-1}2w \tau M]^{-1}\tau^{2}\right)\] Proof.: We simply use Theorem 2 and then note that \(h(u)\geq(2+3^{-1}2u)^{-1}u^{2}\) for \(u\geq 0\). Algebraic rearrangement supplies the result. To use Theorem 2 and Corollary 3 for confidence sets, one only needs to note that \(\mathrm{Av}(\epsilon_{i}^{2})\leq 3^{-1}2R_{i}^{2}\) for sub-\(\mathcal{U}\) variables of the form \(\epsilon_{i}=Y_{i}-\mathrm{E}Y_{i}\), where \(R_{i}\) here is the range of \(Y_{i}\). One can then substitute these values into the previous results and use additional substitutions depending on the known features of the conditional \(Y_{i}\) or marginal \(Y\). For \(\mathcal{U}\) random variables, we already know that \(\mathrm{Av}(\epsilon_{i}^{2})=12^{-1}R_{i}^{2}\). **Theorem 3**.: _Let \(\epsilon_{1},\ldots,\epsilon_{n}\) be continuous, regular \(\mathcal{U}\) random variables defined on supports \(\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\) such that for an arbitrary \(i\in\{1,\ldots,n\}\), \(\text{max}(\mathcal{S}_{i})=M_{i}\) and \(E\epsilon_{i}=0\). Let \(\mathbf{w}\) be a vector of constants s.t. \(S_{n}=\mathbf{w}\epsilon=\sum_{i=1}^{n}w_{i}\epsilon_{i}\). Suppose A5 for some fixed \(n\) and let \(\tau>0\) be arbitrary. Then \(Pr(|S_{n}|>\tau)\leq 2exp\{-(\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2})^{-1}6\tau^{2}\}\)._ Proof.: The proof will largely be omitted since it also mirrors Theorem 1. Instead of using Lemma 3, we can use Lemma 1 for \(\mathcal{U}\) variables. 
This improves the bound on each \(\mathrm{Av}(\exp\{sw_{i}\epsilon_{i}\})\) by a factor of \(3^{-1}\) within each exponentiation. It also allows for each \(w_{i}\) to be negative. The rest of the proof follows the exact same steps. **Example 2**.: Now, say the objective is the estimation of \(\mathbf{\beta}\) w.r.t. \(\mathbf{Y}=\mathbf{x}\mathbf{\beta}+\mathbf{\epsilon}\). Suppose the mean model is well specified and \(\mathbf{\epsilon}\) is _any_ vector of regular and continuous \(\mathcal{U}\) random variables that are densely dependent, but A5 applies. Let \(\mathbf{B}_{s}=\mathbf{w}_{s}\mathbf{Y}\) be arbitrary. Then \([\mathbf{B}_{s}-\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}R_{i}^{2}}\cdot\sqrt{6^{-1}\text{log}(2/\alpha)},\ \mathbf{B}_{s}+\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}R_{i}^{2}}\cdot\sqrt{6^{-1}\text{log}(2/\alpha)}]\) is at least a \(1-\alpha\) confidence interval for \(\beta_{s}\). Note that if each \(Y_{i}\) is non-negative, since the range of each \(\epsilon_{i}\) equals the range of each \(Y_{i}\), \(R_{i}\leq 2\mathrm{E}Y_{i}\) and hence one can feasibly replace \(\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}R_{i}^{2}}\) with \(2\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}(\hat{\mathrm{E}}Y_{i})^{2}}\) when each \(R_{i}\) is unknown. This will still be accurate asymptotically insofar as \(\mu_{n}=o(n)\). One could also replace the former with \(R\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}}\), where \(R\) is the range of the marginal distribution of \(Y\), provided it is known. If \(Y\) is sub-\(\mathcal{U}\) marginally and \(R\) is unknown, one can also replace the former with \(2\hat{Y}\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}}\) for large enough \(n\). Lastly, we could also make use of the sample range of \(\hat{e}\). Although downwardly biased, it is still a reasonable choice. It is also consistent under mild regularity conditions. This topic is explored further in Section 6 and the supplementary materials.
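As a concrete illustration of Example 2, the following sketch (our own; the helper name `hoeffding_type_ci` is not from the paper) computes the interval \(\mathbf{B}_{s}\pm\sqrt{\sum_{i=1}^{n}w_{s,i}^{2}R_{i}^{2}}\cdot\sqrt{6^{-1}\text{log}(2/\alpha)}\) for a regression slope. The toy errors are independent uniforms, for which A5 holds trivially; under dense dependence the same formula applies so long as A5 does.

```python
import numpy as np

def hoeffding_type_ci(B_s, w_s, R, alpha=0.05):
    """Interval of Example 2: B_s +/- sqrt(sum w^2 R^2) * sqrt(log(2/alpha)/6)."""
    half_width = np.sqrt(np.sum(w_s**2 * R**2)) * np.sqrt(np.log(2 / alpha) / 6)
    return B_s - half_width, B_s + half_width

# Toy model: Y = 20 + 10 t + eps with uniform errors on [-5, 5] (so R_i = 10).
rng = np.random.default_rng(0)
n = 500
t = rng.normal(1, 1, n)
X = np.column_stack([np.ones(n), t])
Y = 20 + 10 * t + rng.uniform(-5, 5, n)
W = np.linalg.solve(X.T @ X, X.T)   # rows are the weight vectors w_s of B_s = w_s Y
beta_hat = W @ Y
lo, hi = hoeffding_type_ci(beta_hat[1], W[1], np.full(n, 10.0))
print(lo, hi)   # a conservative (at least 95%) interval for the slope
```

Note that the interval width depends on the design only through \(\sum_{i}w_{s,i}^{2}\), so it can be computed before any data are observed.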
## 5 A Quick Extension to Estimating Equations

In this section, we extend the results of the previous sections to estimating equations, hence expanding their utility to a larger class of estimators. Consider a random variable \(h(Y_{i},\mathbf{\theta})\) for some measurable function \(h\). Although \(h(Y_{i},\mathbf{\theta})\) will often also depend upon a set of fixed constants for \(\forall i\), this will be left implicit. The estimating equations of interest here will take the following form, although sometimes it will be relevant to standardize them by a different function of \(n\): \[Q(\theta)=n^{-1}\sum_{i=1}^{n}\mathrm{E}h(Y_{i},\mathbf{\theta}) \tag{3}\] \[Q_{n}(\theta)=n^{-1}\sum_{i=1}^{n}h(Y_{i},\mathbf{\theta}) \tag{4}\] Equations (3) and (4) together are a bedrock of statistical theory and practice and are well understood. To quicken developments, we will draw heavily upon the prior results of the authors mentioned in the introduction. The main contribution of this section is to demonstrate that the variance identities can be used to establish a uniform weak law of large numbers (UWLLN) for Eq. (4) under exceptionally unfavorable conditions and to show that the estimators resulting from this setup are conditionally or asymptotically additive. The latter fact thus allows for the use of the exponential inequalities of the previous section, even in the face of 'apocalyptic' dependence, provided the right error structure. Denote \(\mathbf{\beta}\in\Theta^{q}\) to be the target parameter of interest once again. If \(\mathbf{\beta}\) minimizes (maximizes) \(Q(\mathbf{\theta})\), then \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) represents a sequence of random variables that constitute a class of extremum estimators, or M-estimators. For instance, when \(h(Y_{i},\mathbf{\theta})\) is a log-likelihood, \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) is a (quasi) maximum likelihood estimator.
When \(Q(\mathbf{\beta})=0\), then \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) is instead a sequence of root estimators. All generalized method of moment estimators fall into this class [15]. This paper will focus on the latter in the spirit of Yuan and Jennrich [40], although some attention will be provided to the former as well. The reason for this is simple. The main objective is to (minimally) establish that the sequence of estimators \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) derived in concordance with Eq. (3) and Eq. (4) converges in probability to \(\mathbf{\beta}\). Theory associated with the minimization (maximization) of some \(Q(\mathbf{\theta})\) can accomplish this; however, since this path ultimately reduces to a consideration of \(\mathrm{E}\mathbf{\nabla}\mathbf{Q}(\mathbf{\theta})=\mathbf{0}\) as a moment condition, where \(\mathbf{\nabla}\) is the gradient, one can ultimately explore this case with greater generality. For clarity, \(Q\) will henceforth be used for extremum estimating equations, while \(U\) will be used for root equations. Only a couple of new adjustments will be made to the typical set of regularity conditions, partly for simplicity and intuition. The assumptions utilized are provided below:

1. \(\Theta^{q}\) is a compact Euclidean space
2. \(h(y_{i},\mathbf{\theta})\) is continuous in \(\mathbf{\theta}\) and is at least twice continuously differentiable w.r.t. \(\mathbf{\theta}\in\Theta\) for \(\forall y_{i}\)
3. \(|\partial^{2}h(y_{i},\mathbf{\theta})/\partial\mathbf{\theta}^{2}|\leq Z_{i}\) for some r.v. \(Z_{i}\) and \(\sup_{1\leq i\leq n}Z_{i}=O(1)\) for \(\forall i\)
4. \(\hat{\mathbf{\beta}}_{\mathbf{n}}\in\Theta\) and \(\mathbf{U}_{\mathbf{n}}(\hat{\mathbf{\beta}}_{\mathbf{n}})=\mathbf{0}\) for \(\forall n\)
5. \(\mathbf{\theta}\in\Theta\) and \(\mathbf{U}(\mathbf{\theta})=\mathbf{0}\iff\mathbf{\theta}=\mathbf{\beta}\)
6. Denote \(\mathcal{L}_{n}\) as a linear dependency graph s.t.
\(\zeta=\{h(Y_{1},\mathbf{\theta}),\ldots,h(Y_{n},\mathbf{\theta})\}\). Then \(\mu_{n}=o(n)\) for \(\forall\mathbf{\theta}\in\Theta\)
7. Let \(\mathbf{\theta}\in\Theta\) be arbitrary. Then \(\nabla\mathbf{U}(\mathbf{\theta})\) is non-singular and \(\nabla\mathbf{U}_{\mathbf{n}}(\mathbf{\theta})\) converges in probability to \(\nabla\mathbf{U}(\mathbf{\theta})\)
8. For all \(\mathbf{\theta}\in\Theta\) and \(\forall i\), \(\mathrm{Var}[h(Y_{i},\mathbf{\theta})]<\infty\)
9. \(Q(\mathbf{\beta})<\inf_{\mathbf{\theta}\in\Theta:\mathbf{\theta}\neq\mathbf{\beta}}Q(\mathbf{\theta})\)
10. For \(\hat{\mathbf{\beta}}_{n}\in\Theta\), \(Q_{n}(\hat{\mathbf{\beta}}_{\mathbf{n}})\leq\inf_{\mathbf{\theta}\in\Theta}Q_{n}(\mathbf{\theta})+o_{p}(1)\)

Note that, in addition to these conditions, there is an implicit assumption that all inverse matrices that are utilized exist. R8 will be useful for establishing uniform convergence in probability. Utilization of R8 is a stronger assumption than is usually required; however, it is not expensive. The usual assumption is that \(\mathrm{E}\,\underset{\mathbf{\theta}\in\Theta}{\text{sup}}|h(Y_{i},\mathbf{\theta})|<\infty\) for \(\forall i\) (R8'). In the usual maneuver, R8' is employed in conjunction with R1, R2, and the assumption that a WLLN exists for the estimating equation to establish stochastic equicontinuity and thereby a UWLLN for Eq. (4). Outside of artificial pathological examples, however, such as when it is false that the variance of Eq. (4) tends to zero, this logic feasibly implies that R8 is also likely to hold. Regardless, R8 simply restricts \(\Theta\) to a space of 'reasonable' values. R8 is also trivially true for well-behaved functions and bounded random variables, which is precisely our universe of concern. Regardless, R8 is not strictly necessary. All results can also be achieved with R8'. As aforementioned, the most important contribution in this section arrives via R6 (or analogous statements).
Although it is a slightly higher moment condition, it provides more intuition in practice. Additionally, it makes the proofs quick and easier to follow for non-specialists. The next lemma presented establishes that a UWLLN can be obtained even when the mean dependency diverges as a function of \(n\). The proof is straightforward and similar to Proposition 5. **Lemma 5**.: _Let \(Z_{i}=h(Y_{i},\mathbf{\theta})\) be any random variable for some measurable function \(h\) s.t. R8 holds. Denote \(\mu_{n}\) as the mean degree of the linear dependency graph \(\mathcal{L}_{n}\) for \(\zeta=\{Z_{1},\ldots,Z_{n}\}\), say \(\bar{Z}(\mathbf{\theta})=n^{-1}\sum_{i=1}^{n}Z_{i}\), and denote \(W=\underset{\mathbf{\theta}\in\Theta}{\text{sup}}|\bar{Z}(\mathbf{\theta})-E\bar{Z}(\mathbf{\theta})|\). If R6 also holds, then \(\bar{Z}(\mathbf{\theta})\) converges uniformly in probability to \(E\bar{Z}(\mathbf{\theta})\)._ Proof.: Let \(\mathbf{\theta}_{*}\) denote the \(\mathbf{\theta}\in\Theta\) corresponding to \(W=\underset{\mathbf{\theta}\in\Theta}{\text{sup}}|\bar{Z}(\mathbf{\theta})-\mathrm{E}\bar{Z}(\mathbf{\theta})|\). Hence, \(W=|\bar{Z}(\mathbf{\theta}_{*})-\mathrm{E}\bar{Z}(\mathbf{\theta}_{*})|\). Also note by assumption that \(\mathrm{Var}(Z_{i})\) is finite for \(\forall i\) and that \(\mu_{n}=o(n)\). Let \(\epsilon>0\) be arbitrary. Then: \[\Pr(W>\epsilon)=Pr(W^{2}>\epsilon^{2})\leq\epsilon^{-2}\cdot\mathrm{E}[\bar{Z}(\theta_{*})-\mathrm{E}\bar{Z}(\theta_{*})]^{2}\implies\] \[\Pr(W>\epsilon)\leq\epsilon^{-2}\cdot Var\left(\bar{Z}(\theta_{*})\right)\leq\epsilon^{-2}\cdot(1+\mu_{n}\phi_{n})\cdot n^{-2}\sum_{i=1}^{n}\mathrm{Var}(Z_{i})\implies\] \[\lim_{n\to\infty}\Pr(W>\epsilon)\leq\lim_{n\to\infty}\epsilon^{-2}\cdot\sup_{1\leq i\leq n}\mathrm{Var}(Z_{i})\cdot(1+\mu_{n}\phi_{n})\cdot n^{-1}=0\] Hence, \(\sup_{\theta\in\Theta}\lvert\bar{Z}(\theta)-\mathrm{E}\bar{Z}(\theta)\rvert\) converges in probability to zero.
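The closing display of the proof is easy to evaluate numerically. Here is a small sketch (our own illustration, with hypothetical parameter values) of the bound \(\epsilon^{-2}\cdot(1+\mu_{n}\phi_{n})\cdot n^{-2}\sum_{i=1}^{n}\mathrm{Var}(Z_{i})\) when the mean dependency grows like \(\sqrt{n}\), which is still \(o(n)\):

```python
import math

def chebyshev_bound(n, mu_n, phi_n, var, eps):
    """Bound from the proof of Lemma 5:
    Pr(W > eps) <= eps^-2 * (1 + mu_n * phi_n) * n^-2 * (n * var)."""
    return (1 + mu_n * phi_n) * n * var / (n**2 * eps**2)

# Mean dependency mu_n growing like sqrt(n) is o(n), so the bound still vanishes.
for n in (10**2, 10**4, 10**6):
    print(n, chebyshev_bound(n, mu_n=math.isqrt(n), phi_n=0.5, var=1.0, eps=0.1))
```

With these toy values the bound falls from roughly 6 at \(n=100\) to about 0.05 at \(n=10^{6}\), despite the diverging mean degree.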
Xie and Yang [38] explored a set of exact conditions for weak and strong convergence of generalized estimating equations under the assumption that the weighted sums of clustered observations were uncorrelated with those of other clusters. Again say that \(n=\sum_{k=1}^{K}n_{k}\) for \(K\) unique clusters. They did so under three settings: 1) \(K\to\infty\) and \(n_{M}\) is bounded for all \(K\), 2) \(K\) is bounded but \(n_{m}\to\infty\), and 3) \(n_{m}\to\infty\) as \(K\to\infty\). An extension to Lemma 5 can be used to cover these settings. Similar to before, say \(\mathcal{L}_{n_{k}}\) refers to the linear dependency graph for random variables within cluster \(\zeta_{k}\) and \(\mathcal{L}_{T_{k}}\) refers to the linear dependency graph of the set of random variables \(\{T_{1},\ldots,T_{K}\}\), where now \(T_{k}=\sum_{j=1}^{n_{k}}h(Y_{j},\mathbf{\theta})\). From here, again let \(\mu_{n_{k}}\) signify the mean degree of \(\mathcal{L}_{n_{k}}\) and \(\mu_{K}\) the mean degree of \(\mathcal{L}_{T_{k}}\). **Lemma 6**.: _Suppose R1 and R8 and consider \(\bar{Z}(\mathbf{\theta})=K^{-1}\sum_{k=1}^{K}T_{k}\). Alternatively, for \(n_{M}=\underset{k\in\mathcal{K}}{\text{max}}(n_{k})\), define \(\bar{Z}_{M}(\mathbf{\theta})=n_{M}{}^{-1}\sum_{k=1}^{K}T_{k}\) and \(n_{m}=\underset{k\in\mathcal{K}}{\text{min}}(n_{k})\) as before. Then:_ 1. _Provided_ \(K(n)\to\infty\) _as_ \(n\to\infty\) _and_ \(n_{M}\) _is bounded for all_ \(K\)_, if_ \(\mu_{K}=o(K)\)_, then_ \(\bar{Z}(\mathbf{\theta})\overset{p}{\to}E\bar{Z}(\mathbf{\theta})\) _uniformly_ 2. _If_ \(K(n)\) _is bounded but_ \(n_{m}\to\infty\)_, and if_ \(\mu_{n_{k}}=o(n_{k})\) _for_ \(\forall k\)_, then_ \(\bar{Z}_{M}(\mathbf{\theta})\overset{p}{\to}E\bar{Z}_{M}(\mathbf{\theta})\) _uniformly_ 3. 
_If_ \(n_{m}\to\infty\) _as_ \(K(n)\to\infty\)_, then_ \(\bar{Z}_{M}(\mathbf{\theta})\overset{p}{\to}E\bar{Z}_{M}(\mathbf{\theta})\) _uniformly if_ \(\mu_{K}=O(1)\)_,_ \(K=o(n_{M})\)_, and_ \(K\cdot\mu_{n_{k}}=o(n_{M})\) _for_ \(\forall k\)_. Alternatively, the same result holds for_ \(Z(\mathbf{\theta})\) _if_ \(\mu_{n_{k}}=O(1)\) _for_ \(\forall k\)_,_ \(n_{M}=o(K)\)_, and_ \(n_{M}\cdot\mu_{K}=o(K)\)__ Proof.: The first case is very similar to that of Lemma 5, except now the cluster structure of each \(T_{k}\) is being considered. We will only prove the first two statements since the proof is largely repetitive. **Case 1:** Suppose \(K\to\infty\) and \(n_{M}\) is bounded for all \(K\) as \(n\to\infty\). Call this bounding constant \(M_{n}\). Furthermore, suppose \(\mu_{K}=o(K)\). Since each \(T_{k}\) is a finite linear combination of correlated random variables with finite variance, \(\mathrm{Var}(T_{k})<\infty\) for \(\forall k\). This can be seen by observing that \(\mathrm{Var}(T_{k})\leq n_{k}\sum_{j=1}^{n_{k}}\mathrm{Var}[h(Y_{j},\theta)] \leq M_{n}^{2}\sup_{1\leq j\leq n_{k}}\mathrm{Var}[h(Y_{j},\theta)]<\infty\) under the premises. Hence, by Lemma 5, \(\bar{Z}(\mathbf{\theta})\) obeys a UWLLN. **Case 2:** Suppose \(K(n)\) is bounded but \(n_{m}\to\infty\). Denote \(M_{K}\) and \(C_{*}\) as the asymptotic bounds of \(K(n)\) and \(\phi_{n_{k}}\) for \(\forall k\) respectively. Furthermore, suppose \(\mu_{n_{k}}=o(n_{M})\) for \(\forall k\). Now, let \(W=\underset{\theta\in\Theta}{\text{sup}}\lvert\bar{Z}_{M}(\mathbf{\theta})-\mathrm{E} \bar{Z}_{M}(\mathbf{\theta})\rvert=\lvert\bar{Z}_{M}(\mathbf{\theta}_{*})-\mathrm{E} \bar{Z}_{M}(\mathbf{\theta}_{*})\rvert\) for an arbitrary \(\epsilon>0\). Additionally, denote \(\sigma^{2}=\underset{1\leq u\leq n}{\max}\mathrm{Var}[h(Y_{u},\theta_{*})]\), \(\mu_{*}=\underset{k\in\mathcal{K}}{\text{max}}(\mu_{n_{k}})\) WLOG, and \(\phi_{*}\leq C_{*}\). 
Then \(\Pr\left(W>\epsilon\right)\leq\epsilon^{-2}\mathrm{Var}[\bar{Z}_{M}(\mathbf{\theta}_{*})]\leq\epsilon^{-2}K\sum_{k=1}^{K}n_{M}^{-2}\mathrm{Var}(T_{k,\mathbf{\theta}_{*}})\). And: \[\epsilon^{-2}K\sum_{k=1}^{K}n_{M}^{-2}\mathrm{Var}(T_{k,\mathbf{\theta}_{*}})\leq\epsilon^{-2}K\sum_{k=1}^{K}n_{M}^{-2}(1+\mu_{n_{k}}\phi_{n_{k}})\sum_{j=1}^{n_{k}}\mathrm{Var}[h(Y_{k,j},\theta_{*})]\leq\epsilon^{-2}K^{2}n_{M}^{-1}(1+\mu_{*}\phi_{*})\sigma^{2}\implies\] \[\lim_{n\to\infty}\Pr\left(W>\epsilon\right)\leq\epsilon^{-2}M_{K}^{2}\sigma^{2}\lim_{n\to\infty}n_{M}^{-1}(1+\mu_{*}C_{*})=0\] We note that Lemma 5 can be used with known proof strategies to demonstrate that \(\hat{\mathbf{\beta}}_{\mathbf{n}}\overset{p}{\to}\mathbf{\beta}\) for M-estimators when it is possible for \(\mu_{n}(n)\to\infty\), and only under the auspices that \(\mu_{n}(n)\) grows at a rate that is sufficiently sub-linear [15]. Now, Lemma 6 is used to demonstrate that \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) is consistent and asymptotically additive. For this matter, recall that \(\mathbf{U_{n}}(\mathbf{\theta})=n^{-1}\sum_{i=1}^{n}\mathbf{h}(\mathbf{Y_{i}},\mathbf{\theta})\) is now a \(q\times 1\) vector and \(\{\nabla\mathbf{U_{n}}(\mathbf{\theta})\}^{-1}\) is a \(q\times q\) matrix. **Proposition 9**.: _Suppose R1-R8 in conjunction with \(\mathbf{U_{n}}(\mathbf{\theta})\) and also that \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) satisfies R4. Additionally, suppose any of the three scenarios stated in Lemma 6. Then \(\hat{\mathbf{\beta}}_{\mathbf{n}}\stackrel{{ p}}{{\rightarrow}}\mathbf{\beta}\) and \(\sqrt{n}(\hat{\mathbf{\beta}}_{\mathbf{n}}-\mathbf{\beta})\) converges in distribution to a vector of additive statistics as \(n\rightarrow\infty\)._ Proof.: Recall that the premises of Lemma 6 make use of two different equations. One was standardized by \(n_{M}\) and one was standardized by \(K\) w.r.t.
\(\mathbf{U_{n}}(\mathbf{\theta})=n^{-1}\sum_{k=1}^{K}\mathbf{T_{k}}=\sum_{k=1}^{K}n^{-1}\sum_{j=1}^{n_{k}}\mathbf{h}(\mathbf{Y_{k,j}},\mathbf{\theta})\). Denote these re-expressions \(\mathbf{U_{M}}(\mathbf{\theta})=n_{M}^{-1}n\mathbf{U_{n}}(\mathbf{\theta})\) and \(\mathbf{U_{K}}(\mathbf{\theta})=K^{-1}n\mathbf{U_{n}}(\mathbf{\theta})\) respectively. To establish that \(\mathbf{U_{K}}(\mathbf{\theta})\) and \(\mathbf{U_{M}}(\mathbf{\theta})\) follow a UWLLN under the relevant conditions of Lemma 6, one simply needs to specify an arbitrary component \(s\) of either random vector. Since this \(s\)th component is of the form covered by Lemma 6, under the auspices of R1 and R8, it follows that the \(s\)th component converges uniformly in probability to its expectation. Therefore, since \(s\) was arbitrary, \(\mathbf{U_{K}}(\mathbf{\theta})\) converges uniformly in probability to \(\mathbf{EU_{K}}(\mathbf{\theta})\) and \(\mathbf{U_{M}}(\mathbf{\theta})\) does the same to \(\mathbf{EU_{M}}(\mathbf{\theta})\). Thus, under Theorem 3 of Yuan and Jennrich [40] and R1, R2, R4, and R5 it is then implied that \(\hat{\mathbf{\beta}}_{\mathbf{n}}\stackrel{{ p}}{{\rightarrow}}\mathbf{\beta}\). Since \(\{\nabla\mathbf{U_{M}}(\mathbf{\theta})\}^{-1}\mathbf{U_{M}}(\mathbf{\theta})=\{\nabla\mathbf{U_{K}}(\mathbf{\theta})\}^{-1}\mathbf{U_{K}}(\mathbf{\theta})=\{\nabla\mathbf{U_{n}}(\mathbf{\theta})\}^{-1}\mathbf{U_{n}}(\mathbf{\theta})\) for all \(\mathbf{\theta}\in\Theta\), it is possible to proceed WLOG utilizing only \(\mathbf{U_{n}}(\mathbf{\theta})\). Now, note that under R1, R2, R3, R7, and the continuous mapping theorem, \(\{\nabla\mathbf{U_{n}}(\mathbf{\theta})\}^{-1}\) converges uniformly in probability to \(\{\nabla\mathbf{U}(\mathbf{\theta})\}^{-1}\) for \(\forall\mathbf{\theta}\in\Theta\).
Also, for some \(\tilde{\mathbf{\beta}}\in\Theta\) that is between \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) and \(\mathbf{\beta}\) and an arbitrary \(q\times 1\) vector of constants \(\lambda\): \[\sqrt{n}\cdot\lambda^{\top}\mathbf{U_{n}}(\mathbf{\beta})=-\sqrt{n}\cdot\lambda^{ \top}\{\nabla\mathbf{U_{n}}(\tilde{\mathbf{\beta}})\}(\hat{\mathbf{\beta}}_{\mathbf{n }}-\mathbf{\beta})\] Therefore, since \(\hat{\mathbf{\beta}}_{\mathbf{n}}\stackrel{{ p}}{{\rightarrow}}\mathbf{\beta}\), it is also true under our conditions that \(\sqrt{n}\cdot\lambda^{\top}\mathbf{U_{n}}(\mathbf{\beta})\stackrel{{ p}}{{ \rightarrow}}-\sqrt{n}\cdot\lambda^{\top}\{\nabla\mathbf{U}(\mathbf{\beta})\}( \hat{\mathbf{\beta}}_{\mathbf{n}}-\mathbf{\beta})\), which implies that \(\sqrt{n}(\hat{\mathbf{\beta}}_{\mathbf{n}}-\mathbf{\beta})\stackrel{{ d}}{{ \rightarrow}}-\sqrt{n}\{\nabla\mathbf{U}(\mathbf{\beta})\}^{-1}\mathbf{U_{n}}(\mathbf{ \beta})\) by the Cramer-Wold device [12, 15, 40]. This completes the proof since \(\sqrt{n}\{\nabla\mathbf{U}(\mathbf{\beta})\}^{-1}\mathbf{U_{n}}(\mathbf{\beta})\) is a vector of random sums. It is important to note that \(\sqrt{n}(\hat{\mathbf{\beta}}_{\mathbf{n}}-\mathbf{\beta})\) will have finite variance without additional stabilization only if the maximum mean degree of linear dependency associated with the score functions is \(O(1)\). We provide two general examples to contextualize these results. **Example 3**.: **(Quasi-MLEs.)** _In this example, standard generalized linear models (GLMs) will be considered for dependent data. Recall that it has often been stated that GLMs are inappropriate when the response variables are dependent._ Say \(Q_{n}=n^{-1}\sum_{i=1}^{n}ln\{f(Y_{i},\mathbf{\theta})\}\) s.t. \(Y_{i}\) is a member of an exponential family with distribution function \(f(y_{i},\mathbf{\theta})\) for \(\forall i\). 
Also, say \(\mathrm{EY}_{i}=\mu_{i}=g(\mathbf{x_{i}}\mathbf{\beta})\) for a differentiable, monotonic canonical link function \(g\) and conformable vector of constants \(\mathbf{x_{i}}\). For \(\mathbf{Y}\in\mathbb{R}^{n\times 1}\), \(\mu\in\mathbb{R}^{n\times 1}\), and \(\mathbf{x}\in\mathbb{R}^{n\times q}\), denote \(\mathbf{d}=\partial\mathbf{\mu}/\partial\,\mathbf{\beta}\) and \(\mathbf{W}\) as the diagonal \(n\times n\) matrix s.t. \(\mathbf{W}_{i,i}=\phi\partial\mu_{i}/\partial\mathbf{x_{i}}\mathbf{\beta}\) for dispersion constant \(\phi\). Then \(\nabla\mathbf{Q_{n}}=\mathbf{U_{n}}=n^{-1}\mathbf{d}^{\top}\mathbf{W^{-1}}(\mathbf{Y}-\mathbf{\mu})\). Lastly, define a \(\mathcal{L}\) graph for \(\{\epsilon_{i}\}_{i\in\zeta}\). From the previous results, it is then implied that \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta})\stackrel{{ d}}{{\rightarrow}}\sqrt{n}\cdot(\mathbf{d}^{\top}\mathbf{W^{-1}}\mathbf{d})^{-1}\mathbf{d}^{\top}\mathbf{W^{-1}}\mathbf{\epsilon}=\sqrt{n}\cdot\mathbf{w}\mathbf{\epsilon}\), say. Then \(\mathrm{Var}\{\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta})\}=n\cdot\mathbf{w}\mathbf{V}\mathbf{w}^{\top}\{\mathbf{1}^{p\times p}+\mu_{n}\mathbf{\Gamma}\}\) asymptotically, where \(\{\mathbf{1}^{p\times p}+\mu_{n}\mathbf{\Gamma}\}\) has the same definition as in Section 2, and where \(\mathbf{V}\) is a diagonal matrix of variances. If \(\mu_{n}=O(1)\) and the conditions of Section 4 hold, then an exponential inequality can be used for constructing confidence sets. Otherwise, one needs to additionally use a proper choice of \(\{1+\mu_{n}\phi_{n,s}\}^{-1/2}\) to stabilize the variance of \(\hat{\beta}_{s}\) and to ensure that all relevant objects exist. Of course, if it is believed that a central limit theorem holds, a deterministic correction can also be employed with a Wald-like statistic.
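As a sanity check on the additive representation above, consider the simplest special case (our own toy, not from the paper): the identity link with constant \(\mathbf{W}\), where the weights \((\mathbf{d}^{\top}\mathbf{W^{-1}}\mathbf{d})^{-1}\mathbf{d}^{\top}\mathbf{W^{-1}}\) reduce to \((\mathbf{x}^{\top}\mathbf{x})^{-1}\mathbf{x}^{\top}\) and the identity \(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}=\mathbf{w}\mathbf{\epsilon}\) is exact rather than asymptotic, whatever the dependence among the errors.

```python
import numpy as np

# Identity link with constant W: the additive weights reduce to (x'x)^-1 x'.
rng = np.random.default_rng(1)
n = 200
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([20.0, 10.0])
z = rng.uniform(-1, 1, n)
eps = 0.5 * z + 0.5 * z.mean()       # every error shares the common term z.mean()
Y = x @ beta + eps
w = np.linalg.solve(x.T @ x, x.T)    # additive weights, shape (q, n)
beta_hat = w @ Y
# The additive representation holds exactly here, dependence notwithstanding:
assert np.allclose(beta_hat - beta, w @ eps)
```

In the general GLM case the weights depend on \(\mathbf{\beta}\), so the representation holds only in the limit, as stated in Example 3.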
**Example 4**.: **(Iteratively re-weighted least squares.)** _Since the variance identity and results of Section 4 can be applied to additive statistics with the necessary error structures, it is applicable to estimators calculated via iteratively re-weighted least squares (IRWLS). This extends their utility to a vast number of non-linear contexts. For instance, it then applies to the (weighted) minimization of \(Q_{n}(\mathbf{\theta})=(pn)^{-1}\sum_{i=1}^{n}\{Y_{i}-g(\mathbf{x_{i}}\mathbf{\theta})\}^{p}\) for some measurable function \(g\) and \(p>0\), insofar as IRWLS is used as a fitting procedure. Here, the special case will be considered s.t. \(p=2\) under a \(K\) cluster partition. This case will also use the notation from the previous example. It is apropos to note that the pragmatic form of quasi-MLEs is a special case of this setup when IRWLS is used as an approximate fitting algorithm._ IRWLS _is well studied and has been found to have good, reliable qualities. Under the requirement that \(\sum_{k=1}^{K}\mathbf{d}_{\mathbf{k},\mathbf{t}}^{\top}\mathbf{W^{-1}}\mathbf{d}_{\mathbf{k},\mathbf{t}}\) is positive definite or the assumptions previously stated hold, the IRWLS estimator \(\hat{\mathbf{\beta}}_{t}\) can approximate \(\hat{\mathbf{\beta}}_{\mathbf{n}}\) to an almost arbitrary precision (Yuan and Jennrich [40])._ To briefly show this, let \(\tau\equiv 0\) as a theoretical exercise.
Then, only imposing the condition that \(\mathbf{W}_{\mathbf{k},\mathbf{t}}\) be positive definite: \(\hat{\mathbf{\beta}}_{t}-\hat{\mathbf{\beta}}_{t-1}=(\sum_{k=1}^{K}\mathbf{d}_{\mathbf{k},\mathbf{t}}^{\top}\mathbf{W}_{\mathbf{k},\mathbf{t}}^{-1}\mathbf{d}_{\mathbf{k},\mathbf{t}})^{-1}\sum_{k=1}^{K}\mathbf{d}_{\mathbf{k},\mathbf{t}}^{\top}\mathbf{W}_{\mathbf{k},\mathbf{t}}^{-1}(\mathbf{Y}_{\mathbf{k}}-\mathbf{\mu}_{\mathbf{k},\mathbf{t}})\equiv\mathbf{0}\), which implies that \(K^{-1}\sum_{k=1}^{K}\mathbf{d}_{\mathbf{k},\mathbf{t}}^{\top}\mathbf{W}_{\mathbf{k},\mathbf{t}}^{-1}(\mathbf{Y}_{\mathbf{k}}-\mathbf{\mu}_{\mathbf{k},\mathbf{t}})=\mathbf{U}_{\mathbf{K}}(\hat{\mathbf{\beta}}_{t})\equiv\mathbf{U}_{\mathbf{K}}(\hat{\mathbf{\beta}}_{t-1})\equiv\mathbf{0}\). Under the assumptions of Proposition 9, the IRWLS estimator converges in distribution to the correct target and is asymptotically additive. Hence, once again, all previous results apply for large enough \(n\), provided \(\mu_{n}=O(1)\) or proper stabilization is employed, and the other relevant conditions of Section 4 hold. However, if one is willing to condition on the sigma-algebra of events generated by some \(\hat{\mathbf{\beta}}_{t-1}\) following the occurrence of the event that \(d(\hat{\mathbf{\beta}}_{t-1},\hat{\mathbf{\beta}}_{t-2})<\tau\) for a sufficiently small \(\tau>0\), and for a long enough run, then \(\hat{\mathbf{\beta}}_{t}\) is an additive estimator for _all_ finite samples. Therefore, the variance identity and exponential inequalities of Section 4 can be used for conducting cogent inference with functional approximations that minimize error in \(\ell_{p}\) space. Importantly, this can again be accomplished when unknown, intractable, and possibly non-sparse dependency structures are present.

## 6 Simulations and a Data Application

This section offers two simulation experiments under unfavorable dependence conditions: one for \(\bar{Y}\) and one for \(\hat{\mathbf{\beta}}\).
Both use symmetric \(\mathcal{U}\) variables. An approximation to gauge the robustness of A5 will help since, in general, the exact form of \(\mathrm{E}(\exp\{s\mathbf{w}\mathbf{\epsilon}\})\) is intractable. To this end, we can use the fact that \(\mathrm{E}(\exp\{s\mathbf{w}\mathbf{\epsilon}\})\approx 1+2^{-1}s^{2}\{1+\mu_{n}\phi_{n}\}\sum_{i=1}^{n}w_{i}^{2}\sigma_{i}^{2}\) by Taylor approximation and that the latter expression is bounded by \(\exp\{2^{-1}s^{2}(1+\mu_{n}\phi_{n})\sum_{i=1}^{n}w_{i}^{2}\sigma_{i}^{2}\}\). For our \(\mathcal{U}\) variables, it is then implied that \(\exp\{2^{-1}s^{2}(1+\mu_{n}\phi_{n})\sum_{i=1}^{n}w_{i}^{2}\sigma_{i}^{2}\}\leq\exp\{24^{-1}s^{2}\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2}\}\), at least as an approximate rule of thumb for A5. Although the value on the right side of the inequality bounds the functional average specified in A5, it does so tightly. For instance, if \(w_{i}=n^{-1}\) for \(\forall i\), each error variable has the same upper bound for its support, and \(s=C\sqrt{n}\) for some positive constant \(C\), \(\mathrm{Av}_{*}(\exp\{s\mathbf{w}\mathbf{\epsilon}\})\rightarrow\exp[24^{-1}C^{2}R^{2}]\) quickly. Regardless, further algebraic manipulation of this setup implies the following rough bound on the summary values of the dependency structure: \(\mu_{n}\phi_{n}\leq 12^{-1}\{\sum_{i=1}^{n}w_{i}^{2}\sigma_{i}^{2}\}^{-1}\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2}-1\). Homogeneity of variances and ranges results in a simpler weight-invariant bound of \(12^{-1}\sigma^{-2}R^{2}-1\). For simplicity, all ranges and variances will be equal. Insofar as this approximate bound holds, A5 should approximately hold as well and inference should remain robust, even with mild violations. All experiments use \(N=10,000\) simulations.
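The weight-invariant bound can be evaluated exactly. The sketch below (the function name is ours) computes \(12^{-1}\sigma^{-2}R^{2}-1\) with rational arithmetic and specializes it to \(Beta(\alpha,\alpha)\) variables on \([0,1]\), whose variance is \(\{4(2\alpha+1)\}^{-1}\), so the budget reduces to \(3^{-1}(2\alpha+1)-1\).

```python
from fractions import Fraction

def dependence_budget(sigma2, R):
    """Rule-of-thumb bound on mu_n * phi_n under homogeneous variances/ranges:
    mu_n * phi_n <= R^2 / (12 sigma^2) - 1."""
    return Fraction(R**2, 12) / Fraction(sigma2) - 1

# Beta(a, a) on [0, 1]: sigma^2 = 1 / (4(2a + 1)) and R = 1.
a = 10
sigma2 = Fraction(1, 4 * (2 * a + 1))
budget = dependence_budget(sigma2, 1)
print(budget)                           # 6 for a = 10
for n in (100, 500, 1500):
    print(n, float(budget / (n - 1)))   # breakdown phi_n when mu_n = n - 1
```

For \(\alpha=10\) the budget is 6, and dividing by \(\mu_{n}=n-1\) reproduces the approximate breakdown values of \(\phi_{n}\) (.06, .01, and .004 for \(n=100,500,1500\)) used in the simulations.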
To get a more accurate assessment of robustness to violations of A5, however, we also compare \(\hat{A}=\max(N^{-1}\sum_{i=1}^{N}\exp\{s\mathbf{w}\mathbf{\epsilon}_{i}\},N^{-1}\sum_{i=1}^{N}\exp\{-s\mathbf{w}\mathbf{\epsilon}_{i}\})\) to \(\mathrm{Av}_{*}=\prod_{i=1}^{n}\{sw_{i}M_{i}\}^{-1}\mathrm{sinh}(sw_{i}M_{i})\), which is an exact form of \(\mathrm{Av}_{*}(\exp\{s\mathbf{w}\mathbf{\epsilon}\})\) for \(\mathcal{U}\) observations of our type. Investigating all arbitrary values of \(s\) isn't possible or necessary. Instead, we set \(s=\{M^{2}\cdot c_{*}\cdot\sum_{i=1}^{n}w_{i}^{2}\}^{-1/2}6\cdot\{6^{-1}\log(2/\alpha)\}^{1/2}\) for a context-dependent value of \(c_{*}\) that limits the size of the exponential and average values for readability. This value of \(s\) also corresponds to the optimal value of Theorem 3 with homogeneous ranges and \(\tau\) set to the required expression for conservative, two-sided \(1-\alpha\) confidence sets. Additionally, we employ a value of \(s\) that is \(O(\sqrt{n})\) since this order better corresponds to the rule of thumb. Since we will be evaluating 95% confidence sets, \(\alpha=.05\) for all experiments.

**Setup.** To test 'apocalyptic' scenarios, each simulation employs fully connected linear dependency graphs. In other words, we set \(\mu_{n}=n-1\) for \(n\in\{100,500,1500\}\). For the \(\bar{Y}\) setup, \(Y_{i}\sim Beta(\alpha,\alpha)\). This translates our rule of thumb to \(3^{-1}(2\alpha+1)-1\). The first set of simulations uses \(\alpha=10\) universally. According to the rule of thumb, then, confidence sets should start to lose their nominal values when \(\phi_{n}=.06,.01\), and \(.004\) respectively. For comparison, we establish a baseline at \(\phi=0\) and make use of values that neighbor these breakdown points. In many unfavorable settings, we would still not expect \(\mu_{n}\) to always attain its maximum or for \(\phi_{n}\) to take moderate values when it does.
Hence, this series of simulations is built to demonstrate the robustness of the results for finite sample inference in practical settings since decent performance under these conditions suggests an adequate level of dependability in more modest ones. A second experiment fixes \(\phi\) to \(.1\) for \(n=500\) observations and varies only \(\alpha\) for values in \(\{10,25,50,100\}\) to further examine the relationship between the range-variance ratio and robustness to statistical dependence. For all of these simulations, \(c_{*}=10\). For the \(\mathcal{U}\) regression setup, each \(\epsilon_{i}\sim TruncNorm(M=20,\mu=0,\sigma^{2}=25)\). The linear model is characterized by \(Y_{i}=20+10\cdot t_{i}+\epsilon_{i}\), where each \(t_{i}\) is a fixed draw from \(T_{i}\sim TruncNorm(m=-5,M=5,\mu=1,\sigma^{2}=1)\). Here, \(c_{*}=\sigma=5\) for control of exponential size. Dependencies in the outcome variables are induced with Gaussian copulas. Literature on this method is available elsewhere [10; 11]. \(\mathbf{R}\) version 4.2.2 statistical software and the package 'copula' are employed for all experiments [32; 39]. The Beta distribution simulations make use of an exchangeable correlation matrix with off-diagonal cells populated by \(\phi_{n}\). The strategy for the regression simulations is more complicated. Essentially, we specify an unstructured correlation matrix and set its non-diagonal values to the corresponding elements of \(25^{-1}\phi_{*}n^{2}\cdot\mathbf{w}_{1}\mathbf{w}_{1}^{\top}\) for \(\phi_{*}\in\{0,.05,.1,.15\}\). The first row of weights is used as a basis since the product of its values will often match the valence of the weight products in the sum of covariances. This will induce a dense mosaic of positive summands and inflate the variance. As usual, empirical coverage is estimated by \(N^{-1}\sum_{i=1}^{N}1_{\theta\in\mathcal{C}}\), where \(\mathcal{C}\) is the constructed confidence set.
Three coverage values are ultimately estimated (\(\hat{\mathrm{CI}}_{Wald}\), \(\hat{\mathrm{CI}}_{\mathcal{U}}\), and \(\hat{\mathrm{CI}}_{\hat{R}}\)), although the third is estimated for regression simulations only. The estimated Wald confidence sets make use of cluster-robust standard errors for the Beta simulations and robust standard errors in the style of generalized estimating equations with an exchangeable correlation structure for the regression cases [17]. They suppose asymptotic normality. To mimic the specification of a partially correct but invalid partition, the outcome variables are assigned to \(10^{-1}n\) clusters sequentially. The \(\hat{\mathrm{CI}}_{\mathcal{U}}\) estimator corresponds to the confidence sets that are constructed in accordance with Theorem 3. Importantly, these sets treat \(M\) as known. This is not the case for the confidence sets targeted by \(\hat{\mathrm{CI}}_{\hat{R}}\). Here, \(M\) is treated as unknown. The sets are still constructed in accordance with Theorem 3. However, the sample range of the residuals is used in place of \(2M\). This method is explored to gauge the robustness of the plug-in strategy, which will often be required in practice. Finally, the average lower and upper endpoints of the \(\mathcal{U}\) constructed confidence sets for the case that \(M\) is known are also provided for reference. The results of these simulations are available in **Table 1**, **Table 2**, and **Table 3**. For each table, A5 is estimated to hold when \(\hat{A}\leq\mathrm{Av}_{*}\) and is estimated to be violated when '\(>\)' is shown. **Table 1** shows that the coverage of the confidence sets resulting from A5 and Theorem 3 starts to dissipate in quality around the predicted points, and also when A5 ceases to be true. **Table 2** provides evidence that, even if A5 does not hold exactly, the robustness of the confidence sets that result from its employment is a function of the extrema of each \(\mathcal{S}_{i}\).
In general, random variables with higher absolute extremes allow for more dense dependencies to exist without undermining inference: a fact that has been apparent since at least Hoeffding's lemma. Notably, **Table 3** substantiates the utility of the plug-in strategy. Although, as expected, it is not as robust as when the upper extreme of the support is known, the simulation evidence shows that the confidence sets that use the sample range maintain their semi-conservative nominal coverage value while A5 holds. Each table also shows that the standard methods employed absolutely fail to uphold nominal coverage. Overall, the key point supported by these experiments is that A5 is a feasible condition that allows for cogent finite sample inference in many important settings, and even when every outcome variable is statistically dependent. Additive statistics--from linear models and from estimating equations--are a critical tool for scientific discovery. Our job here was to demonstrate that they remain dependable enough for cogent finite sample inference in complicated modern settings. This has been accomplished, at least in some dimension. ### Carbon Dioxide and Global Warming In this section, we estimate the association between global temperature change and carbon dioxide levels between the years of 1979 and 2022. Monthly averages for global carbon dioxide levels (CO\({}^{2}\); ppm) and global temperature anomalies (Temp; \({}^{\circ}\)C) were acquired from the Global Monitoring Laboratory and the Goddard Institute for Space Studies respectively [23; 35]. More information pertaining to the latter source and the methods utilized for measurement are obtainable elsewhere [25]. 
**Table 3: Regression Simulations.** For \(\beta_{0}=20\):

| \(n\) | \(\phi_{*}\) | Mean Lower Endpoint | Mean Upper Endpoint | \(\hat{\mathrm{CI}}_{Wald}\) | \(\hat{\mathrm{CI}}_{\mathcal{U}}\) | \(\hat{\mathrm{CI}}_{\hat{R}}\) | \(\hat{A}\) | vs. | \(\mathrm{Av}_{*}\) |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0 | 15.458 | 24.556 | 0.884 | 1 | 1 | 1.047 | \(<\) | 1.262 |
| 100 | 0.05 | 15.459 | 24.557 | 0.823 | 1 | 0.999 | 1.066 | \(<\) | — |
| 100 | 0.1 | 15.460 | 24.558 | 0.770 | 1 | 0.995 | 1.086 | \(<\) | — |
| 100 | 0.15 | 15.461 | 24.559 | 0.725 | 1 | 0.987 | 1.106 | \(<\) | — |
| 500 | 0 | 17.934 | 22.063 | 0.904 | 1 | 1 | 1.044 | \(<\) | 1.247 |
| 500 | 0.05 | 17.933 | 22.061 | 0.672 | 1 | 0.990 | 1.144 | \(<\) | — |
| 500 | 0.1 | 17.932 | 22.061 | 0.549 | 0.993 | 0.956 | 1.254 | \(>\) | — |
| 500 | 0.15 | 17.931 | 22.060 | 0.475 | 0.978 | 0.914 | 1.374 | \(>\) | — |
| 1500 | 0 | 18.833 | 21.170 | 0.902 | 1 | 1 | 1.046 | \(<\) | 1.258 |
| 1500 | 0.05 | 18.836 | 21.173 | 0.487 | 0.979 | 0.938 | 1.382 | \(>\) | — |
| 1500 | 0.1 | 18.838 | 21.175 | 0.366 | 0.908 | 0.835 | 1.840 | \(>\) | — |
| 1500 | 0.15 | 18.839 | 21.176 | 0.306 | 0.839 | 0.753 | 2.474 | \(>\) | — |

For \(\beta_{1}=10\):

| \(n\) | \(\phi_{*}\) | Mean Lower Endpoint | Mean Upper Endpoint | \(\hat{\mathrm{CI}}_{Wald}\) | \(\hat{\mathrm{CI}}_{\mathcal{U}}\) | \(\hat{\mathrm{CI}}_{\hat{R}}\) | \(\hat{A}\) | vs. | \(\mathrm{Av}_{*}\) |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0 | 6.836 | 13.151 | 0.885 | 1 | 1 | 1.023 | \(<\) | 1.111 |
| 100 | 0.05 | 6.835 | 13.150 | 0.854 | 1 | 0.999 | 1.028 | \(<\) | — |
| 100 | 0.1 | 6.835 | 13.150 | 0.823 | 1 | 0.997 | 1.033 | \(<\) | — |
| 100 | 0.15 | 6.834 | 13.149 | 0.794 | 1 | 0.996 | 1.037 | \(<\) | — |
| 500 | 0 | 8.546 | 11.451 | 0.903 | 1 | 1 | 1.022 | \(<\) | 1.114 |
| 500 | 0.05 | 8.546 | 11.452 | 0.758 | 1 | 0.998 | 1.047 | \(<\) | — |
| 500 | 0.1 | 8.547 | 11.452 | 0.662 | 1 | 0.988 | 1.072 | \(<\) | — |
| 500 | 0.15 | 8.547 | 11.453 | 0.594 | 0.997 | 0.971 | 1.098 | \(<\) | — |
| 1500 | 0 | 9.189 | 10.807 | 0.903 | 1 | 1 | 1.024 | \(<\) | 1.114 |
| 1500 | 0.05 | 9.188 | 10.805 | 0.601 | 0.998 | 0.985 | 1.097 | \(<\) | — |
| 1500 | 0.1 | 9.187 | 10.804 | 0.482 | 0.976 | 0.936 | 1.176 | \(>\) | — |
| 1500 | 0.15 | 9.186 | 10.804 | 0.409 | 0.942 | 0.882 | 1.260 | \(>\) | — |

Although the estimation of causality is beyond the scope of this analysis, monthly data for an industrial production index for the G-20 countries (Index) was also accessed for these years to act as a rudimentary adjusting variable [31]. The index score for each country was summed for each month to construct a single index. Alternative metrics were also considered for this analysis. For instance, measurements on global population growth and the proportion of landmass covered by forests each year were also obtained. However, they were not utilized due to issues of multicollinearity. The approach of this paper is relevant for this question since, although useful and informative as conceptualizations, there is no reason for any complex ecological time series to strictly abide by the neat schematics of a typical moving average or auto-regressive model. Unknown unknowns likely impact the process across time and location. Even if auto-correlations diminish, this does not imply that dependencies do.

_Methods._ The cardinal association is investigated w.r.t. two units of time: monthly and yearly. Only one time lag is utilized in both cases. This provided \(n=527\) observations for the monthly analysis and \(n=43\) observations for the yearly. For the latter exploration, all monthly variables are averaged for each year. For the former, a categorical variable for the season (December-February: Winter; March-May: Spring; June-August: Summer; September-November: Fall) is constructed to adjust for additional time trends. CO\({}^{2}\) and Index are also log-transformed for both analyses. 
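The lag and season construction can be sketched as follows. This is a minimal illustration with hypothetical column names and made-up values standing in for the actual NOAA/GISS series:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly frame; names and values are illustrative only.
df = pd.DataFrame({
    "month": [12, 1, 2, 3, 4, 5, 6],
    "temp":  [0.50, 0.60, 0.40, 0.70, 0.80, 0.60, 0.90],
    "co2":   [410.1, 410.5, 411.0, 411.2, 411.8, 412.0, 412.3],
})

# Season categories as defined in the text.
season_of = {12: "Winter", 1: "Winter", 2: "Winter",
             3: "Spring", 4: "Spring", 5: "Spring",
             6: "Summer", 7: "Summer", 8: "Summer",
             9: "Fall", 10: "Fall", 11: "Fall"}
df["season"] = df["month"].map(season_of)

# Log transform and a single one-month lag, as in the analysis.
df["log_co2_lag1"] = np.log(df["co2"]).shift(1)
df["temp_lag1"] = df["temp"].shift(1)
df = df.dropna()  # the first row is lost to the lag
```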
Two baseline models are estimated: \(\text{Temp}_{t}=\beta_{0}+\beta_{1}\text{Temp}_{t-1}+\beta_{2}\text{log}\{ \text{CO}_{t-1}^{2}\}+\beta_{3}\text{log}\{\text{Index}_{t-1}\}+\sum_{i=1}^{3 }\alpha_{i}\text{Season}_{i}+\epsilon_{t}\) and \(\overline{\text{Temp}}_{t}=\beta_{0}+\beta_{1}\overline{\text{log}\{\text{ CO}^{2}\}}_{t-1}+\beta_{2}\overline{\text{log}\{\text{Index}\}}_{t-1}+\epsilon_{t}\). Non-trend-adjusting variables are dropped from the model if they do not induce at least a 10 percent change in the magnitude of the estimate. All models are fitted via ordinary least squares with a Type I error rate of \(\alpha=.05\). Confidence sets are constructed by way of Theorem 3, which also applies to stochastic regressions. For instance, when \(\mathbf{W}=(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\), then \(\mathbf{B}_{s}=\beta_{s}+\sum_{i=1}^{n}W_{s,i}\epsilon_{i}\) is additive in \(Z_{s,i}=W_{s,i}\epsilon_{i}\). Strict stationarity, regularity, and symmetry of the \(\{Z_{s,i}\}_{i\in I}\) are sufficient for the application of the theorem in this analysis. Pertinently, \(\mathbf{B}_{s}\) is not unbiased for \(\beta_{s}\) for the first model since \(\text{Temp}_{t-1}\) appears on the right-hand side of the equation. However, it can still be consistent. Consistency is obtained if A3 applies to the mean degrees of the graphs associated with \(\{Z_{s,i}\}_{i\in I}\) and \(\{\epsilon_{i}\}_{i\in I}\). All conditions are feasibly checked with time series plots, histograms, and empirical CDF plots for \(\{W_{s,i}\hat{e}_{i}\}_{i\in I}\) and \(\{\hat{e}_{i}\}_{i\in I}\). Here, we use the plug-in estimator \(\hat{R}_{s}=\max_{i\in I}(W_{s,i}\hat{e}_{i})-\min_{i\in I}(W_{s,i}\hat{e}_{i})\). Again, this will underestimate the true range for finite samples. However, it is a consistent estimator under the mild regularity conditions we suppose. Proof of this is offered in the supplementary material. 
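The additive representation \(\mathbf{B}_{s}=\beta_{s}+\sum_{i}W_{s,i}\epsilon_{i}\) and the plug-in range estimator can be illustrated on synthetic data. This is a sketch: the design and error distributions below are arbitrary choices, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic design and bounded, symmetric errors (arbitrary choices).
x = rng.uniform(-5, 5, n)
X = np.column_stack([np.ones(n), x])
eps = rng.uniform(-2, 2, n)
y = 20 + 10 * x + eps

# OLS: B = beta + W @ eps with W = (X'X)^{-1} X', i.e., each B_s is
# additive in the summands Z_{s,i} = W_{s,i} * eps_i.
W = np.linalg.solve(X.T @ X, X.T)
B = W @ y
resid = y - X @ B

# Plug-in range estimator, with residuals standing in for the unobserved errors.
Z_hat = W * resid                              # shape (2, n)
R_hat = Z_hat.max(axis=1) - Z_hat.min(axis=1)  # one range per coefficient
```

As the text notes, the sample range of the `Z_hat` summands slightly underestimates the true range in finite samples, which is why the plug-in variant is examined separately in the simulations.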
Moreover, the magnitude of the underestimation is not likely to undercut the conservatism of the method. In accordance with Theorem 3, then, confidence sets have the form \(\mathbf{B}_{s}\pm\sqrt{n}R_{s}\cdot[6^{-1}\text{log}(2/\alpha)]^{1/2}\). We use the same rule of thumb to gauge the robustness of A5: \(\phi_{s,n}\leq(n-1)^{-1}\cdot[12^{-1}S_{w,i}^{-2}\hat{R}_{s}^{2}-1]\), where \(S_{w,i}^{2}\) is the sample variance of the \(W_{s,i}\hat{e}_{i}\). This approximate bound is compared to auto-correlation estimates. Using the sample range for the calculation of the rule of thumb helps to counter its limitation as a plug-in.

_Results._ Including log[Index\({}_{t-1}\)] resulted in an approximate .06 decrease in the effect estimate for log[CO\({}_{t-1}^{2}\)] after also adjusting for the season. This did not meet the specified threshold. Hence, log[Index\({}_{t-1}\)] was dropped from the model. Season was retained to adjust for time trends. Per every unit increase in the logarithm of the previous month's CO\({}^{2}\) levels, there is an estimated mean increase of 1.53 \({}^{\circ}\)C (\(\geq\) 95% CI: .25, 2.8) in the next month's global temperature after accounting for the season and the previous month's global temperature. This effect was larger than the estimated .6 \({}^{\circ}\)C change in average global temperature (\(\geq\) 95% CI: .24, .95) per unit increase in the previous month's temperature. For the annual model, \(\overline{\text{log}\{\text{Index}\}}_{t-1}\) was found to decrease the effect of interest by approximately .07 percent and was therefore removed. Per each unit change in the logarithm of the average yearly CO\({}^{2}\) level of the previous year, the mean global temperature of the next year increased by an estimated 3.87 \({}^{\circ}\)C (\(\geq\) 95% CI: 3, 4.73).

_Model checking._ **Figure 1** demonstrates the time series, empirical CDF, histogram, and auto-correlation plots for the \(W_{s,i}\hat{e}_{i}\) related to CO\({}^{2}\) for each model. 
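With an estimated range in hand, the Theorem 3 interval and the A5 rule of thumb reduce to a few lines. This is a sketch with stand-in values: only the two formulas come from the text, while the summands and point estimate are invented for illustration:

```python
import numpy as np

alpha = 0.05
n = 527     # monthly sample size from the text
B_s = 1.53  # illustrative point estimate

# Stand-in summands W_{s,i} * e_i; an arbitrary bounded, symmetric choice.
Z = np.random.default_rng(2).uniform(-0.01, 0.01, n)

# Interval of Theorem 3: B_s +/- sqrt(n) * R_s * [log(2/alpha)/6]^{1/2},
# using the plug-in sample range for R_s.
R_hat = Z.max() - Z.min()
half_width = np.sqrt(n) * R_hat * np.sqrt(np.log(2 / alpha) / 6)
ci = (B_s - half_width, B_s + half_width)

# Rule-of-thumb bound on the admissible average correlation for A5:
# phi <= (n-1)^{-1} * [R_hat^2 / (12 * S^2) - 1].
S2 = Z.var(ddof=1)
phi_bound = (R_hat**2 / (12 * S2) - 1) / (n - 1)
```

For summands that are close to uniform on their range, \(R^{2}/(12S^{2})\) is near one, so the admissible average correlation shrinks roughly like \((n-1)^{-1}\); heavier-shouldered summands with larger range-variance ratios permit more dependence.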
Both histograms provide evidence of symmetric and roughly regular distributions since they are unimodal, bell-shaped, and (mostly) vary in a smooth fashion over an interval with endpoints that are roughly symmetric about zero. The empirical CDF plots also display roughly the same amount of area above and below the curve, providing evidence of \(\mathcal{U}\) status. For our rule of thumb, we require \(\phi_{s,n}\) to be bounded by 0.014 and 0.018 respectively for the monthly and annual models. It is reasonable to assert that this bound is fulfilled due to the positive-negative oscillating character of the auto-correlation plots. With the number of lags set to \(\lfloor 10\text{log}_{10}(n)\rfloor\) and \(\lfloor 2^{-1}(n-1)\rfloor\), \(\hat{\phi}_{s,n}=-.007\) and \(-.002\) respectively for the monthly data and \(-.03\) and \(-.02\) for the annual data, of course under the supposition of strict stationarity. The time series plot for the monthly data shows no remarkable departure from this latter supposition, although there do appear to be some departures for the annual data. **Figure 2** shows the auto-correlation and time series plots for the residuals alone. These plots appear consistent with the assumption of stationarity and sub-linear mean dependencies. One caveat is that these diagnostics rely on the residuals, which are a biased and constrained representation of the true error. This situation, however, applies to all regression diagnostic procedures that make use of the residuals and does not rule out their use for the estimation of restricted portions of auto-correlation. Ultimately, these constraints complicate, but do not eliminate, the utility of these objects' employment as a diagnostic mechanism.

## 7 Concluding remarks

This manuscript accomplished three main objectives. First, it established a small set of related identities for the variance of a vector of random sums. 
These identities require reasoning about the mean number of outcome variables correlated within a sample and their average correlation only. Since the true dependency structure of any collection of random variables is safely posited to be unknown and empirically unidentifiable in whole, removing the strict need to specify \(\binom{n}{2}\) parameters is useful. It was shown that a researcher can elect to reason about these two intuitive summary constants instead, or that she can employ them in conjunction with popular covariance modeling methods to capture at least some of the variability that is missed by an invalid specification. Although these constants are unknown, so are the \(\binom{n}{2}\) covariance parameters that statisticians specify on a daily basis. Furthermore, the cogent defense of a conservative choice for these values is a much less demanding task than the alternative in a majority of circumstances. Pertinently, these identities were also used to affirm the consistency of additive estimators--including cluster-robust variance estimators w.r.t. their identified portion of the overall variance--under the very general condition that the average number of correlated variables in a sample is asymptotically sub-linear as a function of \(n\). For cluster variance estimators, this was shown to be the case even when no valid partition of the sample exists. A second accomplished objective was to extend these results to estimating equations and hence to the estimators of statistical approaches such as the generalized linear model. The third and most important contribution of this paper was to prove a sharpened version of Hoeffding's inequality for a class of commonly encountered random variables. Notably, it was proven that this inequality can apply even when every single outcome variable in a sample is statistically dependent, insofar as the magnitude of their average correlation is at least moderately controlled. 
This result is certainly valuable for many fields where the assumption of weak or local dependence is especially untenable, such as in climate science, social network analysis, finance, and really any ecological or sociological domain. That said, more work is due. Like all statistical models, the valid application of this inequality relies on a set of assumptions that can be only imperfectly verified in practice. The diagnostic processes available, however, are equivalent to those used to check the assumptions of common regression models. In this sense, the approach established here at least possesses nomological validity.

## CRediT Statement

**Shane Sparkes**: Conceptualization, Methodology, Validation, Formal analysis, Data Curation, Writing - Original Draft, Writing - Review and Editing, Visualization. **Lu Zhang**: Writing - Review and Editing, Supervision, Project administration.

## Acknowledgments

We would like to thank the referees for their thoughtful commentary, encouragement, and insights. We would also like to acknowledge Dr. Thomas Valente and Dr. Juan Pablo Lewinger for their commentary on a first scalar form of the variance identity.

## References

* (1) A. C. Aitken, IV.--On least squares and linear combination of observations, Proceedings of the Royal Society of Edinburgh 55 (1936) 42-48.
* (2) T. Amemiya, Generalized least squares theory, Advanced Econometrics (1985).
* (3) L. Anselin, Spatial regression, The SAGE Handbook of Spatial Analysis 1 (2009) 255-276.
* (4) G. Bennett, Probability inequalities for the sum of independent random variables, Journal of the American Statistical Association 57 (1962) 33-45.
* (5) K. N. Berk, A central limit theorem for m-dependent random variables with unbounded m, The Annals of Probability (1973) 352-354.
* (6) S. Boucheron, G. Lugosi, O. Bousquet, Concentration inequalities, in: Summer School on Machine Learning, Springer, 2003, pp. 208-240.
* (7) G. E. Box, N. R. Draper, Empirical Model-Building and Response Surfaces, John Wiley & Sons, 1987.
* (8) N. Cressie, Statistics for Spatial Data, John Wiley & Sons, 2015.
* (9) P. Daniel, Concentration Inequalities for Dependent Random Variables, Ph.D. thesis, National University of Singapore, 2014.
* (10) S. Demarta, A. J. McNeil, The t copula and related copulas, International Statistical Review 73 (2005) 111-129.
* (11) P. Embrechts, F. Lindskog, A. McNeil, Modelling dependence with copulas, Rapport technique, Departement de mathematiques, Institut Federal de Technologie de Zurich, Zurich 14 (2001) 1-50.
* (12) C. Feng, H. Wang, Y. Han, Y. Xia, X. M. Tu, The mean value theorem and Taylor's expansion in statistics, The American Statistician 67 (2013) 245-248.
* (13) J. C. Gardiner, Z. Luo, L. A. Roman, Fixed effects, random effects and GEE: what are the differences?, Statistics in Medicine 28 (2009) 221-239.
* (14) F. Götze, H. Sambale, A. Sinulis, Higher order concentration for functions of weakly dependent random variables, Electronic Journal of Probability 24 (2019) 1-19.
* (15) A. R. Hall, Generalized Method of Moments, Oxford University Press, 2005.
* (16) W. Hoeffding, Probability inequalities for sums of bounded random variables, The Collected Works of Wassily Hoeffding (1994) 409-426.
* (17) S. Højsgaard, U. Halekoh, J. Yan, The R package geepack for generalized estimating equations, Journal of Statistical Software 15 (2006) 1-11.
* (18) S. Janson, Large deviations for sums of partly dependent random variables, Random Structures & Algorithms 24 (2004) 234-248.
* (19) S. Janson, T. Luczak, A. Rucinski, Random Graphs, John Wiley & Sons, 2011.
* (20) R. I. Jennrich, Asymptotic properties of non-linear least squares estimators, The Annals of Mathematical Statistics 40 (1969) 633-643.
* (21) B. Kedem, K. Fokianos, Regression Models for Time Series Analysis, John Wiley & Sons, 2005.
* (22) L. Kontorovich, K. Ramanan, Concentration inequalities for dependent random variables via the martingale method, The Annals of Probability 36 (2008) 2126-2158.
* (23) X. Lan, P. Tans, K. Thoning, Trends in globally-averaged CO2 determined from NOAA Global Monitoring Laboratory measurements, Version 2023-08.
* (24) M. Ledoux, The Concentration of Measure Phenomenon, 89, American Mathematical Soc., 2001.
* (25) N. J. Lenssen, G. A. Schmidt, J. E. Hansen, M. J. Menne, A. Persin, R. Ruedy, D. Zyss, Improvements in the GISTEMP uncertainty model, Journal of Geophysical Research: Atmospheres 124 (2019) 6307-6326.
* (26) K.-Y. Liang, S. L. Zeger, Longitudinal data analysis using generalized linear models, Biometrika 73 (1986) 13-22.
* (27) J. G. MacKinnon, M. Ø. Nielsen, M. D. Webb, Cluster-robust inference: a guide to empirical practice, Journal of Econometrics 232 (2023) 272-299.
* (28) B. R. Moulton, Random group effects and the precision of regression estimates, Journal of Econometrics 32 (1986) 385-397.
* (29) B. R. Moulton, An illustration of a pitfall in estimating the effects of aggregate variables on micro units, The Review of Economics and Statistics (1990) 334-338.
* (30) J. A. Nelder, R. W. M. Wedderburn, Generalized linear models, Journal of the Royal Statistical Society: Series A (General) 135 (1972) 370-384.
* (31) OECD, Industrial production (indicator), 2023.
* (32) R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2021.
* (33) R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2022.
* (34) M. Talagrand, A new look at independence, The Annals of Probability (1996) 1-34.
* (35) GISTEMP Team, GISS Surface Temperature Analysis (GISTEMP), version 4, 2023.
* (36) D. Wajc, Negative association: definition, properties, and applications, Manuscript, available from https://goo.gl/2ekqM (2017).
* (37) C. S. Withers, Central limit theorems for dependent variables. I, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 57 (1981) 509-534.
* (38) M. Xie, Y. Yang, Asymptotics for generalized estimating equations with large cluster sizes, The Annals of Statistics 31 (2003) 310-347.
* (39) J. Yan, Enjoy the joy of copulas: with a package copula, Journal of Statistical Software 21 (2007) 1-21.
* (40) K.-H. Yuan, R. I. Jennrich, Asymptotics of estimating equations under natural conditions, Journal of Multivariate Analysis 65 (1998) 245-260.
* (41) A. Ziegler, C. Kastner, M. Blettner, The generalised estimating equations: an annotated bibliography, Biometrical Journal: Journal of Mathematical Methods in Biosciences 40 (1998) 115-139.
* (42) C. J. Zorn, Generalized estimating equation models for correlated data: a review with applications, American Journal of Political Science (2001) 470-490.

# Properties and Deviations of Random Sums of Densely Dependent Random Variables: Supplemental document

Footnote †: journal: Journal of Multivariate Analysis

## 1 Introduction

The manuscript introduces the \(\mathcal{U}\) class of random variables and utilizes a handful of related identities to offer intuition about A5. The purpose of this additional document is to prove some of the claims made in the paper about these variables. They are mostly foundational and elementary, although this does not mean that they are not informative. 
Moreover, this document seeks to further familiarize the reader with this type of variable and to justify model diagnostic methods and plug-in methods under the assumption of copious dependencies. Stated informally, the following ideas will be explored and proven here:

1. \(\mathcal{U}\) variables are characterized by the absence of a type of intrinsic linear distributional bias
2. Equivalent definitions of \(\mathcal{U}\) status exist that connect these variables to a generalized notion of sum-symmetry
3. Every continuous uniform distribution can be decomposed into the sum of an arbitrary unimodal \(\mathcal{U}\) variable with sub-uniform left tails and an additional error variable s.t. the latter is uncorrelated with all functions of the former
4. \(\mathcal{U}\) variables (in a sense) generalize the classic regression condition of exogeneity
5. The preservation of the intrinsic linear bias of a distribution during sampling is sufficient for unbiased moment estimation, even with informative sampling
6. When \(\max(\mathcal{S}_{i})=M\) for \(\forall i\), where once again \(\mathcal{S}_{i}\) is the support of \(Y_{i}\), then \(Y_{(n)}=\max(Y_{1},Y_{2},\ldots,Y_{n})\) is a consistent estimator of \(M\) under mild regularity conditions, even when dependencies are plentiful

For simplicity, we will be avoiding any statements that are required for the existence of referenced objects. We require some additional setup for the fourth and fifth points. Consider a population of random variables \(P=\{Y_{k}\}_{k\in K}\) for \(K=\{1,\ldots,N\}\) and a sample \(\zeta=\{Y_{i}\}_{i\in I}\) for \(I\subset K\) and hence \(\zeta\subset P\). We will say \(\delta=(\delta_{1},\ldots,\delta_{N})\) s.t. \(\delta_{k}=1\) if and only if a \(Y_{k}\in P\) is an element in \(\zeta\) and it is also uncensored. We will also assume that \(\mathrm{E}\delta_{k}>0\) for all \(k\). 
It is well known, then, that an arbitrary \(Y_{i}\in\zeta\), say \(Y_{\delta_{i}}\), actually has a sampling distribution that is not in general equal to its population counterpart, i.e., \(Y_{\delta_{i}}\sim f_{\delta}(y_{i})=\{\mathrm{E}\delta_{i}\}^{-1}\mathrm{E}(\delta_{i}|y_{i})f(y_{i})\), where \(f(y_{i})\) is the density or mass function of \(Y_{i}\). Provided this setup, we only need one new main assumption for this exploration:

C1. Let \(\mathcal{S}\) be the support of the distribution of \(Y\) and \(\mathcal{S}_{\delta}\) be the support of the distribution of \(Y_{\delta}\). Then \(\mathcal{S}=\mathcal{S}_{\delta}\).

Essentially, C1 stipulates that sampling preserves the support. This is ultimately a very mild condition since it places no additional constraint on the distortion caused by informative (or biased) sampling mechanisms. All propositions are numbered separately from the main document, i.e., numbering begins anew here.

## 2 \(\mathcal{U}\) Random Variables

The first proposition characterizes a precursor to the distance between \(\mathrm{E}Y\) and \(\mathrm{Av}(Y)\) by relating it to the aforementioned type of intrinsic distributional bias. It shows that the difference between \(\mathrm{E}Y\) and \(\mathrm{Av}(Y)\) is proportional to the covariance between a random variable and its inverse density or mass function. It is this covariance that we can conceptualize as an inherent bias within a probability distribution. Roughly put, the more a random variable is correlated with its own (inverse) density or mass function, the farther its expected value will be from its average value. This correlation also imparts deviations from uniformity and skews the shape of the distribution.

**Proposition 1**.: _Let \(Y=g(X_{1},\ldots,X_{k})\) be a bounded random variable and construct an indicator variable \(1_{R}\) to designate when \((x_{1},\ldots,x_{k})\in\mathcal{S}\). 
Then \(Av_{\mathbf{x}}[g(X_{1},\ldots,X_{k})]=E[g(X_{1},\ldots,X_{k})]+R_{\mathbf{x}}^{-1}\sigma_{g(X_{1},\ldots,X_{k}),1_{R}f(X_{1},\ldots,X_{k})^{-1}}\). Similarly, \(Av(Y)=EY+R^{-1}\sigma_{Y,1_{R_{y}}f^{-1}(Y)}\)._

Proof.: Suppose the premises.

\[\mathrm{E}\{g(X_{1},\ldots,X_{k})f(X_{1},\ldots,X_{k})^{-1}1_{R}\}-\mathrm{E}\{g(X_{1},\ldots,X_{k})\}\mathrm{E}\{1_{R}f^{-1}(X_{1},\ldots,X_{k})\}=\]
\[\int_{\mathcal{S}}g(x_{1},\ldots,x_{k})d(x_{1},\ldots,x_{k})-\mathrm{E}\{g(X_{1},\ldots,X_{k})\}R_{\mathbf{x}}\implies\]
\[R_{\mathbf{x}}^{-1}\sigma_{g(X_{1},\ldots,X_{k}),1_{R}f(X_{1},\ldots,X_{k})^{-1}}=\mathrm{Av}_{\mathbf{x}}\{g(X_{1},\ldots,X_{k})\}-\mathrm{E}\{g(X_{1},\ldots,X_{k})\}\]

Rearrangement supplies the result. The proof is exactly analogous for the second identity.

As an important side note, functional averages with respect to (w.r.t.) different spaces are not equal in general. The following corollary states when they are.

**Corollary 1**.: \(Av_{\mathbf{x}}[g(X_{1},\ldots,X_{k})]=Av(Y)\) _if and only if \(R^{-1}\sigma_{Y,1_{R_{y}}f^{-1}(Y)}=R_{\mathbf{x}}^{-1}\sigma_{g(X_{1},\ldots,X_{k}),1_{R}f(X_{1},\ldots,X_{k})^{-1}}\)._

Proof.: This proof follows easily from Proposition 1. Only one direction will be supplied.

\[\mathrm{Av}_{\mathbf{x}}\{g(X_{1},\ldots,X_{k})\}=\mathrm{Av}(Y)\implies\]
\[\mathrm{E}\{g(X_{1},\ldots,X_{k})\}+R_{\mathbf{x}}^{-1}\sigma_{g(X_{1},\ldots,X_{k}),1_{R}f(X_{1},\ldots,X_{k})^{-1}}=\mathrm{E}Y+R^{-1}\sigma_{Y,1_{R_{y}}f^{-1}(Y)}\implies\]
\[R_{\mathbf{x}}^{-1}\sigma_{g(X_{1},\ldots,X_{k}),1_{R}f(X_{1},\ldots,X_{k})^{-1}}=R^{-1}\sigma_{Y,1_{R_{y}}f^{-1}(Y)}\]

A special case of when Corollary 1 holds is when \(Y=\sum_{i}^{k}X_{i}\) and it is also the case that \(Y\) and each \(X_{i}\) are \(\mathcal{U}\) random variables, and the joint support of \(\sum_{i}^{k}X_{i}\) is equal to the Cartesian product of the marginal supports. We prove the most important part of this special case. 
This will then be used to show that if \(Y\in\mathcal{U}\), \(cY\in\mathcal{U}\) and \(Y+c\in\mathcal{U}\) for all \(c\in\mathbb{R}\). **Proposition 2**.: _Let \(Y=\sum_{i}^{k}X_{i}\). Say \(\mathcal{S}\) is the support of \(\sum_{i}^{k}X_{i}\) and \(\mathcal{S}_{i}\) is the support of \(X_{i}\) for \(\forall i\). If \(\mathcal{S}=\mathcal{S}_{1}\times\mathcal{S}_{2}\times\cdots\times\mathcal{S} _{k}\) and an arbitrary \(X_{i}\in\mathcal{U}\), then \(Av_{\mathbf{x}}(Y)=EY\)._ Proof.: We prove the continuous case without loss of generality (WLOG). \[\mathrm{Av}_{\mathbf{x}}(Y)=\{\prod_{i}^{k}R_{i}\}^{-1}\int_{\mathcal{S}}\{ \sum_{i}^{k}X_{i}\}dx_{1}\cdots dx_{k}=\sum_{i}^{k}\mathrm{Av}(X_{i})=\sum_{i }^{k}\mathrm{E}X_{i}=\mathrm{E}Y\] **Proposition 3**.: _Suppose \(Y\in\mathcal{U}\) and let \(c\in\mathbb{R}\). Then \(cY\in\mathcal{U}\) and \(Y+c\in\mathcal{U}\)._ Proof.: Let \(c\in\mathbb{R}\) be arbitrary. Since expectations are linear, \(\mathrm{E}(cY)=c\mathrm{E}Y=c\mathrm{Av}(Y)\). Since integrals and sums are linear operators, \(\mathrm{Av}(cY)=R^{-1}\int_{\mathcal{S}}cYdy=c\cdot R^{-1}\int_{\mathcal{S}} Ydy=c\mathrm{Av}(Y)\), which completes this part of the proof WLOG. Now, say \(Z=Y+c\). Since \(c\) is a constant, \(\mathrm{Av}_{\mathbf{x}}(Z)=\mathrm{Av}(Y)+c=\mathrm{E}Z\) by Proposition 2. Note that \(\mathcal{S}_{Z}\), the support of \(Z\), is just a shift of \(\mathcal{S}\), the support of \(Y\). Hence, for the discrete case, \(|\mathcal{S}_{Z}|=|\mathcal{S}|=R\). Then, for \(c_{i}\in\mathcal{S}\), \(\mathrm{Av}(Z)=R^{-1}\sum_{i}^{R}\{c_{i}+c\}=\mathrm{Av}(Y)+c=\mathrm{Av}_{ \mathbf{x}}(Z)=\mathrm{E}Z\). The continuous case is analogous. We can let \(\mathcal{S}\) be a union of \(p\) intervals of real numbers and say \(R_{i}=M_{i}-m_{i}\) is the length of the \(i\)th interval. Then \(\mathrm{Av}(Y)=\{\sum_{i}^{p}R_{i}\}^{-1}\{\sum_{i}^{p}2^{-1}R_{i}(M_{i}+m_{i})\}\). 
Since \(Z\) is simply a location shift:

\[\mathrm{Av}(Z)=\{\sum_{i}^{p}R_{i}\}^{-1}\int_{\mathcal{S}_{Z}}zdz=\{\sum_{i}^{p}R_{i}\}^{-1}\{\sum_{i}^{p}2^{-1}R_{i}(M_{i}+m_{i}+2c)\}=\{\sum_{i}^{p}R_{i}\}^{-1}\{\sum_{i}^{p}2^{-1}R_{i}(M_{i}+m_{i})\}+c=\mathrm{Av}(Y)+c\]

The next sequence of results establishes equivalent definitions of \(\mathcal{U}\) status under the additional assumption of regularity, i.e., that the support of the random variable is a single interval of real numbers or a set of integers \(\{m,m+1,m+2,\ldots,M-1,M\}\). We will condition on \(1_{R}\) to relieve some notational burden.

**Lemma 1**.: _Let \(Y\) be a regular random variable defined on support \(\mathcal{S}=[m,M]\) with cumulative distribution function \(F(y)\) and survival function \(S(y)\). Say \(k\neq 0\) is an arbitrary real number. Then:_

1. \(\int_{m}^{M}F(y)^{k}dy=\sigma_{F(Y)^{k},f(Y)^{-1}}+R\cdot E\big(F(Y)^{k}\big)\)
2. \(\int_{m}^{M}S(y)^{k}dy=\sigma_{S(Y)^{k},f(Y)^{-1}}+R\cdot E\big(S(Y)^{k}\big)\)
3. \(\sigma_{F(Y),f(Y)^{-1}}=R^{-1}\sigma_{Y,f(Y)^{-1}}\)

Proof.: Statements 1 and 2 are a direct application of Proposition 1. For statement 3, apply the special case s.t. \(k=1\) to statement 1:

\[\int_{m}^{M}F(y)dy=\sigma_{F(Y),f(Y)^{-1}}+2^{-1}R\implies\]
\[M-\mathrm{E}Y=\sigma_{F(Y),f(Y)^{-1}}+2^{-1}R\implies\]
\[M-2^{-1}M+2^{-1}m-\mathrm{E}Y=\sigma_{F(Y),f(Y)^{-1}}\implies\]
\[\mathrm{Av}(Y)-\mathrm{E}Y=\sigma_{F(Y),f(Y)^{-1}}\]

The last statement immediately follows by again applying Proposition 1.

**Proposition 4**.: _Suppose \(Y\) is a regular, continuous random variable with support \(\mathcal{S}=[m,M]\), CDF \(F(y)\), survival function \(S(y)=1-F(y)\), and define \(\epsilon=Y-\mathrm{E}Y\). Then the following statements are equivalent:_

1. \(Y\in\mathcal{U}\)
2. \(\sigma_{Y,f(Y)^{-1}}=0\)
3. \(M-\mathrm{E}Y=\mathrm{E}Y-m\)
4. \(\int_{m}^{M}\{F(y)-S(y)\}dy=0\)
5. \(\sigma_{F(Y),f(Y)^{-1}}=0\)
6. \(F(Y)\in\mathcal{U}\)
7. \(\int_{m-\mathrm{E}Y}^{M-\mathrm{E}Y}\epsilon^{k}d\epsilon=0\) _for all odd integers_ \(k\)

Proof.: Equivalences between statements 1, 2, 3, 5, and 6 follow easily from Proposition 1, the definition of the \(\mathcal{U}\) class, Lemma 1, and regularity, since \(\mathrm{Av}(Y)=2^{-1}(M+m)\). To see how the equivalences with statements 3 and 4 follow:

\[\mathrm{E}Y=2^{-1}(M+m)\implies 2\mathrm{E}Y=M+m\implies\mathrm{E}Y-m=M-\mathrm{E}Y\]

However, the last line is true if and only if \(\int_{m}^{M}F(y)dy=\int_{m}^{M}S(y)dy\). This can easily be shown by applying integration by parts to each side of the equality. For the last statement, note that statements 1 through 6 imply that \(\epsilon^{k}\) is an odd function integrated over an interval that is symmetric about zero. For the opposite implication, assuming statement 7 implies that \((M-\mathrm{E}Y)^{s}=(\mathrm{E}Y-m)^{s}\) for some even integer \(s\). Since \(M\geq\mathrm{E}Y\) and \(\mathrm{E}Y\geq m\), it is then true that \(M-\mathrm{E}Y=\mathrm{E}Y-m\). This completes the proof.

The next two propositions are for integer valued random variables. They can be generalized, but this is not accomplished here.

**Proposition 5**.: _Suppose \(Y\) is an integer valued random variable with \(\mathcal{S}=\{1,2,3,\ldots,M\}\), CDF \(F(y)\), and survival function \(S(y)=1-F(y)\). Then the following statements are equivalent:_

1. \(Y\in\mathcal{U}\)
2. \(\sigma_{Y,f(Y)^{-1}}=0\)
3. \(\sum_{i}^{M}F(i)=EY\)
4. \(\sum_{i}^{M}\{F(i)-S(i)\}=1\)
5. \(Av(F(Y))=2^{-1}(M^{-1}+1)\)

Proof.: The first two equivalences follow from Proposition 1 and definition. By Gauss' summation identity, \(\sum_{i}^{M}i=2^{-1}M(M+1)\). Hence, \(\mathrm{Av}(Y)=2^{-1}(M+1)\). This implies that \(M+1=2\mathrm{E}Y\), which implies statement 3 via the fact that \(\sum_{i}^{M}F(i)=M+1-\sum_{i}^{M}if(i)\). Assuming statement 3 then implies statements one and two, also by this last identity. 
For statement 4, suppose \(Y\in\mathcal{U}\). Then: \[\sum_{i}^{M}\{F(i)-S(i)\} =2\sum_{i}^{M}F(i)-M\] \[=2\mathrm{E}Y-M\] \[=1\] Now, suppose \(\sum_{i}^{M}\{F(i)-S(i)\}=1\). Then: \[1 =M+2-2\mathrm{E}Y\implies\] \[\mathrm{E}Y =2^{-1}(M+1)\] For the last statement: \[M^{-1}\sum_{j=1}^{M}F(j) =2^{-1}(M^{-1}+1)\implies\] \[M^{-1}\sum_{j=1}^{M}2F(j)-1 =M^{-1}\implies\] \[\sum_{j=1}^{M}2F(j)-M =1\implies\] \[\sum_{j=1}^{M}\{2F(j)-1\} =1\implies\] \[\sum_{j=1}^{M}\{F(j)-S(j)\} =1\implies\] \[Y\in\mathcal{U}\] The other equivalences follow by reversing these steps. The next three propositions bound the distance between the expected and average values. **Proposition 6**.: _Let \(Y=g(X_{1},X_{2},\ldots,X_{k})\) be a measurable function of bounded random variables. Then:_ \[|E(Y)-Av_{\mathbf{x}}(Y)|\leq\sqrt{Av_{\mathbf{x}}(Y^{2})}\sqrt{R_{\mathbf{x}}\cdot E[f(X_{1},X_{2},\ldots,X_{k})]-1}\] \[|E(Y)-Av(Y)|\leq\sqrt{Av(Y^{2})}\sqrt{R\cdot E[f(Y)]-1}\] Proof.: We prove the case for continuous random variables w.r.t. \(\mathbf{x}\) WLOG. 
\[\int_{\mathcal{S}}g(x_{1},x_{2},\ldots,x_{k})f(x_{1},x_{2},\ldots,x_{k})d\mathbf{x}-R_{\mathbf{x}}^{-1}\int_{\mathcal{S}}g(x_{1},x_{2},\ldots,x_{k})d\mathbf{x}\] \[=\int_{\mathcal{S}}g(x_{1},x_{2},\ldots,x_{k})(f(x_{1},x_{2},\ldots,x_{k})-R_{\mathbf{x}}^{-1})d\mathbf{x}\implies\] \[|\mathrm{E}\left(Y\right)-\mathrm{Av}_{\mathbf{x}}(Y)|\leq\sqrt{\int_{\mathcal{S}}g(x_{1},x_{2},\ldots,x_{k})^{2}d\mathbf{x}\int_{\mathcal{S}}[f(x_{1},x_{2},\ldots,x_{k})-R_{\mathbf{x}}^{-1}]^{2}d\mathbf{x}}\] Now, for the right-hand side of the inequality: \[\sqrt{\int_{\mathcal{S}}g(x_{1},x_{2},\ldots,x_{k})^{2}d\mathbf{x}\int_{\mathcal{S}}[f(x_{1},x_{2},\ldots,x_{k})-R_{\mathbf{x}}^{-1}]^{2}d\mathbf{x}}\] \[=\sqrt{R_{\mathbf{x}}\mathrm{Av}_{\mathbf{x}}(Y^{2})}\sqrt{\int_{\mathcal{S}}f(x_{1},x_{2},\ldots,x_{k})^{2}d\mathbf{x}-2R_{\mathbf{x}}^{-1}+R_{\mathbf{x}}^{-1}}\] \[=\sqrt{\mathrm{Av}_{\mathbf{x}}(Y^{2})}\sqrt{R_{\mathbf{x}}\cdot\mathrm{E}[f(X_{1},X_{2},\ldots,X_{k})]-1}\] **Proposition 7**.: _Let \(Y\sim f(y)\) be continuous and regular with density \(f(y)\). Suppose \(\min_{y\in\mathcal{S}}\{f(y)\}=m_{f}>0\) and say \(\max_{y\in\mathcal{S}}\{f(y)\}=M_{f}\). Then:_ 1. \(|EY-Av(Y)|\leq(2\sqrt{3})^{-1}\sqrt{R}\sqrt{Av\{f^{-1}(Y)\}-R}\)__ 2. \(|EY-Av(Y)|\leq(4\sqrt{3})^{-1}\{m_{f}^{-1}-M_{f}^{-1}\}\)__ Proof.: By Lemma 1: \[|\mathrm{E}Y-\mathrm{Av}(Y)| =|\sigma_{F(Y),f^{-1}(Y)}|\] \[\leq\sigma_{F(Y)}\sigma_{f^{-1}(Y)}\] \[=(2\sqrt{3})^{-1}\ \sigma_{f^{-1}(Y)}\] Statement 1 follows from the fact that \(\mathrm{Var}[f^{-1}(Y)]=\int_{m}^{M}f^{-1}(y)dy-R^{2}\). Statement 2 follows from the fact that \(\mathrm{Var}[f^{-1}(Y)]\leq 4^{-1}(m_{f}^{-1}-M_{f}^{-1})^{2}\) since it is a bounded random variable. **Proposition 8**.: _Let \(Y\sim f(y)\) be a regular random variable with CDF \(F(y)\). Then \(f(Y)\in\mathcal{U}\) implies that \(Y\in\mathcal{U}\) if and only if \(F(Y)\in\mathcal{U}\). 
If \(Y\) is discrete, this is also provided that \(\mathcal{S}=\{1,2,3,\ldots,M\}\)._ Proof.: First we prove the discrete case. Note that, in general, \(\mathrm{E}[F(Y)]=2^{-1}[1+\mathrm{E}[f(Y)]]\). This can be shown by expanding the sum \(\sum_{i\in\mathcal{S}}F(y)f(y)\), which is equal to \(\sum_{i\in\mathcal{S}}f(y_{i})^{2}+\sum_{i<j:i,j\in\mathcal{S}}f(y_{i})f(y_{j})\). Since it is then the case that \(\{\sum_{i\in\mathcal{S}}f(y_{i})\}^{2}=\sum_{i\in\mathcal{S}}f(y_{i})^{2}+2 \cdot\sum_{i<j:i,j\in\mathcal{S}}f(y_{i})f(y_{j})=1\), \(\mathrm{E}[F(Y)]=1-\sum_{i<j:i,j\in\mathcal{S}}f(y_{i})f(y_{j})\), and \(\mathrm{E}[f(Y)]=\sum_{i\in\mathcal{S}}f(y_{i})^{2}\), the aforementioned identity follows. Now, suppose \(f(Y)\in\mathcal{U}\). Then \(\mathrm{E}[f(Y)]=M^{-1}\sum_{i=1}^{M}f(i)=M^{-1}\). By Proposition 6, then, \(|\mathrm{Av}(Y)-\mathrm{E}Y|\leq 0\), which implies that \(Y\in\mathcal{U}\). Since \(\mathrm{E}[f(Y)]=M^{-1}\), \(F(Y)\in\mathcal{U}\) by Proposition 5. The continuous case follows from Proposition 6 and Proposition 4 since \(\mathrm{E}[f(Y)]=R^{-1}\). **Proposition 9**.: _Let \(Y\sim f(y)\) be a regular and continuous random variable. Then \(f^{-1}(Y)\in\mathcal{U}\) implies that \(Y\in\mathcal{U}\). Furthermore, if \(Y_{k}=g(X_{1},\ldots,X_{k})\) for some function \(g\) s.t. \(\underset{y\in\mathcal{S}}{\text{min}}\{f(y)\}=m_{f}\to\infty\) as \(k\to\infty\) and \(\lim_{k\to\infty}Y_{k}=Z\) is bounded, then \(Y_{k}\) converges in distribution to a \(\mathcal{U}\) random variable._ Proof.: Both statements are consequences of Proposition 7. Since \(\mathrm{E}[f^{-1}(Y)]=R\), the first statement follows from Proposition 7 since \(|\mathrm{Av}(Y)-\mathrm{E}Y|\leq 0\). For the second statement, note that \(|\mathrm{Av}(Y_{k})-\mathrm{E}Y_{k}|\leq(4\sqrt{3})^{-1}m_{f}^{-1}\), again by Proposition 7 and also the fact that \(M_{f}>0\). Say \(Z=\lim_{k\to\infty}Y_{k}\). 
Then: \[\lim_{k\to\infty}|\mathrm{Av}(Y_{k})-\mathrm{E}Y_{k}|=|\mathrm{Av}(\lim_{k\to \infty}Y_{k})-\mathrm{E}(\lim_{k\to\infty}Y_{k})|\leq 0\] Hence, \(Y_{k}\) converges in distribution to \(Z\in\mathcal{U}\). An informal corollary is worth noting. Say \(Y_{k}\) has a density in a scale family of distributions, i.e., that \(Y_{k}\sim\sigma^{-1}f(\sigma^{-1}y_{k})\), where \(\sigma=\sqrt{\mathrm{Var}(Y_{k})}\) and \(f(\cdot)\leq Q\in\mathbb{R}^{+}\). Then, if \(\mathrm{Var}(Y_{k})\to 0\) as \(k\to\infty\), \(\min_{y_{k}\in\mathcal{S}}[\sigma^{-1}f(\sigma^{-1}y_{k})]\to\infty\) and Proposition 9 applies. This is useful because, insofar as it is reasonable to assert that, say, \(\tilde{Y}\) is in the scale family of distributions as stated, convergence in mean square is sufficient for convergence to \(\mathcal{U}\) status. The next proposition provides insight into how regular \(\mathcal{U}\) variables can be constructed. To this end, we use the fact that an arbitrary density can be expressed as \(f(y)=\{\int_{S}g(y)dy\}^{-1}g(y)\) w.r.t. an appropriate function \(g\) and, just the same, an arbitrary mass function can be expressed as \(f(y)=\{\sum_{i\in\mathcal{S}}g(y_{i})\}^{-1}g(y)\) for an appropriate function. **Corollary 2**.: _Suppose \(Y\sim f(y)\) is regular with CDF \(F(y)\) and survival function \(S(y)\). Suppose \(f(y)=\{\int_{S}g(y)dy\}^{-1}g(y)\) and \(G(y)=\int_{S}^{y}g(t)dt\) for some function \(g\). Then \(Av\{G(y)\}=2^{-1}\int_{m}^{M}g(y)dy\) if and only if \(Y\in\mathcal{U}\). For the discrete case on \(\mathcal{S}=\{1,\ldots,M\}\), \(Av\{G(y)\}=2^{-1}(M^{-1}+1)\sum_{i=1}^{M}g(i)\) if and only if \(Y\in\mathcal{U}\), where \(f(y)=\{\sum_{i\in\mathcal{S}}g(y_{i})\}^{-1}g(y)\) and \(G(y)=\sum_{i=1}^{y}g(i)\)._ Proof.: The proof is largely omitted. It follows from the re-expression of the mass function or density and Propositions 5 and 4 respectively. 
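The discrete results above are easy to sanity-check numerically. The following small Python script is illustrative only (not part of the manuscript); the symmetric weight vector \(g\) on \(\{1,\ldots,7\}\) is a hypothetical example chosen because symmetry places the mean at the midpoint \(2^{-1}(M+1)\), so \(Y\in\mathcal{U}\) and Proposition 5's statements should all hold, along with the cumulative-weight identity from the discrete construction.

```python
# Illustrative check (not from the manuscript) of Proposition 5 and the
# discrete cumulative-weight construction, using hypothetical symmetric
# weights g on S = {1, ..., 7}.
M = 7
g = [1, 2, 3, 4, 3, 2, 1]                     # symmetric weights => U-class
f = [w / sum(g) for w in g]                   # mass function f(y) on {1,...,M}
F = [sum(f[:i + 1]) for i in range(M)]        # CDF values F(1), ..., F(M)
S = [1 - Fi for Fi in F]                      # survival values S(i)
G = [sum(g[:i + 1]) for i in range(M)]        # cumulative weights G(y)

EY = sum((i + 1) * f[i] for i in range(M))    # expected value of Y

assert abs(EY - (M + 1) / 2) < 1e-12                 # mean equals midpoint
assert abs(sum(F) - EY) < 1e-12                      # Proposition 5, statement 3
assert abs(sum(F) - sum(S) - 1) < 1e-12              # Proposition 5, statement 4
assert abs(sum(F) / M - 0.5 * (1 / M + 1)) < 1e-12   # Proposition 5, statement 5
# average of cumulative weights equals 2^{-1}(M^{-1}+1) * sum(g):
assert abs(sum(G) / M - 0.5 * (1 / M + 1) * sum(g)) < 1e-12
```

Replacing `g` with an asymmetric weight vector makes the assertions fail, which is the expected behavior since the equivalences are two-sided.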
We provide two simple examples of functions that generate continuous \(\mathcal{U}\) distributions before moving forward. _Example 1_.: Set \(g(y)=1\). Then \(G(y)=(y-m)\) and \(\mathrm{Av}[G(y)]=R^{-1}\{R2^{-1}(M+m)-Rm\}=2^{-1}R=2^{-1}\int_{m}^{M}1dy\). This generates a \(Unif(m,M)\) random variable. _Example 2_.: Set \(g(y)=sin(y)\) on \([0,\pi]\). Then \(G(y)=1-cos(y)\) and \(\mathrm{Av}[G(y)]=\pi^{-1}\pi=1=2^{-1}2=2^{-1}\int_{0}^{\pi}sin(y)dy\). Then \(f(y)=2^{-1}sin(y)\) generates a \(\mathcal{U}\) distribution. ### Decomposition of Continuous Uniform Random Variables This next proposition is theoretically interesting and useful. It essentially states that all continuous uniform distributions can be decomposed into a type of \(\mathcal{U}\) variable and error such that the random error is uncorrelated with all functions of that \(\mathcal{U}\) variable. **Theorem 1** (Uniform Decomposition Theorem).: _Consider \(U\sim Unif(m,M)\) on \(\mathcal{S}=[m,M]\). Suppose \(Y\in\mathcal{U}\) is an arbitrary regular random variable that has a unimodal distribution on \(\mathcal{S}\). Furthermore, suppose \(\inf_{y\in\mathcal{S}}[f(y)]\leq Av\{f(Y)\}\) s.t. \(\inf_{y\in\mathcal{S}}[f(y)]\) occurs in the left tail. Then \(U=Y+\epsilon\) for some random variable \(\epsilon\) s.t. \(E(\epsilon|Y)=0\)._ Proof.: Observe that \(\mathrm{Av}[f(Y)]=R^{-1}\), which is the density of \(U\). Since \(Y\) has a unimodal distribution on the same support as \(U\), their cumulative distribution functions possess the single-crossing property, i.e., that there is some point \(c\in\mathcal{S}\) s.t. \(F(y)\leq G(y)\) for \(y\in[m,c]\) and \(G(y)\leq F(y)\) for \(y\in[c,M]\), where \(G(y)\) is the uniform CDF. Since \(\inf_{y\in\mathcal{S}}[f(y)]\leq R^{-1}\) in the left tail, this implies that \(\int_{m}^{c}F(y)dy\leq\int_{m}^{c}G(y)dy\) for this \(c\) in the support. Now, since \(Y\in\mathcal{U}\), \(\mathrm{E}Y=\mathrm{E}U\). 
Now, for contradiction, suppose there exists a \(t\in\mathcal{S}\) s.t. \(\int_{m}^{t}F(y)dy>\int_{m}^{t}G(y)dy\). We know from the above that \(t>c\). We now follow the implied logic: \[\int_{m}^{t}F(y)dy >\int_{m}^{t}G(y)dy\implies\] \[M-\mathrm{E}Y-\int_{t}^{M}F(y)dy >M-\mathrm{E}U-\int_{t}^{M}G(y)dy\implies\] \[\int_{t}^{M}F(y)dy <\int_{t}^{M}G(y)dy\] Recall, since \(F(y)\) and \(G(y)\) have the single crossing property, \(F(y)\geq G(y)\) for \(y\in[c,M]\). Hence, since \(t>c\), \(\int_{t}^{M}F(y)dy\geq\int_{t}^{M}G(y)dy\). Therefore, a contradiction is reached and \(\int_{m}^{t}F(y)dy\leq\int_{m}^{t}G(y)dy\) for all \(t\in\mathcal{S}\). By the Rothschild-Stiglitz theorem, then, this implies the existence of a random variable \(\epsilon\) s.t. \(U=Y+\epsilon\) and \(\mathrm{E}(\epsilon|Y)=0\). **Corollary 3**.: _Let \(Y\) be a unimodal \(\mathcal{U}\) random variable defined on \(\mathcal{S}=[m,M]\) and suppose \(\inf_{y\in\mathcal{S}}[f(y)]\leq Av[f(Y)]\) s.t. \(\inf_{y\in\mathcal{S}}[f(y)]\) occurs in the left tail. Then \(\text{Var}(Y)\leq 12^{-1}R^{2}\)._ Proof.: Suppose the premises. Then by Theorem 1, \(U=Y+\epsilon\), where \(U\sim Unif(m,M)\) and \(\epsilon\) is some random variable s.t. \(\mathrm{E}(\epsilon|Y)=0\). Therefore, \(\text{Var}(U)=\text{Var}(Y)+\text{Var}(\epsilon)\). This means that \(12^{-1}R^{2}\geq\text{Var}(Y)\). ### \(\mathcal{U}\) variables and regression Until this point, we have omitted sampling-related notation to keep the presentation focused and readable. Now, there is an explicit need for it. When we use \(\mathrm{E}_{\delta}(\cdot)\), this signifies that the expectation is taken w.r.t. the sampling distribution. We now provide a short word on the applicability of \(\mathcal{U}\) concepts to stochastic linear regressions, mostly for inference about associations. 
Since observational studies rarely fix the analysis on a given design matrix, stochastic regression concepts are often more appropriate. This implies the following data generating mechanism, which is stated more generally w.r.t. \(\mathbf{X}_{\delta}\in\mathbb{R}^{n\times p}\): \(\mathbf{Y}_{\delta}=\mathbf{X}_{\delta}\mathbf{\beta}+\mathbf{\epsilon}_{\delta}\). Again, strict exogeneity, i.e., that \(\mathrm{E}_{\delta}(\mathbf{\epsilon}|\mathbf{X})=0\), is a sufficient condition for asymptotically unbiased estimation of \(\mathbf{\beta}\) provided other weak dependence and regularity conditions hold. \(\mathcal{U}\) concepts can offer an alternative condition for consistent estimation that is milder in some respects. To start, we will say two functions \(h(x)\) and \(g(x)\) are orthogonal if \(\int_{\mathcal{S}_{\delta}}h(x)g(x)dx=0\) when \(\mathcal{S}_{\delta}\) is a union of intervals of real numbers or \(\sum_{\mathcal{S}_{\delta}}g(x_{i})h(x_{i})=0\) when \(\mathcal{S}_{\delta}\) is discrete. The following lemma provides a general basis for constructing instrumental variables. It states that if a variable \(g(Z)\) is orthogonal to the expected error function, conditional on \(Z\), then \(g(Z)\) weighted by its inverse (mass) density is a valid instrumental variable. **Lemma 2**.: _Let \(\mathbf{\epsilon}\) be an arbitrary random variable s.t. \(E_{\delta}\epsilon=0\) and say \(Z\sim f_{\delta}(z)\) is an arbitrary random variable. Then \(g(z)\) and \(E_{\delta}(\epsilon|z)\) are orthogonal if and only if \(E_{\delta}[f_{\delta}(Z)^{-1}g(Z)\epsilon]=0\)._ Proof.: Suppose the premises. We will prove the statement for continuous \(Z\) WLOG. 
\[0 =\mathrm{E}_{\delta}[f_{\delta}(Z)^{-1}g(Z)\epsilon]\] \[=\mathrm{E}_{\delta}[\mathrm{E}_{\delta}\{f_{\delta}(Z)^{-1}g(Z)\epsilon|Z\}]\] \[=\mathrm{E}_{\delta}[f_{\delta}(Z)^{-1}g(Z)\mathrm{E}_{\delta}(\epsilon|Z)]\] \[=\int_{\mathcal{S}_{Z}}f_{\delta}(z)^{-1}g(z)\mathrm{E}_{\delta}(\epsilon|z)f_{\delta}(z)dz\] \[=\int_{\mathcal{S}_{Z}}g(z)\mathrm{E}_{\delta}(\epsilon|z)dz\] Proposition 10 generalizes the notion of exogeneity. Further discussion is provided following the proof. **Proposition 10**.: _Let \(\epsilon\) be an arbitrary random variable s.t. \(E_{\delta}\epsilon=0\) and say \(Z\sim f_{\delta}(z)\) is an arbitrary bounded random variable. Then \(E_{\delta}(\epsilon|Z)\in\mathcal{U}\) if and only if \(E_{\delta}\{f_{\delta}(Z)^{-1}\epsilon\}=0\)._ Proof.: Suppose the premises. We will prove the statement for continuous \(Z\) WLOG. Set \(g(Z)=1\). \((\leftarrow)\) In Lemma 2, we saw that \(0=\mathrm{E}_{\delta}\{f_{\delta}(Z)^{-1}\epsilon\}=\int_{S_{Z}}\mathrm{E}_{ \delta}(\epsilon|z)dz\). This, however, implies that \(\mathrm{Av}\{\mathrm{E}_{\delta}(\epsilon|Z)\}=0\). Thus, since \(0=\mathrm{E}_{\delta}\{\mathrm{E}_{\delta}(\epsilon|Z)\}=\mathrm{Av}\{\mathrm{ E}_{\delta}(\epsilon|Z)\}\), it is implied that \(\mathrm{E}_{\delta}(\epsilon|Z)\in\mathcal{U}\). \((\rightarrow)\) Suppose \(\mathrm{E}_{\delta}(\epsilon|Z)\in\mathcal{U}\). Then it is true that \(0=R_{Z}^{-1}\int_{S_{Z}}\mathrm{E}_{\delta}(\epsilon|z)dz\) implies \(\int_{S_{Z}}\mathrm{E}_{\delta}(\epsilon|z)dz=\mathrm{E}_{\delta}\{f_{\delta }(Z)^{-1}\epsilon\}=0\). The typical exogeneity condition states that \(\mathrm{E}_{\delta}(\epsilon|Z)=0\). This stipulation is recognizable as a special case of Proposition 10 since if \(\mathrm{E}_{\delta}(\epsilon|Z)=0\), it is trivially true that \(\mathrm{E}_{\delta}(\epsilon|Z)\in\mathcal{U}\). Hence, Lemma 2 and Proposition 10 offer another route to identifying and consistently estimating regression parameters, although this is not explored further here. 
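To make Proposition 10 concrete, here is a small Monte Carlo sketch in Python with NumPy. It is illustrative only: the density \(f(z)=(1+z)/1.5\) on \([0,1]\) and both error functions are hypothetical choices, not from the manuscript. The first error has a conditional mean \(h(z)=z^{2}-z+1/6\) that integrates to zero over the support (so \(\mathrm{Av}=\mathrm{E}=0\) and it is \(\mathcal{U}\)-class), and the inverse-density-weighted error averages to zero; the second is mean-zero but not \(\mathcal{U}\)-class, and the weighted average lands near \(-1/18\) instead.

```python
import numpy as np

# Monte Carlo illustration (hypothetical example) of Proposition 10:
# with E(eps) = 0, E(eps | Z) is U-class exactly when E[f(Z)^{-1} eps] = 0.
rng = np.random.default_rng(0)
n = 500_000

# Z has density f(z) = (1 + z) / 1.5 on [0, 1]; sampled by inverting the CDF
# F(z) = (z + z^2 / 2) / 1.5, i.e. z = sqrt(1 + 3u) - 1.
u = rng.uniform(size=n)
z = np.sqrt(1.0 + 3.0 * u) - 1.0
f = (1.0 + z) / 1.5
eta = rng.normal(scale=0.1, size=n)          # noise with E(eta | Z) = 0

# Case 1: h(z) = z^2 - z + 1/6 has integral zero over [0, 1] and E[h(Z)] = 0,
# so E(eps | Z) = h(Z) is U-class and the weighted mean should be near 0.
eps_u = z**2 - z + 1.0 / 6.0 + eta
est_u = np.mean(eps_u / f)

# Case 2: centred linear error, E(eps) = 0 but Av{E(eps | Z)} = -1/18 != 0,
# so E(eps | Z) is not U-class and the weighted mean is biased toward -1/18.
eps_not_u = (z - 0.5 - 1.0 / 18.0) + eta
est_not_u = np.mean(eps_not_u / f)

assert abs(est_u) < 0.005                    # approximately 0: U-class case
assert abs(est_not_u + 1.0 / 18.0) < 0.005   # approximately -1/18: non-U case
```

The density is bounded away from zero here, so the inverse-density weights are well behaved; with densities vanishing at the boundary the weighted estimator can have infinite variance, which is worth keeping in mind when applying the condition in practice.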
Next, recall that C1 stipulates that a sampling process has preserved the support of a random variable and \(Y_{\delta}\) is a _sampled_ random variable, i.e., its probability distribution is a sample distribution. The next proposition is useful to know since it establishes that preserving the intrinsic linear bias of a population distribution during sampling is necessary and sufficient for population moment identification, provided the support has been preserved. **Proposition 11**.: _Denote a random variable \(Y_{\delta}\) and let \(g\) be some measurable function such that C1 holds. Then \(E_{\delta}\{g(Y)\}=E\{g(Y)\}\) if and only if \(\sigma_{g(Y_{\delta}),1_{\delta}f_{\delta}^{-1}(Y)}=\sigma_{g(Y),1_{\delta}f^{-1}(Y)}\)._ Proof.: Suppose C1. Then \(\mathrm{Av}\{g(Y_{\delta})\}=\mathrm{Av}\{g(Y)\}\). \((\leftarrow)\) Suppose \(\sigma_{g(Y_{\delta}),1_{\delta}f_{\delta}^{-1}(Y)}=\sigma_{g(Y),1_{\delta}f^{-1}(Y)}\). Then: \[\mathrm{Av}\{g(Y_{\delta})\} =\mathrm{Av}\{g(Y)\}\implies\] \[\mathrm{E}_{\delta}\{g(Y)\}+R^{-1}\sigma_{g(Y_{\delta}),1_{\delta}f_{\delta}^{-1}(Y)} =\mathrm{E}\{g(Y)\}+R^{-1}\sigma_{g(Y),1_{\delta}f^{-1}(Y)}\implies\] \[\mathrm{E}_{\delta}\{g(Y)\} =\mathrm{E}\{g(Y)\}+R^{-1}\sigma_{g(Y),1_{\delta}f^{-1}(Y)}-R^{-1}\sigma_{g(Y_{\delta}),1_{\delta}f_{\delta}^{-1}(Y)}\implies\] \[\mathrm{E}_{\delta}\{g(Y)\} =\mathrm{E}\{g(Y)\}+0\] \((\rightarrow)\) This case is analogous in logic and is thus omitted. An easy corollary is that \(\sigma_{g(Y),\mathrm{E}_{\delta}(\epsilon|g(Y))}=0\) if and only if the intrinsic linear bias of a distribution is preserved, conditional on the support of the distribution remaining intact under sampling. Since many different sample distributions of \(Y\) exist that meet these conditions, it substantiates a much milder condition for unbiased moment estimation than non-informative sampling. 
## 3 Plug-ins and Model Diagnostics Concentration inequalities employed for inference in the manuscript ultimately require \(R_{i}=M_{i}-m_{i}\) to be known or estimable. It is known that \(Y_{(n)}-Y_{(1)}\) (the maximum minus the minimum order statistic) is a consistent estimator when statistical dependencies are local or restricted, i.e., when some version of weak mixing applies, or when outcome variables become approximately independent w.r.t. some conception of distance [1]. We do not suppose this here. Hence, we require a different justification for using extreme order statistics. The example analysis of the paper made use of stochastic linear regression w.r.t. the model \(\mathbf{Y}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}\). As in the paper, say \((\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}=\mathbf{W}\) and \(\mathbf{B}=\mathbf{W}\mathbf{Y}\) is an estimator of \(\boldsymbol{\beta}\) s.t. \(\mathrm{E}(\mathbf{W}\boldsymbol{\epsilon})\rightarrow\mathbf{0}\) as \(n\) grows arbitrarily large. Also consider \(\{W_{s,i}\hat{e}_{i}\}_{i\in I}\), where \(I=\{1,\ldots,n\}\), \(\hat{e}_{i}=Y_{i}-\mathbf{X}_{i}\mathbf{B}\), and \(W_{s,i}\) is cell \((s,i)\) of \(\mathbf{W}\). Ultimately, we are interested in the properties of \(\{W_{s,i}\epsilon_{i}\}_{i\in I}\). When an arbitrary \(\mu_{s,n}=o(n)\), \(\mathbf{B}\overset{p}{\rightarrow}\boldsymbol{\beta}\) and hence \(\hat{e}_{i}\overset{p}{\rightarrow}\epsilon_{i}\) as \(n\rightarrow\infty\). When this is the case, \(W_{s,i}\hat{e}_{i}=W_{s,i}\epsilon_{i}+o_{p}(1)\) s.t. \(n\cdot o_{p}(1)\to 0\) as \(n\rightarrow\infty\). Since we are assuming strict stationarity, we can let \(s\) be arbitrary and say \(Z_{i}=W_{s,i}\epsilon_{i}\) to reason more generally. The next proposition offers an asymptotic justification of using \(Z_{(n)}\) WLOG. **Proposition 12**.: _Consider \(\{Z_{i}\}_{i\in I}\) s.t. 
each \(Z_{i}\sim F_{i}(z)\) is a continuous random variable on support \(\mathcal{S}_{i}\) and \(\max_{i\in I}(\mathcal{S}_{i})=M\). Moreover, say \(Z_{(n)}=\max(Z_{1},\ldots,Z_{n})\) has a density supported on \(\mathcal{S}_{n}\subseteq\mathcal{S}=[-M,M]\) s.t. \(\max(S_{n})=M\) for \(\forall n\in\mathbb{N}\). Now, let \(Z_{\mathcal{A}_{i}}\sim F_{i}(z|Z_{i-1}\leq z,Z_{i-2}\leq z,\ldots,Z_{1}\leq z)\) be well-defined under the convention that \(Z_{\mathcal{A}_{1}}\sim F_{1}(z)\). Denote the sequence of conditional CDFs \((F_{n}[z|Z_{n-1}\leq z,\ldots,Z_{1}\leq z],F_{n-1}[z|Z_{n-2}\leq z,\ldots,Z_{1} \leq z],\ldots,F_{2}[z|Z_{1}\leq z],F_{1}[z])\) as \(\mathcal{F}=(F_{i}\{z|\mathcal{A}_{i}\})_{i\in I}\). Suppose (1) there exists an absolutely continuous random variable \(Q\) on support \(\mathcal{S}_{Q}\supseteq\mathcal{S}\) s.t. \(\max(\mathcal{S}_{Q})=M\) and, for all increasing functions \(g\), \(\mathrm{E}[g(Q)]\leq\mathrm{E}[g(Z_{\mathcal{A}_{i}})]\) for at least \(k(n)\in\mathbb{N}\) of the conditional distributions in the sequence \(\mathcal{F}\), or (2), it is true that \(1_{z<M}F_{i}(z|\mathcal{A}_{i})<1\) for all \(z\in\mathcal{S}\) for at least \(k(n)\in\mathbb{N}\) of the distribution functions of \(\mathcal{F}\). Finally, suppose that \(k(n)\rightarrow\infty\) as \(n\rightarrow\infty\). Then \(Z_{(n)}\overset{p}{\rightarrow}M\) as \(n\rightarrow\infty\)._ Proof.: First, note for any random variable \(X\) on \(\mathcal{S}_{X}=[m,M]\) s.t. \(-M\leq m\), it is true that \(\int_{-M}^{M}\{2^{-1}-F(x)\}dx=\mathrm{E}X\). Now, note that \(F(Z_{(n)}\leq z)=\mathrm{Pr}(Z_{1}\leq z,Z_{2}\leq z,\ldots,Z_{n}\leq z)=F_{n} (z|Z_{n-1}\leq z,\ldots,Z_{1}\leq z)\cdot F_{n-1}(z|Z_{n-2}\leq z,\ldots,Z_{1} \leq z)\cdots F_{2}(z|Z_{1}\leq z)\cdot F_{1}(z)\) for an arbitrary \(z\in\mathcal{S}\). We start with (1). Say there exists a random variable \(Q\sim F_{Q}(z)\) on \(\mathcal{S}_{Q}\supseteq\mathcal{S}\) s.t. 
\(\max(\mathcal{S}_{Q})=M\). Furthermore, assume that \(\mathrm{E}\{g(Q)\}\leq\mathrm{E}\{g(Z_{\mathcal{A}_{i}})\}\) for all increasing functions \(g\) for at least \(k(n)\in\mathbb{N}\) conditional distributions in the sequence \(\mathcal{F}\). We collect the indexes of these conditional distributions into a set \(W\subseteq I\). It is then implied that \(F_{w}(z|Z_{w-1}\leq z,\ldots,Z_{1}\leq z)\leq F_{Q}(z)\) for all \(z\in\mathcal{S}_{Q}\) and \(w\in W\) since \(\mathrm{E}\{g(Q)\}\leq\mathrm{E}\{g(Z_{\mathcal{A}_{w}})\}\) for all increasing functions \(g\) if and only if \(Z_{\mathcal{A}_{w}}\) first-order stochastically dominates \(Q\). Hence, for all \(z\in\mathcal{S}_{Q}\): \[F(Z_{(n)}\leq z) =F_{n}(z|Z_{n-1}\leq z,\ldots,Z_{1}\leq z)\cdot F_{n-1}(z|Z_{n-2} \leq z,\ldots,Z_{1}\leq z)\cdots F_{2}(z|Z_{1}\leq z)\cdot F_{1}(z)\] \[\leq 1^{n-k(n)}\cdot\{F_{Q}(z)\}^{k(n)}\] \[\leq\{F_{Q}(z)\}^{k(n)}\] From the above, we know that \(\mathrm{E}Z_{(n)}=\int_{-M}^{M}\{2^{-1}-F(Z_{(n)}\leq z)\}dz\). Since \(F(Z_{(n)}\leq z)\leq\{F_{Q}(z)\}^{k(n)}\) for all \(z\in\mathcal{S}\), it follows that \(2^{-1}-\{F_{Q}(z)\}^{k(n)}\leq 2^{-1}-F(Z_{(n)}\leq z)\) for all \(z\in\mathcal{S}\) and: \[\int_{-M}^{M}\{2^{-1}-\{F_{Q}(z)\}^{k(n)}\}dz\leq\mathrm{E}Z_{(n)}\leq M\] Since \(1_{z<M}\{F_{Q}(z)\}^{k(n)}\) is discontinuous only at \(\{M\}\), which has measure zero, \(\int_{-M}^{M}1_{z<M}\{F_{Q}(z)\}^{k(n)}dz=\int_{-M}^{M}\{F_{Q}(z)\}^{k(n)}dz\) by the Riemann-Lebesgue theorem. Moreover, observe that \(1_{z<M}\{F_{Q}(z)\}^{k(n)}\to 0\) uniformly on \(\mathcal{S}\) as \(n\rightarrow\infty\) and hence \(k(n)\rightarrow\infty\) and it is also true that \(\int_{-M}^{M}1_{z<M}\{F_{Q}(z)\}^{k(n)}dz<\infty\) for \(\forall n\). Note that uniform convergence follows since \(1_{z<M}\{F_{Q}(z)\}^{k(n)}=\{1_{z<M}F_{Q}(z)\}^{k(n)}\) and \(1_{z<M}F_{Q}(z)<1\) for all \(z\) considered. 
Thus, we can interchange the order of limits and integration and: \[M-\int_{-M}^{M}\lim_{n\rightarrow\infty}1_{z<M}\{F_{Q}(z)\}^{k(n)}dz\leq\lim_{n \rightarrow\infty}\mathrm{E}Z_{(n)}\leq M\] Finally, this of course implies the following statement: \[M-0=M\leq\lim_{n\rightarrow\infty}\mathrm{E}Z_{(n)}\leq M\] Hence, by the Squeeze theorem, \(\mathrm{E}Z_{(n)}\to M\) as \(n\rightarrow\infty\). From here, since we know that \(\max(\mathcal{S}_{n})=M\) for all \(n\in\mathbb{N}\), and \(\mathrm{E}Z_{(n)}\to M\) as \(n\rightarrow\infty\), it is then implied that \(F(Z_{(n)}\leq z)\) converges to a distribution function s.t. \(\mathrm{Pr}(Z_{(n)}<M)=0\) and \(\mathrm{Pr}(Z_{(n)}\geq M)=1\) on limiting support \(\mathcal{S}_{n}=\{M\}\). Hence, \(Z_{(n)}\overset{d}{\rightarrow}M\), which implies that \(Z_{(n)}\overset{p}{\rightarrow}M\) since \(M\) is a constant. We now commence by (2). Again construct the set \(W\subseteq I\) s.t. for an arbitrary \(w\in W\), \(1_{z<M}F_{w}(z|\mathcal{A}_{w})<1\) for all \(z\in\mathcal{S}\). Furthermore, say \(F_{*}(z_{*}|\mathcal{A}_{*})=\underset{w\in W,z\in\mathcal{S}}{\max}\{1_{z<M}F_ {w}(z|\mathcal{A}_{w})\}\). Then \(1_{z<M}\cdot F(Z_{(n)}\leq z)\leq\{F_{*}(z_{*}|\mathcal{A}_{*})\}^{k(n)}\) and the proof proceeds exactly the same. Pertinently, the condition that a dominating expectation exists for all increasing functions \(g\) is relatively mild. This is fulfilled, for instance, if there exists some (possibly unknown) random variable \(Q\) on the specified support s.t. \(Q\leq Z_{\mathcal{A}_{k}}\) with probability one for a sufficient amount of the conditional functions. Stated informally, this condition, or the second one, will be fulfilled when the dependencies that are present don't eliminate \(M\) from the support of at least \(k\) of the conditional distributions and \(k\) still goes to infinity with sample size. 
This is a very mild condition because it can still be true even when every single outcome variable is statistically dependent. Pertinently, since \(Z_{(1)}=-\max(-Z_{1},\ldots,-Z_{n})\), Proposition 12 can automatically be applied to sample minimums as well. This is also important since it establishes that \(2^{-1}(Z_{(1)}+Z_{(n)})\) is a consistent estimator of the mid-range. If \(Z\in\mathcal{U}\), it is also a consistent estimator of E\(Z\), and possibly even when \(Z\) is not. For our paper, Proposition 12 can be applied to \(\hat{Z}_{i}=W_{s,i}\hat{e}_{i}\) under the auspices of its additional premises when it is also true that \(W_{s,i}\hat{e}_{i}\) converges in probability to \(W_{s,i}\epsilon_{i}\). This approach also works for fixed regressions with very little difference. For this case, we simply require that \(\mu_{n}=o(n)\) for \(\{\epsilon_{i}\}_{i\in I}\) and for Proposition 12 to apply to \(\{Y_{i}\}_{i\in I}\) w.r.t. their marginal distribution, and for the model to be validly specified for its first moments. Lastly, we made use of the empirical CDF for checking the \(\mathcal{U}\) status of \(W_{s,i}\hat{e}_{i}\). Since the empirical CDF is an arithmetic average of indicator functions, we actually require \(\mu_{D}=o(n)\) for consistency. This is because two indicator variables are uncorrelated if and only if they are independent. Consistency follows from the propositions in the main document. This does not mean that the empirical CDF is useless when \(\mu_{D}=O(n)\). In this case, each estimator is still unbiased. Therefore, although it is not likely to be consistent, it can still be used to gauge the veracity of the assumption. If a \(\mathcal{U}\) shape is visually present, this is still adequate evidence that the assumption is at least approximately true. This is because the probability of observing such a shape should be fairly low when the assumption is not met. 
In a situation where convergence in probability does not hold, there should naturally be more variability present. In short, we would expect to see shapes that deviate from our expectation more often than not.
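The behavior described by Proposition 12 and the midrange discussion can be illustrated with a small simulation in Python with NumPy. This is a hypothetical example, not from the manuscript: the moving-average construction is one simple dependence structure (every adjacent pair of outcomes is correlated) under which the maximum still retains the support endpoint, so the sample extremes approach the endpoints and the midrange \(2^{-1}(Z_{(1)}+Z_{(n)})\) tracks \(\mathrm{Av}(Z)\), which here equals \(\mathrm{E}Z\) because the triangular-shaped \(Z\) is symmetric and hence \(\mathcal{U}\)-class.

```python
import numpy as np

# Hypothetical simulation (not from the manuscript) in the spirit of
# Proposition 12: bounded, locally dependent outcomes whose extremes
# still converge to the support endpoints [-1, 1].
rng = np.random.default_rng(1)
n = 200_000

v = rng.uniform(-1.0, 1.0, size=n + 1)
z = 0.5 * (v[:-1] + v[1:])        # dependent moving average; support [-1, 1]

z_min, z_max = z.min(), z.max()
midrange = 0.5 * (z_min + z_max)  # consistent for Av(Z), and for EZ here

assert z_max > 0.95 and z_min < -0.95   # extremes approach the endpoints
assert abs(midrange) < 0.05             # midrange near Av(Z) = EZ = 0
```

Note that the conditional distributions here never exclude the endpoints, which is the informal condition stated above; a common-shock construction such as \(Z_i = 0.5S + 0.5V_i\) with a single shared \(S\) would violate it, and the maximum would then converge to \(0.5S + 0.5\) rather than to the endpoint.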
2308.13109
Magneto-optical anisotropies of 2D antiferromagnetic MPX$_3$ from first principles
Here we systematically investigate the impact of the spin direction on the electronic and optical properties of transition metal phosphorus trichalcogenides (MPX$_3$, M=Mn, Ni, Fe; X=S, Se) exhibiting various antiferromagnetic arrangements within the 2D limit. Our analysis based on the density functional theory and versatile formalism of Bethe-Salpeter equation reveals larger exciton binding energies for MPS$_3$ (up to 1.1 eV in air) than MPSe$_3$ (up to 0.8 eV in air), exceeding the values of transition metal dichalcogenides (TMDs). For the (Mn,Fe)PX$_3$ we determine the optically active band edge transitions, revealing that they are sensitive to in-plane magnetic order, irrespective of the type of chalcogen atom. We predict the anisotropic effective masses and the type of linear polarization as important fingerprints for sensing the type of magnetic AFM arrangements. Furthermore, we identify the spin-orientation-dependent features such as the valley splitting, the effective mass of holes, and the exciton binding energy. In particular, we demonstrate that for MnPX$_3$ (X=S, Se) a pair of nonequivalent K+ and K- points exists yielding the valley splittings that strongly depend on the direction of AFM aligned spins. Notably, for the out-of-plane direction of spins, two distinct peaks are expected to be visible below the absorption onset, whereas one peak should emerge for the in-plane configuration of spins. These spin-dependent features provide an insight into spin flop transitions of 2D materials. Finally, we propose a strategy for how the spin valley polarization can be realized in 2D AFM within honeycomb lattice.
Miłosz Rybak, Paulo E. Faria Junior, Tomasz Woźniak, Paweł Scharoch, Jaroslav Fabian, Magdalena Birowska
2023-08-24T22:55:23Z
http://arxiv.org/abs/2308.13109v1
# Magneto-optical anisotropies of 2D antiferromagnetic MPX\({}_{3}\) from first principles ###### Abstract Here we systematically investigate the impact of the spin direction on the electronic and optical properties of transition metal phosphorus trichalcogenides (MPX\({}_{3}\), M=Mn, Ni, Fe; X=S, Se) exhibiting various antiferromagnetic arrangements within the 2D limit. Our analysis based on the density functional theory and versatile formalism of Bethe-Salpeter equation reveals larger exciton binding energies for MPS\({}_{3}\) (up to 1.1 eV in air) than MPSe\({}_{3}\) (up to 0.8 eV in air), exceeding the values of transition metal dichalcogenides (TMDs). For the (Mn,Fe)PX\({}_{3}\) we determine the optically active band edge transitions, revealing that they are sensitive to in-plane magnetic order, irrespective of the type of chalcogen atom. We predict the anisotropic effective masses and the type of linear polarization as important fingerprints for sensing the type of magnetic AFM arrangements. Furthermore, we identify the spin-orientation-dependent features such as the valley splitting, the effective mass of holes, and the exciton binding energy. In particular, we demonstrate that for MnPX\({}_{3}\) (X=S, Se) a pair of nonequivalent K+ and K- points exists yielding the valley splittings that strongly depend on the direction of AFM aligned spins. Notably, for the out-of-plane direction of spins, two distinct peaks are expected to be visible below the absorption onset, whereas one peak should emerge for the in-plane configuration of spins. These spin-dependent features provide an insight into spin flop transitions of 2D materials. Finally, we propose a strategy for how the spin valley polarization can be realized in 2D AFM within honeycomb lattice. 
## I Introduction The subtle interplay between the spin, charge, orbital, and lattice degrees of freedom, driven by electron correlation, is one of the key aspects of condensed matter physics behind novel electronic phases of matter and intriguing physical phenomena. In particular, the electronic properties can be modified whenever the spin direction is altered, as spin-orbit coupling (SOC) depends on the spin direction. For instance, spin-valley coupling serves as the fundamental mechanism in optically controlled valley polarization [1] and in spin-Hall and valley-Hall effects [2]. In contrast to most findings in non-magnetic 2D materials [3; 4; 5], where pseudo-spins are involved, 2D magnets exhibit active carrier spins, enabling studies of magnetism in reduced dimensions [6]. Unlike ferromagnets (FMs), antiferromagnets (AFMs) are commonly found in nature and are permitted in every magnetic symmetry group; however, they are less utilized than FMs [7]. Currently, AFM materials are considered promising candidates for future spintronic applications, due to unique properties including insensitivity to external magnetic fields, lack of stray fields, and ultrafast spin dynamics in the terahertz regime [8]. The spontaneous long-range order of microscopic magnetic moments results in zero net magnetization, which makes AFMs insensitive to external magnetic fields. Thus, controlling the AFM state requires very high magnetic fields and demands unconventional means of detection [7]. Many properties, such as optical, electronic, and vibrational ones, rely on the magnetic ordering [9; 10; 11], as well as on the magnetic moment orientation [12]. In particular, a giant impact of the spin direction on the band structure has recently been demonstrated for the 2D ferromagnetic material CrI\({}_{3}\) [13]. Although spin-direction-dependent properties have been reported for conventional and layered FMs [14; 15], research on 2D AFM materials remains scarce [6; 7; 16; 17]. 
One essential feature for manipulating the AFM state is the magnetocrystalline anisotropy (MAE), which is evident in spin-flip or spin-flop transitions. The latter requires a relatively weak MAE and has been reported in the literature for AFM compounds exhibiting various electronic states, such as topological insulators [18], layered materials [19], and conventional semiconductors [20]. In addition, experimental techniques that probe magnetic phases indirectly, such as anisotropic magnetoresistance (AMR), the anomalous Hall effect (AHE), or second harmonic generation (SHG) [7; 21; 22], are employed to detect the AFM orientation in 2D materials. Interestingly, controlling anisotropic phenomena across magnetic phase transitions in layered magnets provides crucial understanding of fundamental magnetism in reduced dimensions. The present work is motivated by the lack of systematic studies regarding the impact of the AFM orientation of magnetic moments on the optoelectronic properties of 2D materials. Hence, in this paper we pay particular attention to pinpointing the magnetic fingerprints in indirect properties that can reveal the AFM ordering. We conduct a theoretical analysis based on density functional theory (DFT) and the effective Bethe-Salpeter equation (BSE) to identify various spin-direction features, which can be utilized in spin-processing functionalities. This study focuses on the spin-angle evolution of the carrier effective masses, optically active band edge transitions, and exciton binding energies, which remain almost unexplored in the context of van der Waals (vdW) AFM crystals. Here, we present a systematic study of the impact of spin orientation on the electronic and optical properties of a series of MPX\({}_{3}\) monolayers, where M= Mn, Ni, Fe and X= S, Se, assuming a collinear arrangement of the magnetic moments. 
The results are presented as follows: first, we examine the magnetic ground state, determining the easy and hard axes of magnetization at the level of the PBE+U+SOC approach. Next, we consider electronic features such as band extrema, effective masses, and valley splitting, together with excitonic properties including the exciton binding energy. Although the electronic properties have been widely reported for particular spin arrangements and directions, the impact of the orientation of the collinearly ordered spins on the opto-electronic properties of MPX\({}_{3}\) is largely missing. Finally, the band edge excitons are systematically predicted for this class of materials for the first time.

## II Computational details

The calculations were performed in the framework of density functional theory (DFT) using the generalized gradient approximation in the PBE flavor [23], as implemented in the VASP software [24]. The ion-electron interactions were described by the projector augmented wave (PAW) method [25]. The plane-wave basis cutoff and the \(\Gamma\)-centered Monkhorst-Pack [26] k-point grid were set to 500 eV and \(10\times 6\times 1\), respectively. A Gaussian smearing of 0.05 eV was employed for the Brillouin zone (BZ) integration. The interlayer vdW forces were treated within the Grimme scheme using the D3 correction [27]. A vacuum layer of 20 Å thickness was added to mimic the isolated monolayer. Most of the results were obtained using the PBE+U method based on Dudarev's approach, with the effective on-site Hubbard U parameter (U\({}_{eff}\) = U\(-\)J, where J = 1 eV) applied to the \(3d\) orbitals. Note that U\({}_{eff}\) is hereafter denoted as U. To check its impact on various properties, two values, U = 3 and 5 eV, were employed. The SOC within the non-collinear treatment of magnetism was taken into account on top of the PBE+U scheme. The atomic positions and the unit cell were fully optimized within the PBE+U approach.
In order to predict the magnetic easy and hard axes, the spins of the magnetic ions were rotated from the out-of-plane to the in-plane direction with a polar angle step \(\Delta\Theta=15^{\circ}\). For every \(\Theta\) the atomic positions and the lattice parameters were fully optimized. The effective masses of the carriers at the band edges were examined using the finite difference method [28]. The direct interband momentum matrix elements were computed from the wave function derivatives using density functional perturbation theory (DFPT) [29] in order to determine the optically active transitions, as discussed in [30]. The macroscopic high-frequency dielectric constants \(\epsilon_{\infty}\) were obtained using DFPT, both in the independent-particle approach without local field effects (IP) and with local field effects included (DFT-TD) [29]. Exciton calculations were performed within the effective BSE formalism [31; 32; 33; 9; 34; 35] using the effective masses and dielectric constants obtained from the DFT calculations. We solve the effective BSE numerically using the parameters given in Ref. [9].

## III Results

### Magnetic ground state \(\&\) magnetic easy axis

First, we examine the impact of the rotation of the spins on the energy profile to determine the magnetic ground state and the magnetic easy/hard axes for all employed MPX\({}_{3}\) systems. We assume a collinear alignment of the spins, with the rotation angle \(\Theta\) (see Fig. 1 (a)) varied from the out-of-plane to the in-plane configuration. The rotation of the spins within the basal plane yields up to two orders of magnitude smaller energy changes compared to the out-of-plane rotations; hence in-plane rotations are not considered further. The magnetic ground state exhibits the antiferromagnetic Néel (AFM-N) arrangement of the spins for M=Mn and the antiferromagnetic zigzag (AFM-z) arrangement for M=Ni, Fe (see Fig. 1), in line with other reports [36; 9; 37; 30].
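The finite-difference extraction of the effective masses described above can be sketched in a few lines. The following is a minimal illustration assuming a parabolic band near the extremum; the sampled E(k) values and the step size are synthetic, not taken from the calculations.

```python
import numpy as np

HBAR2_OVER_ME = 7.6199682  # hbar^2 / m_e in eV * Angstrom^2


def effective_mass(energies, dk):
    """Effective mass (in units of the free-electron mass) from three
    band energies E(k0-dk), E(k0), E(k0+dk) via the central
    finite-difference second derivative, assuming a parabolic band."""
    e_minus, e_0, e_plus = energies
    d2e_dk2 = (e_plus - 2.0 * e_0 + e_minus) / dk**2  # eV * Angstrom^2
    return HBAR2_OVER_ME / d2e_dk2


# Synthetic parabolic band with m* = 0.5 m_e: E(k) = E0 + hbar^2 k^2 / (2 * 0.5 m_e)
dk = 0.01  # Angstrom^-1
k = np.array([-dk, 0.0, dk])
E = 2.0 + HBAR2_OVER_ME * k**2 / (2.0 * 0.5)
print(effective_mass(E, dk))  # recovers m* = 0.5
```

In practice the band energies at the three k-points come from the DFT eigenvalues; denser stencils can be used when the band is not strictly parabolic.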
The MPX\({}_{3}\) materials are reported to be robust antiferromagnets [10], and even a high concentration of substitutional dopants cannot alter the magnetic ordering of the host [38; 10]. Additionally, changing the angle of spin alignment \(\Theta\) requires at least one order of magnitude less energy than changing the AFM ordering itself (at least tens of meV per magnetic ion). The computed energy difference (\(\Delta E\)) between the out-of-plane and in-plane directions can be one order of magnitude greater for Se than for S compounds (see Fig. 1 (b-d)), as expected for heavier atoms exhibiting larger SOC [38]. Surprisingly, FePS\({}_{3}\) exhibits a larger MAE than FePSe\({}_{3}\) (see Fig. 1(e)). The value of U affects neither the type of AFM ordering nor the direction of the magnetic easy axis. However, \(\Delta\)E does depend on the Hubbard U parameter: smaller values are generally obtained for larger U, except for the FePS\({}_{3}\) monolayer (see Fig. S2). For the employed materials, an increase in the effective U results in lattice constants larger by up to 1.5% and in larger magnetic moments (see Table S2). Let us now consider the easy axes of magnetization, which are presented in Fig. 1(f-h) for all employed structures, with the hard axes predicted to be orthogonal to the corresponding easy axis/plane. In particular, for FePX\({}_{3}\) the magnetic easy axis points along the \(c\) crystallographic direction, as reported in previous studies [36, 39, 40, 41, 42, 43]. Hence, such systems are considered Ising-type antiferromagnets with a strong uniaxial magnetic anisotropy. For the MnPX\({}_{3}\) compounds, we predict that the easy plane coincides with the monolayer plane, unlike a recent experimental report for MnPS\({}_{3}\), which demonstrated that the spins are slightly tilted from the \(c\)-axis [44].
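If the angular dependence of the energy is dominated by a single uniaxial term, \(\Delta E(\Theta)\approx K\sin^{2}\Theta\) (a modelling assumption on our part, not a statement from the calculations), the MAE can be extracted from the 15° scan by a linear least-squares fit. The anisotropy constant below is an illustrative value of the order shown in Fig. 1(e), and the profile is synthetic.

```python
import numpy as np

theta = np.deg2rad(np.arange(0, 91, 15))  # polar angle scan with a 15 deg step
K_true = 0.315                            # meV per magnetic ion, illustrative
dE = K_true * np.sin(theta) ** 2          # synthetic uniaxial energy profile

# dE = K * sin^2(theta) is linear in x = sin^2(theta), so the least-squares
# estimate of K through the origin is sum(x * dE) / sum(x * x).
x = np.sin(theta) ** 2
K_fit = np.sum(x * dE) / np.sum(x * x)
print(K_fit)  # recovers 0.315 meV per magnetic ion for noiseless data
```

With real DFT total energies the fit residuals indicate whether higher-order anisotropy terms are needed.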
The direction of the ordered spins in the MPX\({}_{3}\) compounds results from the interplay between the single-ion anisotropy and the magnetic dipolar interactions (MDIs), as discussed in a recent publication [38]. In addition, the spin-orbit splitting is negligible for MnPS\({}_{3}\), which manifests itself as the lowest MAE among all employed structures (see Fig. 1(b,e)). Thus, the MDIs, generally regarded as weak [45], might be decisive in determining the spin direction of MnPS\({}_{3}\). Note that the MDIs are not accounted for in our PBE+U+SOC approach and will be considered elsewhere. On the other hand, in the case of MnPSe\({}_{3}\), the MDIs can be neglected due to the larger lattice parameters and SOC. Hence, the magnetic easy axis of MnPSe\({}_{3}\) coincides with the monolayer plane, in agreement with other reports in the literature [41, 43, 46], and is confirmed by the spin flop transition reported upon nonmagnetic substitution in MnPS\({}_{3}\) thin films [38]. Regarding NiPS\({}_{3}\), the in-plane orientation of the magnetic moments is preferred, and the magnetic ordering can be described by the XY Heisenberg Hamiltonian [39; 40; 47; 48].

Figure 1: (a) The rotation angle \(\Theta\) of collinearly aligned AFM spins directed towards the (001) plane. (b-d) Polar plots of the energy difference \(\Delta E\) between a particular spin direction and the lowest-energy configuration for Hubbard U=5 eV. (e) The magnetocrystalline anisotropy energy (MAE), defined here as the difference between the energies of spins parallel to the hard (highest-energy) and easy (lowest-energy) axes of magnetization. (f-h) Structural arrangement of the spins in the magnetic ground state for all employed systems; the spin directions are parallel to the easy axis / plane of magnetization. The \(\Delta E\) and the MAE are given in meV per magnetic ion.
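The claim that the MDIs can compete with the smallest MAE values is easy to check with an order-of-magnitude estimate, \(E\sim(\mu_{0}/4\pi)\,m^{2}/r^{3}\) for two point dipoles. The moment and spacing below are illustrative round numbers (a high-spin Mn\({}^{2+}\) ion carries roughly 5 \(\mu_{B}\)); angular factors and lattice sums are neglected.

```python
MU_0_OVER_4PI = 1e-7         # mu_0 / (4*pi) in T*m/A (SI)
MU_B = 9.274009994e-24       # Bohr magneton in J/T
EV = 1.602176634e-19         # J per eV


def dipolar_energy_mev(moment_mu_b, distance_angstrom):
    """Order-of-magnitude dipole-dipole coupling between two point
    magnetic moments, E ~ (mu0/4pi) * m^2 / r^3, returned in meV.
    Angular factors and lattice sums are neglected; inputs are
    illustrative, not taken from the calculations."""
    m = moment_mu_b * MU_B
    r = distance_angstrom * 1e-10
    return MU_0_OVER_4PI * m**2 / r**3 / EV * 1e3


# ~5 mu_B moments separated by a hypothetical 3.5 Angstrom give a coupling
# of a few hundredths of meV, i.e. the scale of the smallest MAE in Fig. 1(e).
print(dipolar_energy_mev(5.0, 3.5))
```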
Our results predict the easy axis of magnetization within the monolayer plane, without the deviation towards the z direction reported recently for the monolayer [49]. However, we do not exclude the possibility that the inclusion of the MDIs can facilitate the rotation of the spins towards the out-of-plane direction. For the case of NiPSe\({}_{3}\), we determined the easy axis of magnetization to be along the out-of-plane direction, in line with a recent report [50]. Similar results regarding the easy axis/plane of magnetization have been reported for the corresponding bulk systems [38].

### Electronic properties

#### Electronic band gaps

Our results reveal that the structures containing Mn and Fe exhibit direct band gaps located at the K high-symmetry point (see Fig. 2 (a)), in line with previous reports [51; 9; 37], while the NiPS\({}_{3}\) system is an indirect semiconductor, as confirmed by experiments [47]. In particular, for FePS\({}_{3}\) and NiPS\({}_{3}\) there is a strong contribution of the \(3d\) states to the bands in the vicinity of the Fermi level. Hence, for these systems, significant changes of the positions of the band extrema (VBM, CBM) and of their curvatures with the Hubbard U parameter are visible (see Tables S2 and S3 and the discussions therein).

#### U dependence

For all employed compounds except MnPX\({}_{3}\), the U parameter significantly affects the curvature of the conduction bands, as reported previously [37], and hence the effective masses of the electrons. Therefore, all features are examined assuming two Hubbard U parameters (U = 3, 5 eV). In particular, large effective masses of electrons are exhibited by the Ni-containing systems, indicating a flat character of the conduction bands, which can be further enhanced by adopting larger values of U (see Table S3). By applying the Hubbard U to the 3d states, one can shift the 3d states away from the Fermi level.
In particular, for the MnPX\({}_{3}\) structures the conduction bands are mainly built from p states, and further shifting the 3d states affects neither the band curvature nor the band gap value, whereas for the FePX\({}_{3}\) compounds the conduction bands consist mainly of 3d states, so a further enlargement of U strongly impacts the electronic features around the band edges. Additionally, our results corroborate the previously reported strong dependence of the fundamental energy band gap on the Hubbard U (see Table S2) [37].

#### Effective masses

Among all employed monolayers, the heaviest effective masses are obtained for NiPX\({}_{3}\), whereas the lightest are found for MnPS\({}_{3}\) (see Table S3 and Fig. S3). Generally, electrons exhibit lower effective masses than holes, and the Se-containing structures lower masses than the corresponding S systems (by about 30%), except for the NiPX\({}_{3}\) structures, where these trends are reversed. Additionally, anisotropic in-plane components of the effective masses (m\({}_{1}\neq\) m\({}_{2}\)) are exhibited by the monolayers with the AFM-z ground state, whereas for the MnPX\({}_{3}\) monolayers with AFM-N ordering the in-plane components are isotropic (m\({}_{1}\)=m\({}_{2}\)). Note that the AFM-z magnetic arrangement breaks the hexagonal symmetry, as it consists of spins ferromagnetically aligned along zigzag chains (see Fig. 1(g)), whereas the AFM-N type of magnetic order is commensurate with the structural symmetry. Hence, the anisotropic effective masses, and thus anisotropic transport properties, can be regarded as a magnetic marker distinguishing different types of antiferromagnetic ordering within the hexagonal lattice. In addition, the inclusion of the SOC does not affect the band curvatures of the examined systems, except for MnPSe\({}_{3}\), where slight changes appear at the K+ and K- valleys (see Table S4).
#### Spin-orientation-dependent electronic features in MPX\({}_{3}\) (M=Mn, Fe; X=S,Se)

Now, let us take a closer look at the **band edges** of the MLs exhibiting direct transitions. The electronic features are collected in Table S3. Note that in the absence of the SOC the band extrema are doubly degenerate for all employed systems. In the structures containing Mn, the SOC preserves the spin degeneracy of the band extrema (VBM, CBM) while causing a valley splitting at +K and -K, as expected for a honeycomb lattice with the AFM-N arrangement of the spins once SOC is included [52]. Namely, the SOC preserves the spin degeneracy but leads to a renormalization of the valleys (+K and -K are not equivalent) for the AFM-N magnetic ground state. The size of the valley splitting (\(\Delta\)) depends on the chalcogen atom (larger values for Se), the Hubbard U, and the spin direction (see Fig. 2 and Tables I, S6). In particular, the largest valley splitting is attained for the out-of-plane direction of the spins (\(\Delta^{CB}\) = 39 meV for MnPSe\({}_{3}\) and U=3 eV) and for the lower value of U, as presented in Fig. 2(c), whereas no polarity of the valleys is observed for the in-plane direction of the spins. In other words, the rotation of the spins towards the out-of-plane direction enhances the valley splitting. Hence, the band gap of MnPSe\({}_{3}\) can be changed by up to 35 meV, as presented in Fig. 2(b, c), whereas for the rest of the compounds only minor changes are observed (see Fig. 2(b)). Additionally, the effective masses of holes in MnPSe\({}_{3}\) are sensitive to the direction of the spins, as presented in Fig. 2(d). Interestingly, in the case of MnPS\({}_{3}\), the band gap changes its character from direct to indirect when the spins are rotated from the out-of-plane to the in-plane direction.
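The valley splittings discussed above follow directly from the band extrema at the two valleys, \(\Delta^{VB}=E_{K-}^{VB}-E_{K+}^{VB}\) and \(\Delta^{CB}=E_{K-}^{CB}-E_{K+}^{CB}\). A minimal sketch using these definitions, with hypothetical band energies rather than computed ones:

```python
def valley_splittings(e_vb_kminus, e_vb_kplus, e_cb_kminus, e_cb_kplus):
    """Valley splittings (meV) from the band extrema at the -K and +K
    valleys: Delta^VB = E_{K-}^{VB} - E_{K+}^{VB} and
    Delta^CB = E_{K-}^{CB} - E_{K+}^{CB}.  Also returns the resulting
    band-gap difference between the two valleys."""
    d_vb = e_vb_kminus - e_vb_kplus
    d_cb = e_cb_kminus - e_cb_kplus
    d_gap = (e_cb_kminus - e_vb_kminus) - (e_cb_kplus - e_vb_kplus)
    return d_vb, d_cb, d_gap


# Hypothetical band extrema in meV, referenced to the VBM at +K:
d_vb, d_cb, d_gap = valley_splittings(-10.0, 0.0, 1690.0, 1680.0)
print(d_vb, d_cb, d_gap)  # -10.0 10.0 20.0; note d_gap = d_cb - d_vb
```

The identity d_gap = Δ^CB − Δ^VB makes explicit how both band-edge splittings contribute to the valley-dependent gap.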
Regarding the monolayers of FePX\({}_{3}\), the presence of SOC and the local inversion symmetry breaking in the FeX\({}_{6}\) octahedra lift the degeneracy of the band extrema (by a few meV), irrespective of the spin direction (see column 4 in Table I). In particular, for FePS\({}_{3}\) a larger spin polarization \(\delta\) is obtained for the CBM than for the VBM (\(\delta^{CB}\)= 9.1 meV, \(\delta^{VB}\)= 1.8 meV), and for higher values of U. For FePSe\({}_{3}\), the opposite trends are obtained (for details see Table S6). Hence, a spin splitting of the bands is obtained for hexagonal lattices exhibiting in-plane structural anisotropy, as reported for MnPS\({}_{3}\) (see Fig. 5(d) in [51]). In this case the spin polarization occurs alongside the renormalization of the +K/-K valleys expected for AFM-N. In addition, one of the consequences of the local inversion symmetry breaking in the Fe monolayers is a lattice distortion reflected in a ratio of the lattice parameters deviating from the hexagonal symmetry (\(b/a\neq\sqrt{3}\)) [55; 51].

Figure 2: Spin-orientation-dependent electronic properties. (a) Band extrema for all employed monolayers as a function of the spin direction (color scale). (b) Enhancement of the electronic band gaps \(\Delta E_{gap}\) for particular spin directions with respect to out-of-plane spins. (c) Valley splittings for the Mn- and Fe-containing monolayers, defined as \(\Delta^{VB}\)=E\({}_{K-}^{VB}\)\(-\)E\({}_{K+}^{VB}\) and \(\Delta^{CB}\)=E\({}_{K-}^{CB}\)\(-\)E\({}_{K+}^{CB}\), given in meV. Note that, for the clarity of the picture regarding MnPS\({}_{3}\), \(-\Delta^{CB}\) is plotted. (d) In-plane components of the effective mass tensor for electrons (positive masses) and holes (negative masses) of MnPX\({}_{3}\) with respect to the spin direction and for the two non-equivalent \(\pm K\) valleys.
### Excitonic properties

#### Dielectric properties

We start our discussion by analyzing the dielectric properties of the employed systems. Generally, the in-plane components of the static dielectric constants (\(\varepsilon_{xx}\), \(\varepsilon_{yy}\)) are isotropic for all studied systems, except for the monolayers of (Ni,Fe)PX\({}_{3}\), where they differ by up to 4% (see the SI and Tabs. S7, S8). Because the dielectric tensor is well defined only for bulk materials, we instead calculate the 2D polarizability \(\chi_{\parallel}\) (the dielectric screening length; for details see the SM). Our results reveal that the dielectric screening length increases with the atomic number of the metal, \(\chi_{\parallel}^{Mn}<\chi_{\parallel}^{Fe}<\chi_{\parallel}^{Ni}\), with the smallest and largest values equal to 3.25 Å (MnPSe\({}_{3}\), U=5 eV) and 6.13 Å (NiPSe\({}_{3}\), U=5 eV), respectively. In addition, the systems containing Se exhibit \(\sim\)40% larger screening than the corresponding S systems. The screening of the MPX\({}_{3}\) compounds is generally smaller than that of other vdW structures such as MoSe\({}_{2}\) (8.23 Å). Note that for the (Ni,Fe)PX\({}_{3}\) systems the dielectric screening lengths strongly depend on the Hubbard U. In particular, larger values of \(\chi_{\parallel}\) are attained for the smaller Hubbard U, which correlates with the strong impact of U on the DOS (see Fig. S2 (b)).

#### Band edge excitons

Let us now examine the band edge excitons using the relevant information from DFT (\(m_{eff}\), \(\chi_{\parallel}\)) combined with the effective BSE [31; 32]. We show the exciton binding energies (E\({}_{B}\)) in Fig. 3(a) (for the details of the calculations see Ref. [9]). All employed structures exhibit large E\({}_{B}\), exceeding 1 eV and 0.8 eV for bare monolayers of MPS\({}_{3}\) and MPSe\({}_{3}\), respectively, outperforming the values in TMDCs [9] (the exact values are collected in Tables S9, S10).
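The 2D polarizability \(\chi_{\parallel}\) quoted above can be obtained from the in-plane dielectric constant of the periodically repeated slab; one common convention is \(\chi_{\parallel}=L(\varepsilon_{\parallel}-1)/4\pi\), with \(L\) the supercell height. The sketch below assumes this convention and uses an illustrative cell height; the SM defines the exact procedure used in this work.

```python
import math


def chi_2d(eps_slab, cell_height):
    """2D polarizability (screening length, in Angstrom) of a monolayer
    from the in-plane dielectric constant eps_slab of a supercell of
    height cell_height (Angstrom), using the Gaussian-unit convention
    chi = L * (eps - 1) / (4 * pi).  Both the convention and the cell
    height used below are assumptions for illustration."""
    return cell_height * (eps_slab - 1.0) / (4.0 * math.pi)


# Illustrative inputs: eps_xx = 2.44 (MnPS3, IP, U=3 eV, Table S8) and a
# hypothetical supercell height of 26 Angstrom (20 Angstrom vacuum + layer).
print(chi_2d(2.44, 26.0))
```

Since \(\chi_{\parallel}\) scales linearly with \(L\) while \(\varepsilon_{\parallel}-1\) scales as \(1/L\), the product is insensitive to the vacuum thickness once the layers are decoupled.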
Excitons have recently been reported experimentally for a few layers of NiPS\({}_{3}\)[56; 57] and FePS\({}_{3}\)[51]; however, their origin is still under active debate. Generally, the E\({}_{B}\) are larger by about 30% for the S-containing structures than for the corresponding Se compounds, mainly due to the larger effective masses and smaller dielectric screening of MPS\({}_{3}\). The E\({}_{B}\) decreases as the effective dielectric constant of the environment increases, with the same trend preserved for all monolayers. This can be explained within the oversimplified exciton picture (hydrogen model), in which E\({}_{B}\) is proportional to the effective mass and inversely proportional to the square of the dielectric screening.

#### Optical transitions and selection rules

Besides the binding energy, it is also relevant to determine the selection rules of the direct band edge transitions, which are summarized in Table S3. All of the transitions are optically allowed (non-zero oscillator strength) and exhibit linear polarization of light. In particular, for the monolayers with the AFM-N phase (MnPX\({}_{3}\)) the polarization of light is along the \(z\) direction, whereas for the AFM-z phase (FePX\({}_{3}\)) it points along the \(y\) direction. Interestingly, similar conclusions hold for the various AFM metastable phases within the same magnetic material [9]. Therefore, the polarization of light is sensitive to the magnetic order irrespective of the type of transition metal and chalcogen atoms. This is in line with the recently reported linear polarization of the sharp emission, which is aligned perpendicular to the spin orientation [57]. Hence, the polarization of light might be a tool to distinguish the type of AFM ordering. All of these direct band edge excitons are optically active transitions coupling to \(z\)-polarized light.
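The hydrogen-model argument invoked above can be written down explicitly in its strictly 2D form, \(E_{B}=4\mu\,\mathrm{Ry}/\varepsilon^{2}\). This neglects the nonlocal (Rytova-Keldysh) screening handled by the effective BSE, so only the scaling \(E_{B}\propto\mu/\varepsilon^{2}\) should be trusted; the reduced masses and dielectric constants below are illustrative, not computed values.

```python
RYDBERG_EV = 13.605693  # Rydberg energy in eV


def e_b_2d_hydrogen(mu, eps):
    """Ground-state binding energy (eV) of a strictly 2D hydrogen-like
    exciton: E_B = 4 * mu * Ry / eps^2, with mu the reduced mass in
    units of m_e and eps the effective dielectric constant.  The model
    strongly overestimates E_B in real monolayers, where the screening
    is nonlocal; only the mu / eps^2 proportionality is meaningful."""
    return 4.0 * mu * RYDBERG_EV / eps**2


# Illustrative (hypothetical) parameter sets: a larger mass with weaker
# screening ("S-like") versus a smaller mass with stronger screening
# ("Se-like") - the first yields the larger binding energy.
print(e_b_2d_hydrogen(0.35, 2.4))
print(e_b_2d_hydrogen(0.25, 3.0))
```

The same ratio argument explains why the S compounds, with ~30% heavier masses and ~40% weaker screening, systematically outbind their Se counterparts.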
Compared with the widely studied TMDCs, these transitions in the MPX\({}_{3}\) systems have an intensity two orders of magnitude smaller than the bright (A and B) transitions but comparable to that of the dark (D) transitions [58; 30].

#### Spin-orientation-dependent excitonic features in MnPX\({}_{3}\)

Interestingly, the excitonic properties are sensitive to the direction of the spins. Owing to the spin-direction-dependent effective masses observed in MnPX\({}_{3}\) and presented in Fig. 2 (d), we discuss below only this type of monolayer. The effect of the spin direction on E\({}_{B}\) is of the order of 1-2 meV in MnPS\({}_{3}\) but of the order of 10 meV in the MnPSe\({}_{3}\) monolayer, as presented in Fig. 3 (b), irrespective of the value used for the Hubbard U parameter. Furthermore, the opposite K valleys (at +K and \(-\)K) show different effective masses and, consequently, different values of E\({}_{B}\), as shown in Fig. 3(c). Hence, E\({}_{B}\) depends on the orientation of the AFM-aligned spins. In particular, due to the sizeable valley-dependent curvature of the VBM (see Fig. 2 (d)) observed for MnPSe\({}_{3}\), the E\({}_{B}\) of the excitons at the \(\pm K\) valleys differ by up to E\({}_{B}^{-}\) \(-\) E\({}_{B}^{+}\) = 12 meV (for MnPSe\({}_{3}\) in vacuum).

Figure 3: (a) Large exciton binding energies obtained for all considered monolayers in the magnetic ground state, with respect to the dielectric screening of the surroundings. The values for air/vacuum, an SiO\({}_{2}\)[53] substrate, and hBN encapsulation [54] are denoted by vertical lines. (b) Evolution of E\({}_{B}\) with respect to the spin direction for two values of the Hubbard U (3 and 5 eV). (c) The energy difference between the E\({}_{B}\) calculated for the \(-K\) and \(+K\) valleys as a function of the spin direction angle \(\Theta\) (within the DFT+U=5 eV and DFT-TD approaches), for hBN-encapsulated and freestanding layers.
Combining the information on the optical selection rules and the exciton binding energies, we present in Fig. S8 the schematic diagram of the optically active band edge transitions for the in-plane and out-of-plane directions of the spins. Our calculations reveal that the emissions from the opposite \(+K\) and \(-K\) valleys lie energetically below the onset of the absorption spectrum. For the in-plane directions of the spins, the energy difference between \(+K\) and \(-K\) is rather small (a separation of about 1 meV and 0.1 meV for MnPSe\({}_{3}\) and MnPS\({}_{3}\), respectively, as shown in Figs. S8(a,d)). On the other hand, for out-of-plane spins, the exciton emissions from the \(\pm\)K valleys are separated by 42 meV and 6 meV for MnPSe\({}_{3}\) and MnPS\({}_{3}\), respectively. The separation of the exciton peaks at the \(\pm K\) valleys (see the difference between the positions of the B and C peaks in Fig. S8) is calculated as: \[\Delta_{B} = [E_{g}(K-)-E_{B}(K-)]-[E_{g}(K+)-E_{B}(K+)] \tag{1}\] \[= \Delta E_{B}+\Delta^{VB}-\Delta^{CB}\,,\] where \(\Delta E_{B}=E_{B}(K-)-E_{B}(K+)\) is the difference between the exciton binding energies at the \(\mp K\) valleys and \(E_{g}(K\pm)\) is the electronic band gap at the given \(K\) valley. We predict that one peak should be observed for in-plane-oriented spins (green peak in Fig. S8), whereas two peaks emerge when the spins are deflected from the monolayer plane (red and blue peaks in Fig. S8). Therefore, the energy \(\Delta_{B}\) is a robust magnetic fingerprint of the AFM spin direction. Note that the energy separation of the \(\pm\)K valleys depends on the dielectric environment, as presented in Fig. 3(c), but should be visible in hBN-encapsulated samples, especially in MnPSe\({}_{3}\), where the valley splitting is dominated by the electronic counterparts \(\Delta^{CB}\) and \(\Delta^{VB}\).

Figure 4: Schematic diagrams (U=3 eV) of the direct transitions for (a, b) MnPSe\({}_{3}\) and (c, d) MnPS\({}_{3}\) with in-plane and out-of-plane directions of the AFM-aligned spins, as schematically denoted at the bottom of the picture (g, h). Schematic absorption onsets (c, f) with the exciton peaks A, B, and C depicted in color: the green (A), red (B), and blue (C) peaks indicate the band edge excitons for the in-plane and out-of-plane spin configurations of the AFM-aligned spins at the +K and -K valleys, respectively. Black arrows close to the bands indicate spin-degenerate bands. The positions of the peaks are in order but not to scale. All of the presented direct transitions couple to \(z\)-polarized light, with the intensities listed in Table S3. (i) The first BZ with labeled high-symmetry k-points.

## IV Conclusions

Here, we have systematically examined the MPX\({}_{3}\) materials, emphasizing the role of spin reorientation, magnetic arrangement, and electron correlation effects in their various properties. In this regard, the chemical trends with respect to the type of chalcogen atom as well as the transition metal were examined. We have demonstrated that electronic features such as band gaps, effective masses, dielectric screening, and exciton binding energies strongly depend on the type of chalcogen atom. Notably, larger effective masses are attained for the S- than for the Se-containing monolayers (by about 30%), whereas smaller dielectric screening (by up to 40%) is reached for MPS\({}_{3}\). In general, larger effective masses and smaller dielectric screening lengths are obtained for the employed materials than for the widely examined TMDCs, resulting in larger exciton binding energies of the direct transitions in MPX\({}_{3}\) than in the corresponding TMDC monolayers.
Generally, all of the examined band edge direct transitions of the (Mn,Fe)PX\({}_{3}\) monolayers turn out to be optically active and couple to linearly polarized light, with the type of polarization sensitive to the magnetic arrangement. Regarding the direct band edge transitions, the MnPX\({}_{3}\) compounds are of particular importance, as they could be exploited as promising valleytronic materials. In particular, we have shown that the valley splitting at \(\pm K\) can be effectively controlled by the direction of the magnetic moments. The sizable valley splitting occurring for the out-of-plane direction of the magnetic moments in the MnPSe\({}_{3}\) monolayer (with a smaller effect observed for MnPS\({}_{3}\)) results in valley-dependent gaps. Namely, two distinct peaks are expected to be visible below the absorption onset for out-of-plane AFM-aligned spins, whereas only one peak appears in the in-plane case. Hence, the number of peaks and their separation can be regarded as a magnetic fingerprint of the orientation of the AFM-aligned spins, and a change in the separation of the peaks points towards a plausible spin rotation. On the other hand, the \(\pm K\) valleys obtained for MnPX\({}_{3}\) are spin degenerate, which might hinder their spintronic applications. Hence, we have also proposed a novel way in which spin-valley polarization can be attained: a particular deformation of the hexagonal lattice of MnPX\({}_{3}\) could lead to a spin-resolved valley splitting. Our results give insight into the realization of valley splitting in 2D antiferromagnets. In addition, the anisotropic effective masses and the type of linear polarization can be regarded as magnetic markers probing the type of AFM arrangement. Finally, spin-direction-dependent features have been identified, such as the valley splitting of the VBM and CBM, the effective mass of holes, and the exciton binding energies.
These features can be regarded as sensitive parameters that provide insight into spin flop transitions.

## V Acknowledgments

M.B. acknowledges financial support from the University of Warsaw under the "Excellence Initiative - Research University" project. P.E.F.J. and J.F. acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1277 (Project-ID 314695032, projects B07 and B11), SPP 2244 (Project No. 443416183), and of the European Union Horizon 2020 Research and Innovation Program under Contract No. 881603 (Graphene Flagship). T.W. acknowledges financial support of the National Science Centre, Poland, under grant no. 2021/41/N/ST3/04516. Access to the computing facilities of the Interdisciplinary Center of Modeling (ICM), University of Warsaw, is gratefully acknowledged. We gratefully acknowledge the Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grants no. PLG/2022/015685 and PLG/2023/016571.
We acknowledge ACK Cyfronet AGH (Poland) for awarding this project access to the LUMI supercomputer, owned by the EuroHPC Joint Undertaking, hosted by CSC (Finland) and the LUMI consortium, through the PLGrid organization (Poland), under the grant entitled "Electronic, optical and thermoelectric properties of selected layered

\({}^{1}\)University of Warsaw, Faculty of Physics, 00-092 Warsaw, Pasteura 5, Poland \({}^{2}\)Institute for Theoretical Physics, University of Regensburg, 93040 Regensburg, Germany \({}^{3}\)Department of Semiconductor Materials Engineering, Faculty of Fundamental Problems of Technology, Wroclaw University of Science and Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland \({}^{*}\)Electronic address: [email protected]

## I Determination of Hubbard U from _Ab Initio_

To estimate the Hubbard \(U_{eff}\) parameter, we adopt the linear response method proposed by Anisimov and co-workers in the framework of a plane-wave pseudopotential approach, as implemented in VASP by Cococcioni _et al._[59]. The U parameter is found from linear fits of both the non-selfconsistent (NSCF) and selfconsistent (SCF) response functions, defined through \(U\approx\left(\frac{\partial N_{b}^{SCF}}{\partial V_{I}}\right)^{-1}-\left( \frac{\partial N_{b}^{NSCF}}{\partial V_{I}}\right)^{-1}\). The resulting U parameters for the \(d\) electrons of the transition metal ions are collected in Table S2.

**TABLES OF COLLECTED ELECTRONIC FEATURES.**

Table S2. Calculated Hubbard \(U_{eff}\) of the \(d\) electrons of the transition metals for various MPX\({}_{3}\) materials. The values are given in eV.

\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \(MPX_{3}\) : & \(MnPS_{3}\) & \(MnPSe_{3}\) & \(FePS_{3}\) & \(NiPS_{3}\) & \(NiPSe_{3}\) & \(NiPS_{3}-bulk\) & \(CoPS_{3}\) \\ \hline \(U_{eff}[eV]\) : & 4.9 & 5.3 & 5.1 & 5.6 & 5.6 & 5.6 & 5.7 \\ \hline \end{tabular}

Figure S6.
Projected density of states (PDOS) assuming various exchange-correlation functionals for MnPS\({}_{3}\) (right side) and FePS\({}_{3}\) (left side). \begin{table} \begin{tabular}{c|c|c|c|c c c|c c c|c c} \hline \hline \multicolumn{1}{c|}{method} & band edge & \multicolumn{1}{c|}{_k-point_} & \multicolumn{1}{c|}{holes: \(m_{1}\) [principal axis]} & \multicolumn{3}{c}{\(m_{2}\) [principal axis]} & \multicolumn{3}{c}{_SO offset [meV]_} \\ \hline \multicolumn{1}{c|}{_without SO_} & _VB_ & _K/K’_ & **-0.767** & [-0.38 & 0.92 & 0.00] & **-0.768** & [0.92 & 0.38 & 0.00] & - \\ with SOC: in-plane spins & VB & K & **-0.752** & [0.52 & 0.86 & 0.00] & **-0.825** & [0.86 & -0.52 & 0.00] & 0 \\ & VB & K’ & **-0.767** & [0.74 & -0.67 & 0.00] & **-0.807** & [0.67 & 0.74 & 0.00] & 0 \\ with SOC: out-of-plane & VB & K & **-0.752** & [0.52 & 0.86 & 0.00] & **-0.825** & [0.86 & -0.52 & 0.00] & -17.9 \\ & VB & K’ & **-0.767** & [0.74 & -0.67 & 0.00] & **-0.807** & [0.67 & 0.74 & 0.00] & 0.21 \\ \hline \multicolumn{1}{c|}{method} & band edge & \multicolumn{1}{c|}{_k-point_} & \multicolumn{1}{c|}{electrons: \(m_{1}\) [principal axis]} & \multicolumn{3}{c}{\(m_{2}\) [principal axis]} & \multicolumn{3}{c}{_SO offset [meV]_} \\ \hline \multicolumn{1}{c|}{_without SO_} & CB & K/K’ & **0.547** & [0.92 & -0.38 & 0.00] & **0.546** & [0.38 & 0.92 & 0.00] & - \\ with SOC: in-plane spins & CB & K & **0.551** & [0.83 & -0.56 & 0.00] & **0.542** & [0.56 & 0.83 & 0.00] & -6.39 \\ & CB & K’ & **0.551** & [0.59 & 0.81 & 0.00] & **0.541** & [0.81 & -0.59 & 0.00] & -6.39 \\ with SOC: out-of-plane spins & CB & K & **0.551** & [0.83 & -0.56 & 0.00] & **0.542** & [0.56 & 0.83 & 0.00] & -3.50 \\ & CB & K’ & **0.551** & [0.59 & 0.81 & 0.00] & **0.541** & [0.81 & -0.59 & 0.00] & -36.60 \\ \hline \hline \end{tabular} \end{table} Table S5: MnPS\({}_{3}\) with and without SOC included. Note that collinear spins are assumed.
The SO offset is the energy difference between the results obtained with SOC and without it. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline & U [eV] & MAE & \(\Delta^{VB(K+)}_{in-out}\) [meV] & \(\Delta^{CB(K+)}_{in-out}\) [meV] & \(\Delta^{VB(K-)}_{in-out}\)[meV] & \(\Delta^{CB(K-)}_{in-out}\) [meV] & \(E^{in}_{g}\) [eV] & \(E^{out}_{g}\) [eV] \\ \hline MnPS\({}_{3}\) & 3 & 0.041 & 0 & 0 & 5.5 & 9.3 & 2.203 & 2.194 \\ & 5 & 0.028 & 0 & 0 & 6.5 & 3.9 & 2.483 & 2.479 \\ & 7 & - & 0 & 0 & 6.9 & 1.9 & 2.667 & 2.665 \\ \hline MnPSe\({}_{3}\) & 3 & 0.315 & 0 & 34.4 & 17.8 & 0 & 1.689 & 1.654 \\ & 5 & 0.223 & 0 & 27.9 & 15.4 & 0 & 1.853 & 1.825 \\ & 7 & - & 0 & 16.4 & 12.6 & 0 & 1.960 & 1.944 \\ \hline \end{tabular} \end{table} Table S6: MAE is the energy difference between the AFM-aligned spins along the out-of-plane (easy axis) and in-plane (hard axis) directions, given in meV per magnetic ion. Note that for the in-plane directions the electronic structure is identical to that obtained without inclusion of the SOC. The \(\Delta^{VB,CB(K\pm)}_{in-out}\) denotes the difference between the VBM (CBM) at the given K point. \(E^{in}_{g}\) (\(E^{out}_{g}\)) is the electronic band gap for the in-plane (out-of-plane) direction of the AFM-aligned spins. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Magn.
& U [eV] & method & \(\varepsilon_{xx}\)\(\varepsilon_{yy}\), & \(\chi^{\parallel}_{xx}\)\(\chi^{\parallel}_{yy}\) [Å] \\ \hline MnPS3 & 3 & IP & 2.44 2.44 & 3.45 3.45 \\ & 3 & TD-DFT & 2.26 2.26 & 3.02 3.02 \\ \hline MnPSe3 & 3 & IP & 3.02 3.02 & 4.82 4.82 \\ & 3 & TD-DFT & 2.82 2.82 & 4.33 4.33 \\ \hline NiPS3 & 3 & IP & 3.15 3.10 & 5.12 5.01 \\ & 3 & TD-DFT & 3.02 2.98 & 4.82 4.72 \\ \hline NiPSe3 & 3 & IP & 4.14 4.02 & 7.51 7.21 \\ & 3 & TD-DFT & 3.02 2.98 & 4.82 4.72 \\ \hline FePSe3 & 3 & IP & 3.06 3.03 & 5.56 5.49 \\ & 3 & TD-DFT & 2.91 2.89 & 5.15 5.09 \\ \hline \hline \end{tabular} \end{table} Table S8: Static dielectric constants \(\varepsilon_{ij}\) (electronic contribution) and 2D polarizability \(\chi^{\parallel}\) calculated for the monolayer systems and U=3 eV. The static dielectric properties are calculated by means of density functional perturbation theory in the independent particle approach (IP) neglecting local field effects and including local fields effect (TD-DFT) \begin{table} \begin{tabular}{l|l|c|c|c|c|c} \hline \hline system & U [eV] & spin direction \(\Theta\)[\({}^{0}\)] & E\({}_{b}^{K+}\) in vacuum & E\({}_{b}^{K+}\) in hBN & E\({}_{b}^{K-}\) in vacuum & E\({}_{b}^{K-}\) in hBN \\ \hline MnPS\({}_{3}\) & U=3 & 0 & 1032.19 & 344.46 & 1034.46 & 346.02 \\ & & 50 & 1032.47 & 344.67 & 1034.39 & 345.83 \\ & & 90 & 1033.37 & 345.34 & 1033.48 & 345.45 \\ \hline & U=5 & 0 & 1023.98 & 322.49 & 1025.60 & 323.21 \\ & & 50 & 1023.98 & 322.49 & 1025.60 & 323.21 \\ & & 90 & 1024.69 & 322.79 & 1024.54 & 322.85 \\ \hline MnPS\({}_{3}\) & U=3 & 0 & 770.25 & 266.92 & 787.82 & 278.42 \\ & & 50 & 773.76 & 269.31 & 785.75 & 276.89 \\ & & 90 & 780.62 & 273.32 & 779.96 & 273.13 \\ \hline & U=5 & 0 & 773.71 & 254.23 & 785.49 & 261.16 \\ & & 50 & 775.00 & 254.61 & 781.97 & 259.01 \\ & & 90 & 778.86 & 256.97 & 779.77 & 257.45 \\ \hline \hline \end{tabular} \end{table} Table S10: Exciton binding energies at different K valleys and environments (vacuum, hBN) as a 
function of spin direction \(\Theta\) for MnPX\({}_{3}\) (X=S, Se). Figure S8. Schematic diagrams (U=5 eV) of direct transitions for (a, b) MnPS\({}_{3}\) and (c, d) MnPS\({}_{3}\) exhibiting in-plane and out-of-plane directions of the AFM-aligned spins, as schematically denoted at the bottom of the picture (e, f). On the right side of the pictures are schematic absorption onsets with the exciton peaks depicted in colour: blue, red and green peaks indicate band-edge excitons for the out-of-plane spin configuration at the +K and -K valleys, and for the in-plane configuration of AFM-aligned spins, respectively. Black arrows close to bands indicate spin-degenerate bands. The positions of the peaks are in order but not to scale. All of the presented direct transitions are coupled to "z"-polarized light with the intensities listed in Table S3. (g) The first BZ with labeled high-symmetry k-points.
2303.06110
Reinforcement Learning Versus Model Predictive Control on Greenhouse Climate Control
The greenhouse is an important protected horticulture system for feeding the world with enough fresh food. However, maintaining an ideal growing climate in a greenhouse requires resources and incurs operational costs. In order to achieve economical and sustainable crop growth, efficient climate control of greenhouse production becomes essential. Model Predictive Control (MPC) is the most commonly used approach in the scientific literature for greenhouse climate control. However, with the development of sensing and computing techniques, reinforcement learning (RL) has recently been getting increasing attention. With each control method having its own way to state the control problem, define control goals, and seek optimal control actions, MPC and RL are representatives of model-based and learning-based control approaches, respectively. Although researchers have applied certain forms of MPC and RL to control the greenhouse climate, very little effort has been devoted to analyzing the connections, differences, pros and cons between MPC and RL from either a mathematical or a performance perspective. Therefore, this paper will 1) propose MPC and RL approaches for greenhouse climate control in a unified framework; 2) analyze connections and differences between MPC and RL from a mathematical perspective; 3) compare the performance of MPC and RL in a simulation study and afterwards present and interpret the comparative results into insights for the application of the different control approaches in different scenarios.
Bernardo Morcego, Wenjie Yin, Sjoerd Boersma, Eldert van Henten, Vicenç Puig, Congcong Sun
2023-03-10T17:59:51Z
http://arxiv.org/abs/2303.06110v1
# Reinforcement Learning Versus Model Predictive Control on Greenhouse Climate Control ###### Abstract The greenhouse is an important protected horticulture system for feeding the world with enough fresh food. However, maintaining an ideal growing climate in a greenhouse requires resources and incurs operational costs. In order to achieve economical and sustainable crop growth, efficient climate control of greenhouse production becomes essential. Model Predictive Control (MPC) is the most commonly used approach in the scientific literature for greenhouse climate control. However, with the development of sensing and computing techniques, reinforcement learning (RL) has recently been getting increasing attention. With each control method having its own way to state the control problem, define control goals, and seek optimal control actions, MPC and RL are representatives of model-based and learning-based control approaches, respectively. Although researchers have applied certain forms of MPC and RL to control the greenhouse climate, very little effort has been devoted to analyzing the connections, differences, pros and cons between MPC and RL from either a mathematical or a performance perspective. Therefore, this paper will 1) propose MPC and RL approaches for greenhouse climate control in a unified framework; 2) analyze connections and differences between MPC and RL from a mathematical perspective; 3) compare the performance of MPC and RL in a simulation study and afterwards present and interpret the comparative results into insights for the application of the different control approaches in different scenarios. keywords: Greenhouse Climate Control, Model Predictive Control, Reinforcement Learning ## 1 Introduction The world population has grown drastically in recent decades. Although its growth is slowing down, it is estimated that the world population will increase by 2,000 million people in the next 30 years, reaching 9,700 million people in 2050 [1].
To feed this population in 2050, projections show that food production would need to increase by 70% between 2005/07 and 2050. Production in developing countries would have to nearly double [2]. On the other hand, the Intergovernmental Panel on Climate Change (IPCC) declared that it is crucial and urgent to make land use and agricultural production methods more efficient in order to curb global warming. All of these facts have motivated innovative production methods and technical solutions in the agricultural sector to improve agri-food production and increase yield per hectare. Climate-controlled greenhouses, which allow crops to be grown regardless of the outdoor environment, are one of the important growing methods, especially in a changing climate. To maintain a proper growing climate with efficient energy usage and operational cost, advanced control methods for the greenhouse production system (e.g. lighting, heating, CO\({}_{2}\) dosing, ventilation, screening, etc.) are needed. Moreover, as the number of greenhouse production systems is increasing while the number of experienced growers is limited, autonomous climate control of a greenhouse production system is also necessary. Among various control methods, MPC is effective for optimizing a greenhouse production system due to its promising performance on multi-input and multi-output systems. However, predicting disturbances along the prediction horizon remains a challenging task [3; 4; 5]. The first implementations of MPC for greenhouse climate control can be traced back to the beginning of this century. The authors in [6] illustrate that a real-time applied MPC outperforms an adaptive PID controller, demonstrating its potential. MPC applications can also be found in, among others, [7; 8; 9; 10; 11; 12; 13; 14; 15]. Furthermore, [11] first introduced a nonlinear MPC incorporating energy and water consumption to maintain climatic conditions in a greenhouse near the coast of Spain.
Comparison results show that MPC can work better than a set-point tracking controller, in some areas achieving an improvement of up to 10 times. [12] presents another nonlinear MPC approach based on a Volterra model [16] that captures the nonlinear relationship between ventilation and temperature. The proposed nonlinear MPC is finally applied to a detailed simulation greenhouse model. [17] also uses a nonlinear MPC to control the greenhouse temperature and its ventilation, as does a hybrid MPC in [18]. Besides nonlinear MPC, a linear MPC is designed for temperature control of a greenhouse by [19], which behaves much better than the conventional on-off pulse-width modulation controller that was previously implemented. Moreover, [20] presents a tube-based linear MPC for a greenhouse system with two-time-scale dynamics. With the development of sensing technology and data science, MPC is entering a new stage where data-based techniques are integrated into the control design. For instance, [5] proposes a data-driven MPC for greenhouse climate control, mainly focusing on the temperature and carbon dioxide concentration level. The main contribution of [5] is to combine a dynamical model with a data-based model in order to identify uncertainties in the weather forecast. Besides optimization, [21] also addresses tracking performance using a hierarchical MPC, improving energy efficiency and reducing operational costs. More precisely, a two-layer MPC structure is defined, where the upper layer generates optimal set-points for the greenhouse climate control. The lower layer is introduced to track the trajectories produced by the upper layer. Recent advances in Information and Communication Technologies (ICT), as well as artificial intelligence (AI), have motivated the usage of AI-based or learning-based control, more specifically Reinforcement Learning (RL), as the key technology to transform the modern farming control mindset.
The development and application of RL have a long and well-known history (see [22] and references therein). In the last two decades, several applications have been published that use RL to solve real-world challenges [23], where the most relevant come from the adoption of the deep learning paradigm [24]. RL comprises a collection of algorithms and techniques that learn to solve a control problem by trial-and-error interaction with the environment (system). Due to this characteristic, the control model and strategy for the climate in a greenhouse can be updated and adapted for different stages of the plant's development, even for different cultivars. Therefore, RL is becoming more and more popular as an option for optimal and autonomous greenhouse climate control. A sign of the popularity of RL in greenhouse production systems is the three editions of the Autonomous Greenhouse Challenge [25; 26], where dozens of international teams tested their state-of-the-art AI algorithms in greenhouse climate, irrigation, and crop growth control. Besides, greenhouse benchmarks and the so-called gyms (which are software environments to train machine learning agents) have also been published recently [27; 28; 29]. The first known reference in the literature is [30], which describes an RL temperature controller in a rose production greenhouse. Other variables are not controlled and few details are given about the creation of the controller. In [31], the authors use a deep learning approach, observing seven input variables from the environment and producing eight control actions (some of them Boolean). This work is based on a greenhouse simulator and aims at keeping the variables within pre-established bounds. A more recent article is [32], where the aim is not only to control the system but also to create optimized models of the greenhouse dynamics using samples from the real environment.
The focus of [32] is on the robustness of the models and the controller. In [33], a supplemental lighting control system is described, which is developed with Q-learning, posing the problem as a discrete constrained optimal control problem where energy is the variable to optimize. From the literature presented above, it is clear that certain forms of MPC and RL have already been applied to control the greenhouse climate. However, up to now, very little effort has been devoted to analyzing the connections, differences, pros and cons between MPC and RL, especially for greenhouse climate control. It is important to do so, as the comparison may bring deeper knowledge about these two interesting methods and insight into how to use, and when to choose, each method in a greenhouse climate control application. Consequently, the objective of this paper is to compare the development and application of MPC and RL for greenhouse climate control. An agent-based deep RL controller is developed for a lettuce greenhouse system in a unified framework with an MPC approach, using the Deep Deterministic Policy Gradient (DDPG) approach. The MPC approach used for comparison is based on a nonlinear model with multiple input and output variables, together with climate disturbances [3]. The outline of this paper is as follows: a lettuce greenhouse model is presented in Section 2, on which the MPC and RL controllers are based. The proposed lettuce greenhouse model is also used for simulations and control strategy evaluation. Section 3 details the development of the MPC and RL controllers from a unified framework. The simulation results based on these two different control approaches are comparatively presented in Section 4. Finally, Section 5 discusses and concludes this work with insights for selecting and applying MPC and RL in different scenarios.
## 2 Lettuce Greenhouse Model The lettuce greenhouse model is taken from [7] and discretized using the explicit fourth-order Runge-Kutta method with sample period \(h\). Consequently, the following state-space model can be defined: \[\begin{split} x(k+1)&=f\big{(}x(k),u(k),d(k),p\big{)}, \\ y(k)&=g\big{(}x(k),p\big{)},\end{split} \tag{1}\] with time \(k\in\mathbb{Z}^{0+}\), state \(x(k)\in\mathbb{R}^{4}\), measurement \(y(k)\in\mathbb{R}^{4}\), control input \(u(k)\in\mathbb{R}^{3}\) and weather disturbance \(d(k)\in\mathbb{R}^{4}\). The parameter vector \(p\in\mathbb{R}^{28}\) and the nonlinear functions \(f(\cdot)\) and \(g(\cdot)\) are given in the Appendix. The state \(x(k)\) contains the dry matter content of the lettuce \(x_{1}(k)\) in kg\(\cdot\)m\({}^{-2}\), which is the lettuce's weight per square meter after all water has been removed. The state additionally contains the indoor \(CO_{2}\) concentration \(x_{2}(k)\) in kg\(\cdot\)m\({}^{-3}\), the air temperature \(x_{3}(k)\) in \({}^{\circ}\)C and the humidity \(x_{4}(k)\) in kg\(\cdot\)m\({}^{-3}\). The weather disturbance \(d(k)\) contains the incoming radiation \(d_{1}(k)\) in W\(\cdot\)m\({}^{-2}\) and the outside \(CO_{2}\) concentration \(d_{2}(k)\) in kg\(\cdot\)m\({}^{-3}\), temperature \(d_{3}(k)\) in \({}^{\circ}\)C and humidity content \(d_{4}(k)\) in kg\(\cdot\)m\({}^{-3}\). The control signal \(u(k)\) contains the supply rate of \(CO_{2}\), \(u_{1}(k)\), in mg\(\cdot\)m\({}^{-2}\cdot\)s\({}^{-1}\), the ventilation rate through the vents \(u_{2}(k)\) in mm\(\cdot\)s\({}^{-1}\) and the energy supply by the heating system \(u_{3}(k)\) in W\(\cdot\)m\({}^{-2}\). The measured output vector \(y(k)\in\mathbb{R}^{4}\) contains \(x_{1}(k)\) in g\(\cdot\)m\({}^{-2}\), \(x_{2}(k)\) in ppm, \(x_{3}(k)\) in \({}^{\circ}\)C and \(x_{4}(k)\) in \(\%\). Figure 1 depicts the greenhouse model with lettuce (1).
## 3 MPC versus RL Model predictive control and reinforcement learning have been developed by separate communities, the control systems community and the computational intelligence community, respectively [34]. Since then, the two methods have evolved more or less independently. As representatives of model-based and learning-based control methods, MPC and RL behave differently, from the terminologies used to the way of seeking optimal control actions. Due to the way of formulating the control problem and defining the control goal, the implementation of an MPC requires a good model. The adaptability of MPC to various conditions is limited (assuming the model in the MPC is not updated online) and may not be enough for autonomous operation in greenhouse production. Moreover, handling uncertainties in an MPC is computationally expensive due to the complex mathematical propagation of these uncertainties.

Figure 1: Schematic representation of the lettuce greenhouse.

Reinforcement learning is a dynamic control strategy which can automatically update the current control policy by incorporating new knowledge learned from historical and real-time data. Due to this characteristic, the control strategy of RL is more adaptable to the current dynamics of the system. Specifically for a greenhouse, it is more adaptable to different stages of plant development, even for different cultivars. Besides, RL can learn control strategies completely from data, and a good model is not always needed. Another advantage is that it is relatively easy to apply, with limited complexity in manipulating the instruments. In spite of being simple and well-grounded, RL suffers from clear limitations. The most significant is the necessity of a discrete and reduced set of problem states and control actions. Apart from their differences, MPC and RL also share plenty of common features.
For example, both MPC and RL are predictive controllers, regardless of whether they integrate disturbance forecasts in their control logic. MPC uses explicit optimization along a finite prediction horizon, while RL learns actions that optimize the sum of the immediate and the discounted future rewards. In order to compare MPC with RL, the following subsections explain the development of RL in a unified framework with MPC in terms of 1) the prediction principle; 2) the reward (or cost) function; and 3) constraint handling. Besides the mathematical comparison of the prediction principle, reward function and constraint handling, the performance of the developed MPC and RL, which are applied to the same use case, will also be compared. Theoretical comparisons of the stability, feasibility, and robustness of the two control approaches are not covered in this paper. In the following, first the MPC framework is detailed (Section 3.1) and then the RL framework (Section 3.2). ### Nonlinear Model Predictive Control Based on the lettuce greenhouse model presented in Section 2, this subsection presents a nonlinear MPC for climate control to optimize the greenhouse's efficiency. #### 3.1.1 Prediction Principle The proposed MPC controller works according to the receding horizon principle. Here, at each time step, a new state measurement is taken from the greenhouse and used to initialize the model in the MPC (the controller model). This model is propagated forward in time from this current state measurement, while a constrained cost function is minimized with the control inputs as decision variables. From this optimized future control input sequence, only the first value is applied, after which, again, a new current state measurement is collected. This procedure is repeated at each time step. The model in (1) is used in the MPC, and in the following the optimization problem, cost and constraints are formulated.
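The receding-horizon loop just described can be sketched on a toy one-state system. This is illustrative only: the candidate-sequence brute force below stands in for the nonlinear programming solver used in the paper, and the model, cost and inputs are made up for the example.

```python
import itertools

def receding_horizon(step, cost, x0, candidates, Np, T):
    """Receding-horizon control sketch: at each of T time steps, minimise
    the cost summed over the horizon Np over all candidate input sequences,
    then apply only the first input and re-measure the state."""
    x, applied = x0, []
    for _ in range(T):
        best_u, best_J = None, float("inf")
        for seq in itertools.product(candidates, repeat=Np):
            xs, J = x, 0.0
            for u in seq:
                xs = step(xs, u)
                J += cost(xs, u)
            if J < best_J:
                best_J, best_u = J, seq[0]
        x = step(x, best_u)      # apply the first optimal input only
        applied.append(best_u)
    return x, applied

# Toy system: drive x towards 1 with a small penalty on the input effort.
step = lambda x, u: 0.8 * x + 0.2 * u
cost = lambda x, u: (x - 1.0) ** 2 + 0.01 * u
xT, us = receding_horizon(step, cost, 0.0, [0.0, 0.5, 1.0], Np=3, T=10)
# here the maximal input u = 1.0 is chosen at every step and x approaches 1
```

The structure of the loop (optimize, apply first input, repeat) is exactly that of problem (2); only the solver and the model differ.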
#### 3.1.2 Optimization Problem The optimization problem formulated in this section employs the model given in (1). It is assumed that at each time instant, the state \(x(k)\) can be measured or is perfectly estimated. Then, the following optimization problem is solved at each time step \(k_{0}\): \[\begin{split}\min_{u(k)}&\sum_{k=k_{0}}^{k_{0}+N_{p}}V\big{(}u(k),y(k)\big{)},\\ \text{s.t.}& x(k+1)=f\big{(}x(k),u(k),d(k),p\big{)},\quad y(k)=g\big{(}x(k),p\big{)},\\ & u_{\min}\leq u(k)\leq u_{\max},\quad|u(k)-u(k-1)|\leq\delta u,\\ & y_{\min}(k)\leq y(k)\leq y_{\max}(k),\quad\text{for }k=k_{0},\ldots,k_{0}+N_{p},\\ & x(k_{0})=x_{0}.\end{split} \tag{2}\] #### 3.1.3 Cost Function and Constraints The cost function \(V\big{(}u(k),y(k)\big{)}\) is defined as: \[V\big{(}u(k),y(k)\big{)}=-q_{y_{1}}\cdot y_{1}(k_{0}+N_{p})+\sum_{j=1}^{3}q_{u_{j}}\cdot u_{j}(k), \tag{3}\] with \(q_{y_{1}},q_{u_{j}}\in\mathbb{R}\) defined as weights in the optimization, which can also be seen as tuning variables. This cost function establishes a trade-off between the maximization of yield per square meter and the minimization of energy usage (control inputs). This trade-off is determined by the ratio of \(q_{y_{1}}\) and \(q_{u_{j}}\). The constraints in (2) are defined as: \[\begin{split} u_{\min}&=\begin{pmatrix}0&0&0\end{pmatrix}^{T},\quad u_{\max}=\begin{pmatrix}1.2&7.5&150\end{pmatrix}^{T},\quad\delta u=\frac{1}{10}u_{\max},\\ y_{\min}(k)&=\begin{pmatrix}0&0&f_{y_{3,\min}}(k)&0\end{pmatrix}^{T},\\ y_{\max}(k)&=\begin{pmatrix}\infty&1.6&f_{y_{3,\max}}(k)&70\end{pmatrix}^{T}, \end{split} \tag{4}\] with lower and upper bounds on the control input defined by \(u_{\min},u_{\max}\in\mathbb{R}^{3}\), respectively, and the bound on the change of the control input defined by \(\delta u\in\mathbb{R}^{3}\). The time-varying lower and upper bounds on the output are \(y_{\min}(k)\) and \(y_{\max}(k)\in\mathbb{R}^{4}\), respectively.
More precisely, only the third element in each of these bounds is time-varying and defined as: \[f_{y_{3,\min}}(k)=\begin{cases}10,&\text{if }d_{1}(k_{0})<10\\ 15,&\text{otherwise}\end{cases},\quad f_{y_{3,\max}}(k)=\begin{cases}15,&\text{if }d_{1}(k_{0})<10\\ 20,&\text{otherwise}.\end{cases} \tag{5}\] These time-varying constraints are set such that the indoor temperature in the greenhouse is colder during the night than during the day, following [35], where it is demonstrated that lower greenhouse temperatures can later be compensated by higher ones as long as a daily average greenhouse temperature is satisfied. The time-varying constraint on the indoor temperature is graphically illustrated in Fig. 2, where the gray area indicates the region into which the controller model output \(y_{3}(k)\) is steered. ### Deep Reinforcement Learning Deep Deterministic Policy Gradient (DDPG) is used to develop the RL control agent. The DDPG algorithm stems from the Deterministic Policy Gradient [36] algorithm and incorporates concepts from deep learning theory. The main advantage of DDPG is that it provides good performance in large and continuous state-action space environments, which motivates its selection for the greenhouse climate control application at hand. DDPG is an off-policy and model-free actor-critic RL algorithm [37]. Due to these characteristics, the control actions of DDPG are generated by a different policy than the one being learnt, and the optimal policy and value function are estimated directly without attempting to learn the system dynamics. The structure of an actor-critic RL agent is shown in Figure 3. The actor has the role of storing and applying the current best policy, using a deep neural network. According to the learned policy function, \(\pi(s)\), and the system state, \(s\), the actor computes the optimal actions, \(u\).
On the other hand, the critic has the role of storing the value function, \(Q(s,u)\), also using a deep neural network. The value function is the expected accumulated future reward for each state-action pair. The critic estimates the value function using the reward obtained from the system (environment) and its own information. Moreover, the critic is also in charge of calculating the temporal-difference (TD) error (i.e. the loss function), which is used during the learning process for both the critic and the actor. Beyond the actor and critic roles, DDPG also uses two distinctive elements of Deep-Q-Network [38]: the replay buffer and target networks. The replay buffer is a memory buffer that stores the transition tuple at each step. This tuple contains the current state \(s(k)\), action \(u(k)\), obtained reward \(r(k)\), next state \(s(k+1)\) and a Boolean variable \(t(k)\) indicating if the next state is terminal. A terminal state designates a state where the experiment ends.

Figure 2: Graphical illustration of the time-varying constraint imposed on output \(y_{3}(k)\), which represents the temperature inside the greenhouse. The gray area is the region into which the optimized control signals steer \(y_{3}(k)\).

At each time step, the critic and the actor are trained with a minibatch obtained by sampling random tuples from the replay buffer. This way of training eliminates time correlation between learning samples and facilitates convergence during the learning process. Target networks are copies of the actor and critic networks. They are used during the training phase, providing the target values to compute the loss function. Once the original networks are trained with the set of tuples of the minibatch, the trained networks are copied to the target networks using a soft update, _i.e._ forcing the target weights to change slowly.
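The replay buffer and minibatch sampling described above can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the transition fields simply mirror the tuple \((s, u, r, s_{next}, t)\).

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal sketch of a DDPG replay buffer: stores transition tuples
    (s, u, r, s_next, terminal) and samples uncorrelated minibatches."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)   # oldest tuples are dropped first

    def push(self, s, u, r, s_next, terminal):
        self.buf.append((s, u, r, s_next, terminal))

    def sample(self, n):
        # random draw without replacement breaks the time correlation
        return random.sample(list(self.buf), n)

rb = ReplayBuffer(capacity=10_000)
for k in range(100):                 # dummy transitions for illustration
    rb.push(k, 0.0, 1.0, k + 1, False)
batch = rb.sample(64)                # one training minibatch of size 64
```

In the paper's setup the sampled minibatch of size \(N = 64\) (Table 1) feeds the critic and actor updates at every step.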
The use of target networks with a soft update gives consistent targets during the TD backups and keeps the learning process stable. Consequently, DDPG requires four neural networks: the actor and the critic and their respective target networks. The gradient functions used to update the weights of the critic and actor are presented in (6) and (7), respectively. \(\phi\) is the set of weights of the critic network and \(\theta\) that of the actor, \(\eta_{\phi}\) and \(\eta_{\theta}\) are the learning rates of the critic and actor, \(B\) represents the mini-batch of transition tuples and \(N\) is its size. Target networks are represented with the prime symbol. \(\upsilon(k)\) in (8) are the target Q-values (not to be confused with target networks) and are used to compute the loss function. The weights of the critic are updated to minimize this loss function. The discount factor, \(\gamma\), is a value between 0 and 1 that determines the importance of future rewards to the current state. Even though a one-to-one relation with the prediction horizon in MPC is difficult to define, there is a relation between the two.

Figure 3: Actor-Critic agent structure.

Note that the target Q-values (8) are obtained from the outputs of the actor and critic target networks, following the target network concept: \[\Delta\phi=\eta_{\phi}\nabla_{\phi}\left(\frac{1}{N}\sum_{k\in B}\left(Q(s(k),u(k)\mid\phi^{Q^{\prime}})-\upsilon(k)\right)^{2}\right) \tag{6}\] \[\Delta\theta=\eta_{\theta}\nabla_{\theta}\left(\frac{1}{N}\sum_{k\in B}Q(s(k),\pi(s(k)\mid\theta^{\pi})\mid\phi^{Q})\right) \tag{7}\] \[\upsilon(k)=r(k)+\gamma Q^{\prime}(s(k+1),\pi^{\prime}(s(k+1)\mid\theta^{\pi^{\prime}})\mid\phi^{Q^{\prime}}) \tag{8}\] The update of the weights of the target networks from the trained networks is shown in (9) and (10). The parameter \(\tau\) indicates how fast this update is carried out.
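The soft update that \(\tau\) governs, written out in (9) and (10) below, amounts to an element-wise Polyak averaging step. A minimal sketch on plain lists of weights (real implementations apply it tensor-wise):

```python
def soft_update(target, source, tau):
    """Polyak soft update of target-network weights:
    w' <- tau * w + (1 - tau) * w', applied element-wise."""
    return [tau * w + (1.0 - tau) * wt for w, wt in zip(source, target)]

# Toy weights for illustration only:
critic_target = [0.0, 0.0]
critic_weights = [1.0, -2.0]
critic_target = soft_update(critic_target, critic_weights, tau=0.01)
# each target weight has moved 1% of the way towards the trained network
```

With a small \(\tau\) the targets change slowly, which is exactly what keeps the TD targets consistent during training.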
This soft update is made at each step after training the main networks: \[\phi^{Q^{\prime}}\leftarrow\tau\phi^{Q}+(1-\tau)\phi^{Q^{\prime}}. \tag{9}\] \[\theta^{\pi^{\prime}}\leftarrow\tau\theta^{\pi}+(1-\tau)\theta^{\pi^{\prime}}. \tag{10}\] #### 3.2.1 Agent Description The agent's behaviour is shaped by its state, the reward function and the discount factor. The state contains ten terms: \[s(k)=(\Delta_{y_{1}}(k),e_{y_{2}}(k),e_{y_{3}}(k),y_{4}(k),d_{1}(k),d_{2}(k),d_{3}(k),\\ u_{1}(k-1),u_{2}(k-1),u_{3}(k-1)) \tag{11}\] where \(\Delta_{y_{1}}(k)=y_{1}(k)-y_{1}(k-1)\), \(e_{y_{2}}(k)=y_{2,ref}(k)-y_{2}(k)\), \(e_{y_{3}}(k)=y_{3,ref}(k)-y_{3}(k)\), \(d_{i}(k)\) are the current disturbances and \(u_{i}(k-1)\) are the previous control actions. The reward function and the discount factor play roles similar to those of the cost function and prediction horizon, respectively, in MPC. They are detailed next. _Reward Function and Constraints_. Similarly to the cost function of MPC presented in equation (3), the dry matter is the most important item in the reward function. Nevertheless, other variables also need to be taken into account to guide the learning process. As a result, the reward function contains six terms. The first three terms are rewards and the other terms are penalties. It is defined as: \[r(k)=c_{r,1}\Delta_{y_{1}}(k)+r_{CO_{2}}(k)+r_{T}(k)-(\sum_{j=1}^{3}c_{r,u_{j}}\cdot u_{j}(k-1)) \tag{12}\] where \(r_{CO_{2}}(k)\) and \(r_{T}(k)\) represent the rewards associated with the control of the main production variables, \(CO_{2}\) concentration and temperature, and \(c_{r,i}\) are adjustable constant parameters. Dry matter and energy consumption are considered as in the MPC cost function (3), but the agent is also rewarded if the \(CO_{2}\) concentration and temperature are controlled satisfactorily. In accordance with the temperature constraint included in the MPC (5), the reward function depends not only on the temperature error but also on the heating system.
That is because during the day it is more efficient to warm up the greenhouse with solar radiation than with the heating system combined with ventilation. The \(CO_{2}\) concentration is also controlled to be higher during the day and lower during the night, as is common practice in a greenhouse. The agent receives a fixed reward for maintaining these measurements within a given range and a penalty for going outside of the range. Consequently, the constraints in (2) are codified here as rewards, defined as \[r_{CO_{2}}(k)=\begin{cases}-c_{r,CO_{2},1}\cdot(y_{2}(k)-CO_{2_{min}}(k))^{2}&\text{if }y_{2}(k)<CO_{2_{min}}(k)\\ -c_{r,CO_{2},1}\cdot(y_{2}(k)-CO_{2_{max}}(k))^{2}&\text{if }y_{2}(k)>CO_{2_{max}}(k)\\ c_{r,CO_{2},2}&\text{otherwise}\end{cases} \tag{13}\] \[r_{T}(k)=\begin{cases}-c_{r,T,1}\cdot(y_{3}(k)-T_{min}(k))^{2}&\text{if }y_{3}(k)<T_{min}(k)\\ -c_{r,T,1}\cdot(y_{3}(k)-T_{max}(k))^{2}&\text{if }y_{3}(k)>T_{max}(k)\\ c_{r,T,2}&\text{otherwise},\end{cases}\] with \(c_{r,CO_{2}}\) and \(c_{r,T}\) being adjustable constant parameters. These parameters can be considered as the weights of the constraints in the reward. Depending on the strength of the reward, the agent will adopt a more conservative or more aggressive strategy for greenhouse climate control. To sum up, the reward function of the RL agent codifies the MPC cost function and constraints. In (12), the dry matter increase is rewarded instead of the accumulated value, as in (3), because the agent learning was not consistent when the accumulated value was used. _Prediction Principle_. Unlike the receding horizon used in MPC, the RL agent uses a discount factor \(\gamma\), as shown in (8), to describe the future influence of rewards. If \(\gamma=0\), the RL agent will completely focus on learning optimal actions for the immediate reward, while if \(\gamma=1\), the RL agent will evaluate its actions based on the total sum of all its future rewards.
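The piecewise reward shaping of eq. (13) can be sketched with a single helper: a fixed bonus while the measurement stays inside its band, and a quadratic penalty for leaving it. The band limits below are illustrative; the gains \(0.001\) and \(0.0005\) match the temperature parameters \(c_{r,T,1}\) and \(c_{r,T,2}\) listed in Table 2.

```python
def band_reward(y, lo, hi, penalty_gain, bonus):
    """Reward shaping in the spirit of eq. (13): a fixed bonus inside
    [lo, hi] and a quadratic penalty proportional to the violation."""
    if y < lo:
        return -penalty_gain * (y - lo) ** 2
    if y > hi:
        return -penalty_gain * (y - hi) ** 2
    return bonus

# Illustrative daytime temperature band of 10-20 degC:
r_T_inside = band_reward(15.0, 10.0, 20.0, 0.001, 0.0005)  # fixed bonus
r_T_above = band_reward(25.0, 10.0, 20.0, 0.001, 0.0005)   # quadratic penalty
```

The same helper instantiated with \(c_{r,CO_{2},1}\), \(c_{r,CO_{2},2}\) and the \(CO_{2}\) band gives \(r_{CO_{2}}(k)\).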
Picking a particular value of \(\gamma\) is therefore analogous to picking a prediction horizon in MPC, although it is complicated to find an exact one-to-one relation. From (8), one can also see that the discount factor \(\gamma\) is related to the aggressiveness of the control actions, similarly to the parameters \(q_{y_{1}}\) and \(q_{u_{j}}\) in (3). In this paper, the discount factor \(\gamma\) is tuned by trial and error to produce dynamical behaviour in the greenhouse similar to MPC. #### 3.2.2 Agent training The structure of the RL agent developed in this paper is described in this section. The critic network structure is shown in Figure 4. Observations are processed by three layers of 10 Rectified Linear Units (ReLU) and actions are processed by two layers of 10 units. All the layers are fully connected. The actor networks consist of three fully connected layers of 20 ReLU units and a final hyperbolic tangent layer with three nodes, corresponding to each one of the control actions. The training options for all the networks are given in Table 1 and the parameters related to the reward/penalty function are listed in Table 2. The agent is trained for 500 epochs, where each epoch consists of one day of crop growth simulation. Each day of simulation contains 96 steps, so each step is 15 minutes. The agent training uses different meteorological conditions and initial conditions for each epoch to avoid overfitting and to facilitate generalization. Therefore, the disturbances are perturbed as \(d_{i}(k)=d_{i}(k)\cdot\kappa\) where \(\kappa\sim U(0.7,1.3)\,.\) ## 4 Simulation Results In order to be able to compare MPC with RL, the same disturbance is used and similar constraints and control goals are taken into account. The weather data \(d(k)\) used throughout the simulations are real-life data, presented in [39]. These data were collected during experiments performed in the greenhouse called "the Venlow Energy greenhouse", located in Bleiswijk, Holland.
The collected data points are sampled every 5 minutes; \(N\) of these are used and re-sampled to the sample period \(h\). \begin{table} \begin{tabular}{c c|c c} parameter & value & parameter & value \\ \hline learning rate & \(1\times 10^{-3}\) & experience buffer size & \(1\times 10^{4}\) \\ gradient threshold & 1 & experience mini-batch size & 64 \\ \(L_{2}\) regularization factor & \(1\times 10^{-5}\) & discount factor, \(\gamma\) & 0.9 \\ \end{tabular} \end{table} Table 1: Training options for all the networks \begin{table} \begin{tabular}{c c|c c} parameter & value & parameter & value \\ \hline \(c_{r,1}\) & 16 & \(c_{r,CO_{2},1}\) & 0.1 \\ \(c_{r,u_{1}}\) & \(-4.5360\times 10^{-4}\) & \(c_{r,CO_{2},2}\) & 0.0005 \\ \(c_{r,u_{2}}\) & \(-0.0075\) & \(c_{r,T,1}\) & 0.001 \\ \(c_{r,u_{3}}\) & \(-8.5725\times 10^{-4}\) & \(c_{r,T,2}\) & 0.0005 \\ \end{tabular} \end{table} Table 2: Parameters of the reinforcement function Figure 4: Actor-Critic agent structure. Figure 5 shows the specific disturbance realization used to compare the performance of the DDPG agent and the MPC controller for 288 samples, which are equivalent to 3 days. ### Economic Profit Indicator Besides the dry matter, indoor \(CO_{2}\) concentration, air temperature and relative humidity optimized in the cost (reward) function, one extra indicator is considered: the economic profit indicator (EPI), used as an additional validation item to compare the performance of the MPC and the DDPG-based RL agent: \[EPI=\phi(y_{1}(t_{f}))-\sum_{t_{b}}^{t_{f}}(c_{q}u_{q}(t)+c_{co_{2}}u_{co_{2}}(t ))h, \tag{14}\] where \(\phi(y_{1}(t_{f}))\) is the gross income obtained at harvest time \(t_{f}\) and \(c_{q}u_{q}(t)+c_{co_{2}}u_{co_{2}}(t)\) are the operating costs of the air conditioning equipment (\(Hflm^{-2}s^{-1}\)).
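Eq. (14) can be sketched as follows; the linear gross-income model and the default parameter values are those of Table 3, and the function signature is ours (illustrative, not the authors' implementation):

```python
def epi(y1_tf, u_q, u_co2, h,
        c_pri_1=1.8, c_pri_2=16.0, c_q=6.35e-9, c_co2=42e-2):
    """Economic profit indicator of Eq. (14): gross income at harvest
    minus accumulated heating and CO2-injection operating costs.
    Parameter defaults are taken from Table 3."""
    gross = c_pri_1 + c_pri_2 * y1_tf               # phi(y1(t_f)), Hfl m^-2
    costs = sum(c_q * uq + c_co2 * uc for uq, uc in zip(u_q, u_co2)) * h
    return gross - costs
```

The sum runs over the sampled control inputs between \(t_{b}\) and \(t_{f}\), each term weighted by the sample period \(h\).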
The auction price of the lettuce follows a linear relation, \(\phi(y_{1}(t_{f}))=c_{pri,1}+c_{pri,2}y_{1}(t_{f})\), between the auction price and the harvest weight of lettuce, in \(kgm^{-2}\). The units of parameters \(c_{pri,1}\) and \(c_{pri,2}\) are \(Hflm^{-2}\) and \(Hflkg^{-1}m^{-2}\), respectively. It is assumed that the operating costs of the climate control equipment are linearly related to the amount of energy \(u_{q}\) (\(Wm^{-2}\)) and the amount of carbon dioxide \(u_{c}\) (\(kgm^{-2}s^{-1}\)) introduced into the system. These operating costs are parameterized by the price of energy \(c_{q}\) (\(HflJ^{-1}\)) and the price of carbon dioxide \(c_{co_{2}}\) (\(Hflkg^{-1}\)). Their values are found in Table 3. Figure 5: Disturbances applied to the greenhouse (solar radiation, external \(CO_{2}\) concentration, temperature and relative humidity). ### Specific MPC settings The weights \(q_{\hat{y}_{1}},q_{u_{i}}\) are tuned such that an acceptable trade-off between yield and energy usage is achieved. The prediction horizon \(N_{p}\) is not taken too large, to avoid having to include uncertainty that grows over time; indeed, weather forecasts become more uncertain over the future horizon. Other settings that are used during the simulation studies are given in Table 4. The open-source software CasADi [40] and the solver IPOPT [41] are used in a Matlab environment to solve the optimization problem formulated in (2), using the direct single-shooting method and the warm-start option of IPOPT. ### Results Figure 6 shows the measurements of the simulated greenhouse using the DDPG-based RL agent (in blue) and the MPC controller (in orange). From this figure, we see that MPC and RL obtain a similar lettuce dry matter content, though RL is slightly more productive. The rest of the outputs are kept approximately within their constraints.
The interior relative humidity levels are very similar in both cases, although RL allows for more variation. Similarly, the indoor temperature and \(CO_{2}\) concentration are close to their minimum levels, although the RL agent keeps the temperature slightly above the minimum to avoid receiving the corresponding penalty. \begin{table} \begin{tabular}{c c|c c} parameter & value & parameter & value \\ \hline \(h\) & 15 minutes & \(q_{\hat{y}_{1}}\) & \(10^{3}\) \\ \(N_{p}\) & 6 hours & \(q_{u_{i}}\) & \(\{10,1,1\}\) \\ \(N\) & 40 days & \(N_{s}\) & 20 \\ \end{tabular} \end{table} Table 4: Simulation and controller settings. \begin{table} \begin{tabular}{c|c} parameter & value \\ \hline \(c_{co_{2}}\) & \(42\times 10^{-2}Hflkg^{-1}\) \\ \(c_{q}\) & \(6.35\times 10^{-9}HflJ^{-1}\) \\ \(c_{pri,1}\) & \(1.8Hflm^{-2}\) \\ \(c_{pri,2}\) & \(16Hflkg^{-1}\) \\ \end{tabular} \end{table} Table 3: Parameters of the economic profit function The RL agent is clearly more conservative than the MPC because the cost function penalizes temperatures that drop below the minimal temperature constraint. The MPC controller regulates the humidity more accurately, while the DDPG agent does a better job on the indoor temperature. The \(CO_{2}\) concentration is increased noticeably during daylight hours by both controllers, which is expected. However, it is kept higher during the night by the DDPG agent. Figure 7 shows the control actions of both controllers. It is apparent that the MPC makes more efficient use of ventilation, while the DDPG agent supplies a larger amount of energy. \begin{table} \begin{tabular}{c|c|c} Index & MPC & RL \\ \hline EPI & \(1.843Hflm^{-2}\) & \(1.788Hflm^{-2}\) \\ computational time & \(305.04s\) & \(2.36s\) \\ \end{tabular} \end{table} Table 5: Performance of MPC and RL Figure 6: Outputs of the system: dry matter, indoor \(CO_{2}\) concentration, air temperature and relative humidity.
DDPG agent (blue) and MPC (orange). Figure 7: Control actions applied by the DDPG agent (blue) and the MPC (orange). In terms of production, the RL agent achieves a greater lettuce yield, but in terms of economic benefit, the MPC achieves a higher return (1.843 \(Hflm^{-2}\) versus 1.788 \(Hflm^{-2}\)), as shown in Table 5. This is because, with the parameters given in Table 3, the extra income from the increased lettuce production does not compensate the cost of the additional resource consumption. Figure 7 shows that the temperature levels are kept low in both cases, but the RL agent clearly keeps the \(CO_{2}\) concentration at a much higher level than the MPC. An explanation for this behaviour might be that, since an increase in the injection of CO\({}_{2}\) can significantly increase the production, the agent has favored exploring this option, arriving at a sub-optimal solution. To solve this problem, we can let the agent train continuously to explore a larger state space, or we can reshape the cost function and redefine the upper and lower limits of the CO\({}_{2}\) injection. The agent can potentially find the optimal solution for this redefined problem. However, if the auction price of lettuce or the cost of CO\({}_{2}\) changes, this agent will not adapt and find a new optimal solution unless it takes all these variable parameters as observations and is trained again. Figure 8 shows the outputs of the simulated greenhouse for a complete growing cycle of lettuce, which lasts for 40 days. The production of RL is now clearly higher, but the EPI is smaller (2.195 \(Hflm^{-2}\) compared with 2.504 \(Hflm^{-2}\) produced by the MPC). It is also apparent that the RL agent is more permissive with the humidity constraint, which might become a health problem for the crops and has to be handled with care. Figure 8: System outputs for a complete growing cycle (40 days).
## 5 Discussion and Conclusion This paper proposed an MPC controller and an RL-based control agent for a lettuce greenhouse, with the provided greenhouse model serving as the simulation environment. The performance of RL and MPC has been compared and analyzed in order to gain insight into using two different control methods for optimal control of greenhouse production, aiming at the best use of natural resources and production efficiency in the presence of uncertainty in the forecast of the future climate. Regarding the results, the agent has not matched the MPC in terms of energy optimization. Furthermore, although a higher production has been obtained, the economic profitability has not been as good as with the MPC. In a greenhouse it is difficult to evaluate the controller due to the large number of outputs; that is, it is difficult to design a cost function that can be judged accurately. Although we could directly use the economic profit function as a reinforcement function for the agent, this is not a reliable method, for the following reasons. First, in reinforcement learning we cannot directly apply constraints to the states; if we do not design a small and reasonable range for these states in the reward function, the agent will be able to explore states that are completely irrational, such as raising the temperature above 40 \({}^{\text{o}}\)C or turning the fan to the maximum. Second, if there is a shortcut hidden in the cost function, the agent will go for that shortcut. For example, in the case of comparing the economic benefit, if the fan is set to maximum without injecting any carbon dioxide and without turning on the heating, the greatest economic benefit is obtained, reaching 1.894 \(Hflm^{-2}\). However, this keeps the indoor temperature and relative humidity at a level that is very dangerous for the lettuces.
Furthermore, with a trained agent, it is difficult to know whether the agent has converged to an optimal or a suboptimal policy. The only solution may be to compare it with other optimal controllers. In short, it is very difficult to design a well-tuned agent, and the design process is often an iterative one. However, once an agent that can be considered correct is obtained, it is very powerful, and control results can be obtained with very little computing resources and computation time. Moreover, the RL agent can also handle uncertainties much more easily than MPC in terms of its optimization strategy. To give a clear comparison between MPC and RL, we list their pros and cons in Table 6, which gives more insight into how to choose between these methods for different scenarios. During the development of the project, we have identified a series of areas for improvement and continuation of the work: 1. One proposal for continuing this work would be to redesign the agent and integrate future climate predictions as observations. In this work, we only input the current outdoor weather to the agent, and the prediction of the agent's future states is based solely on the current indoor and outdoor weather and the actions performed. In future work, one could try to include predictions of the future outdoor weather as input, thus allowing the agent to make better predictions of future reinforcements; 2. Another proposal is to use the parameters of the economic profit indicator as agent observations and let the agent learn to make corresponding changes in energy use when these parameters change; 3. It is also proposed to apply this problem to more advanced deep reinforcement learning algorithms, such as TD3 (Twin Delayed Deep Deterministic policy gradient algorithm), PPO (Proximal Policy Optimization) or SAC (Soft Actor-Critic), and compare their performance. 4.
Since both MPC and RL have their own pros and cons, another way to exploit the strong features of both is to integrate MPC with RL for climate control in greenhouse production systems. ## Appendix The greenhouse with lettuce model is defined as: \[\frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t} =p_{1,1}\phi_{\mathrm{phot,c}}(t)-p_{1,2}x_{1}(t)2^{x_{3}(t)/10-5/2},\] \[\frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t} =\frac{1}{p_{2,1}}\big{(}-\phi_{\mathrm{phot,c}}(t)+p_{2,2}x_{1}(t)2^{x_{3}(t)/10-5/2}+u_{1}(t)10^{-6}-\phi_{\mathrm{vent,c}}(t)\big{)},\] \[\frac{\mathrm{d}x_{3}(t)}{\mathrm{d}t} =\frac{1}{p_{3,1}}u_{3}(t)-(p_{3,2}u_{2}(t)10^{-3}+p_{3,3})(x_{3}( t)-d_{3}(t))+p_{3,4}d_{1}(t),\] \[\frac{\mathrm{d}x_{4}(t)}{\mathrm{d}t} =\frac{1}{p_{4,1}}\big{(}\phi_{\mathrm{transp,h}}(t)-\phi_{\mathrm{ vent,h}}(t)\big{)},\] with \[\phi_{\mathrm{phot,c}}(t) =\Big{(}1-\exp\!\big{(}-p_{1,3}x_{1}(t)\big{)}\Big{)}\Big{(}p_{1, 4}d_{1}(t)\big{(}-p_{1,5}x_{3}(t)^{2}+...\] \[\qquad\qquad p_{1,6}x_{3}(t)-p_{1,7}\big{)}\big{(}x_{2}(t)-p_{1,8 }\big{)}\Big{)}/\varphi(t),\] \[\varphi(t) =p_{1,4}d_{1}(t)+\big{(}-p_{1,5}x_{3}(t)^{2}+p_{1,6}x_{3}(t)-p_{ 1,7}\big{)}\big{(}x_{2}(t)-p_{1,8}\big{)},\] \[\phi_{\mathrm{vent,c}}(t) =\big{(}u_{2}(t)10^{-3}+p_{2,3}\big{)}\big{(}x_{2}(t)-d_{2}(t) \big{)},\] \[\phi_{\mathrm{vent,h}}(t) =\big{(}u_{2}(t)10^{-3}+p_{2,3}\big{)}\big{(}x_{4}(t)-d_{4}(t) \big{)},\] \[\phi_{\mathrm{transp,h}}(t) =p_{4,2}\Big{(}1-\exp\!\big{(}-p_{1,3}x_{1}(t)\big{)}\Big{)}\] \[\qquad\Big{(}\frac{p_{4,3}}{p_{4,4}(x_{3}(t)+p_{4,5})}\mathrm{ exp}\Big{(}\frac{p_{4,6}x_{3}(t)}{x_{3}(t)+p_{4,7}}\Big{)}-x_{4}(t)\Big{)},\] and with \(t\in\mathbb{R}\) the continuous time. Here, \(\phi_{\mathrm{phot,c}}(t),\phi_{\mathrm{vent,c}}(t)\), \(\phi_{\mathrm{transp,h}}(t)\) and \(\phi_{\mathrm{vent,h}}(t)\) are the gross canopy photosynthesis rate, mass exchange of CO\({}_{2}\) through the vents, canopy transpiration and mass exchange of H\({}_{2}\)O through the vents, respectively.
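A few of the appendix terms translate directly into Python; this is a sketch in our own naming, with the default parameter values taken from Table 7:

```python
def phi_vent_c(u2, x2, d2, p2_3=7.5e-6):
    """CO2 mass exchange through the vents: (u2*1e-3 + p_{2,3}) * (x2 - d2)."""
    return (u2 * 1e-3 + p2_3) * (x2 - d2)

def phi_vent_h(u2, x4, d4, p2_3=7.5e-6):
    """H2O mass exchange through the vents, same form with x4 and d4."""
    return (u2 * 1e-3 + p2_3) * (x4 - d4)

def dx3_dt(x3, u2, u3, d1, d3,
           p3_1=3e4, p3_2=1290.0, p3_3=6.1, p3_4=0.2):
    """Air-temperature dynamics: heating input u3, ventilation and
    transmission losses toward the outdoor temperature d3, solar gain d1."""
    return u3 / p3_1 - (p3_2 * u2 * 1e-3 + p3_3) * (x3 - d3) + p3_4 * d1
```

With the vents closed (`u2 = 0`) only the leakage term `p_{2,3}` exchanges mass, and the temperature relaxes toward the outdoor value at the slow transmission rate `p_{3,3}`.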
The measurement equation is defined as: \[y_{1}(t)=10^{3}\cdot x_{1}(t),\qquad y_{2}(t)=\frac{10^{3}\,p_{2,4}\,\big{(}x_{3}(t)+p_{2,5}\big{)}}{p_{2,6}\,p_{2,7}}\,x_{2}(t),\] \[y_{3}(t)=x_{3}(t),\qquad y_{4}(t)=\frac{10^{2}\,p_{2,4}\,\big{(}x_{3}(t)+p_{2,5}\big{)}}{11\,\exp\!\left(\frac{p_{4,8}x_{3}(t)}{x_{3}(t)+p_{4,9}}\right)}\,x_{4}(t).\] The model is discretized using the explicit fourth-order Runge-Kutta method, resulting in the discrete-time model as presented in (1): \[\begin{split} x(k+1)&=f(x(k),u(k),d(k),p),\\ y(k)&=g(x(k),p),\end{split} \tag{15}\] with discrete time \(k\in\mathbb{Z}^{0+}\) and the relation \(t=k\cdot h\), with \(h\) the sample period. The initial state and control signal that are used during the simulation are defined as: \[x(0)=\left(0.0035\quad 0.001\quad 15\quad 0.008\right)^{T},\qquad u(0)=\left( 0\quad 0\quad 0\right)^{T}.\] \begin{table} \begin{tabular}{c c|c c|c c|c c} parameter & value & parameter & value & parameter & value \\ \hline \(p_{1,1}\) & 0.544 & \(p_{2,1}\) & 4.1 & \(p_{3,1}\) & 3\(\cdot 10^{4}\) & \(p_{4,1}\) & 4.1 \\ \(p_{1,2}\) & 2.65 \(\cdot 10^{-7}\) & \(p_{2,2}\) & 4.87 \(\cdot 10^{-7}\) & \(p_{3,2}\) & 1290 & \(p_{4,2}\) & 0.0036 \\ \(p_{1,3}\) & 53 & \(p_{2,3}\) & 7.5 \(\cdot 10^{-6}\) & \(p_{3,3}\) & 6.1 & \(p_{4,3}\) & 9348 \\ \(p_{1,4}\) & 3.55 \(\cdot 10^{-9}\) & \(p_{2,4}\) & 8.31 & \(p_{3,4}\) & 0.2 & \(p_{4,4}\) & 8314 \\ \(p_{1,5}\) & 5.11 \(\cdot 10^{-6}\) & \(p_{2,5}\) & 273.15 & & & \(p_{4,5}\) & 273.15 \\ \(p_{1,6}\) & 2.3 \(\cdot 10^{-4}\) & \(p_{2,6}\) & 101325 & & & \(p_{4,6}\) & 17.4 \\ \(p_{1,7}\) & 6.29 \(\cdot 10^{-4}\) & \(p_{2,7}\) & 0.044 & & & \(p_{4,7}\) & 239 \\ \(p_{1,8}\) & 5.2 \(\cdot 10^{-5}\) & & & & & \(p_{4,8}\) & 17.269 \\ & & & & & & \(p_{4,9}\) & 238.3 \\ \end{tabular} \end{table} Table 7: Values of the model parameters that are taken from [7].
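The explicit fourth-order Runge-Kutta step behind the discretization in (15) can be sketched generically; this is a toy scalar example (not the simulation code), with the input \(u\) and disturbance \(d\) held constant over the sample period \(h\):

```python
import math

def rk4_step(f, x, u, d, h):
    """One explicit fourth-order Runge-Kutta step for dx/dt = f(x, u, d),
    holding u and d constant over the sample period h."""
    k1 = f(x, u, d)
    k2 = f(x + 0.5 * h * k1, u, d)
    k3 = f(x + 0.5 * h * k2, u, d)
    k4 = f(x + h * k3, u, d)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# toy check on dx/dt = -x, whose exact solution is exp(-t)
x_next = rk4_step(lambda x, u, d: -x, 1.0, None, None, 0.1)
```

Applied componentwise to the four-dimensional greenhouse state, this step yields the map \(f\) in (15).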
2302.11745
Membrane buckling and the determination of Gaussian curvature modulus
Biological membranes are able to exhibit various morphologies due to the fluidity of the lipid molecules within the monolayers. The shape transformation of membranes has been well described by the classical Helfrich theory, which consists of only a few phenomenological parameters, including the mean and the Gaussian curvature modulus. Though various methods have been proposed to measure the mean curvature modulus, determination of the Gaussian curvature modulus remains difficult both in experiments and in simulation. In this paper we study the buckling process of a rectangular membrane and a circular membrane subject to compressive stresses and under different boundary conditions. We find that the buckling of a rectangular membrane takes place continuously, while the buckling of a circular membrane can be discontinuous depending on the boundary conditions. Furthermore, our results show that the stress-strain relationship of a buckled circular membrane can be used to effectively determine the Gaussian curvature modulus.
Mei-Ting Wang, Rui Ma, Chen-Xu Wu
2023-02-23T02:25:49Z
http://arxiv.org/abs/2302.11745v1
# Membrane buckling and the determination of Gaussian curvature modulus ###### Abstract Biological membranes are able to exhibit various morphologies due to the fluidity of the lipid molecules within the monolayers. The shape transformation of membranes has been well described by the classical Helfrich theory, which consists of only a few phenomenological parameters, including the mean and the Gaussian curvature modulus. Though various methods have been proposed to measure the mean curvature modulus, determination of the Gaussian curvature modulus remains difficult both in experiments and in simulation. In this paper we study the buckling process of a rectangular membrane and a circular membrane subject to compressive stresses and under different boundary conditions. We find that the buckling of a rectangular membrane takes place continuously, while the buckling of a circular membrane can be discontinuous depending on the boundary conditions. Furthermore, our results show that the stress-strain relationship of a buckled circular membrane can be used to effectively determine the Gaussian curvature modulus. ## I Introduction Buckling, a common phenomenon observed in our daily life [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12], refers to the sudden change in shape of an elastic object under compressive loads. Research on buckling dates back to as early as 1691, when Jacob Bernoulli studied the buckling of an elastic beam [13]. In the 18th century, Leonhard Euler and Daniel Bernoulli further developed the elastic beam theory, which nowadays constitutes an important branch of continuum mechanics and has broad applications in structural and mechanical engineering [14]. In recent years, buckling in fluid membranes has drawn the attention of many physicists [15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. The major components of a fluid membrane are lipid molecules, which are typically composed of a hydrophilic polar head and two hydrophobic hydrocarbon tails.
In aqueous solution, the lipid molecules assemble into a double-layer structure, with the hydrophobic tails embedded inside so as to avoid water and the hydrophilic heads exposed to the water, which is referred to as a lipid bilayer [25]. The lipid molecules can move freely laterally within the monolayer, which distinguishes the fluid membrane from a solid shell. The buckling of a lipid membrane therefore exhibits novel behaviors compared with that of an elastic shell, such as anisotropic tension and negative compressibility [15; 16]. The thickness (\(\sim 4\)nm) of a typical fluid membrane is negligible compared with its lateral dimension. When considering deformations of the membrane on length scales that are even moderately larger than the thickness, the Helfrich theory [26; 27], which treats the membrane as a continuum 2D surface, has been extremely successful in many applications [28; 29; 30; 31; 32; 33; 34]. The theory has only a few phenomenological parameters to characterize the membrane's properties, such as the mean and Gaussian curvature moduli \(\kappa\) and \(\bar{\kappa}\), and the spontaneous curvature \(c_{0}\). The local deformation of a membrane directly depends on the mean curvature modulus \(\kappa\), which makes the measurement of \(\kappa\) a relatively easy assay. In experiments, \(\kappa\) is typically obtained via the fluctuation spectrum of a flat membrane [35; 36; 37], or the force spectrum to pull a membrane tether from a spherical shape [38; 39]. However, measuring the Gaussian curvature modulus \(\bar{\kappa}\) is difficult because it only depends on the global deformation of the membrane that alters the topology, due to the Gauss-Bonnet theorem [40; 41]. For this very reason, in cellular processes which change the topology of the membrane, such as cell division, endocytosis and exocytosis, the role of the Gaussian curvature modulus \(\bar{\kappa}\) cannot simply be ignored [42; 43; 44].
Boundary conditions (BCs) are important in determining the shape of an elastic object. For a fluid membrane patch with an open edge, the Gaussian curvature modulus \(\bar{\kappa}\) contributes to the boundary stress. This dependence has been used in molecular dynamics (MD) simulations to estimate the Gaussian curvature modulus \(\bar{\kappa}\), either via the closure of a spherical vesicle [45] or via the edge fluctuations of a flat membrane [46]. However, both methods require the simulation of a large membrane either for a long time or with multiple repeats, and are therefore computationally expensive. Recently, buckling of a flat membrane has been suggested as an MD protocol to measure the mean curvature modulus \(\kappa\) of the membrane [16]. The buckling protocol is easy to implement, and has been shown to be robust against the coarse-graining level of the lipid models and the treatment of the solvent when estimating \(\kappa\). Inspired by this work, in this paper we consider two geometries of membrane, i.e. a rectangular membrane and a circular one, and investigate their buckling under a compressive stress via the Helfrich theory [26, 27]. Our numerical results for the stress-strain relationship of the rectangular membrane agree well with the theoretical predictions reported in the literature [16]. Moreover, through investigating the circular membrane, we find that the buckling process shows qualitatively different behaviors under distinct BCs. The stress-strain relationship under the free-hinge BC strongly depends on the Gaussian curvature modulus \(\bar{\kappa}\), which therefore provides an effective method to determine the Gaussian bending rigidity of a fluid membrane through a buckling protocol. ## II Theoretical model We model the rectangular and circular membranes as a one-dimensional patch and an axisymmetric two-dimensional patch, respectively.
We consider two types of BCs, namely the free-hinge BC, in which the membrane is allowed to rotate freely at the edge where compressive stresses are applied, and the fixed-hinge BC, in which the membrane angle is fixed to be parallel with the substrate at the edge. ### The rectangular membrane For the rectangular membrane, which initially lies flat on the horizontal surface with a length \(L_{0}\) in the \(x\)-direction and a width \(L_{y}\) in the \(y\)-direction, the buckling is driven by a compressive stress \(f_{x}\) applied at the two ends of the membrane along the \(x\)-direction. When the stress \(f_{x}\) exceeds a critical value, the membrane buckles and the shape is depicted by the coordinates \([X(s),Z(s)]\), where \(s\) is the arclength along the \(x\)-direction (Fig. 1). We assume the total arclength \(L_{0}\) of the membrane is invariant during the buckling process and introduce \(\psi(s)\) as the angle made between the tangential direction of the arc and the \(x\)-direction. The total energy of the membrane is written as \[E=\frac{\kappa}{2}\int c_{1}^{2}\mathrm{d}A+f_{x}L_{x}L_{y}, \tag{1}\] where the first term is the bending energy, with \(c_{1}=\mathrm{d}\psi/\mathrm{d}s\) the principal curvature along the arc direction, and \(\mathrm{d}A=L_{y}\mathrm{d}s\) the area element. The second term is the boundary energy, where the compressive stress \(f_{x}\) is essentially a Lagrangian multiplier, with a unit of force per unit length, imposed on the base length \(L_{x}\) in the \(x\)-direction. Note that the Gaussian curvature of the curved rectangular membrane is zero, and therefore has no contribution to the total energy. ### The circular membrane For the circular membrane, which initially lies flat with a radius \(R_{0}\), the buckling is driven by a compressive stress \(f_{r}\) along the radial direction applied at the perimeter of the membrane.
When the stress exceeds a critical value, the membrane buckles, with the shape of the membrane depicted by the coordinates \([R(s),Z(s)]\), where \(s\) is the arclength along the meridian direction, as shown in Fig. 2. In contrast to the rectangular membrane, the total arclength \(S\) can change during buckling, but the total area \(A\) of the membrane remains invariant, as in the rectangular case. The angle \(\psi(s)\) spans between the tangential meridian direction and the radial direction. The total energy of the membrane reads \[F=\int\left[\frac{\kappa}{2}\left(c_{1}+c_{2}\right)^{2}+\bar{\kappa}c_{1}c_{ 2}\right]\mathrm{d}A+\sigma A+f_{r}\pi R_{b}^{2}, \tag{2}\] where the first integral is the bending energy, with \(c_{1}=\mathrm{d}\psi/\mathrm{d}s\) and \(c_{2}=\sin\psi/R\) the two principal curvatures of the membrane surface; the two terms in the square brackets represent contributions from the mean curvature and the Gaussian curvature, respectively. The Lagrangian multiplier \(\sigma\) in the second term imposes a constant surface area for the membrane during the buckling process, and can be interpreted as the membrane tension. The last term is the boundary energy, where the compressive stress \(f_{r}\) is also a Lagrangian multiplier, imposed on the base area \(\pi R_{b}^{2}\), with \(R_{b}\) the base radius of the membrane when it buckles. ### Boundary conditions For the free-hinge BC, the bending moment vanishes at the boundary. In the case of the rectangular membrane, this is expressed as the vanishing curvature \(c_{1}=0\). In the case of the circular membrane (Fig. 2(b)), it implies that \[\kappa(c_{1}+c_{2})+\bar{\kappa}c_{2}=0. \tag{3}\] Note that the Gaussian curvature modulus \(\bar{\kappa}\) appears in Eq. (3). This is the key reason why we can use the buckling protocol to measure \(\bar{\kappa}\) for the free-hinge BC, which will be elaborated later.
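Rearranging Eq. (3) makes the role of \(\bar{\kappa}\) at a free-hinge edge explicit:

```latex
% free-hinge boundary condition, Eq. (3):
\kappa\,(c_1 + c_2) + \bar{\kappa}\,c_2 = 0
\quad\Longrightarrow\quad
c_1 = -\Bigl(1 + \frac{\bar{\kappa}}{\kappa}\Bigr) c_2 \,,
% so the meridional curvature at the edge is fixed by the ratio
% \bar{\kappa}/\kappa; for \bar{\kappa} = 0 it reduces to c_1 = -c_2,
% i.e. vanishing mean curvature at the boundary.
```

Since the edge curvature of the buckled profile is set by \(\bar{\kappa}/\kappa\), the whole solution branch inherits this dependence, which is what the buckling protocol exploits.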
Figure 1: Illustration of a rectangular membrane deformed from (a) a flat shape, to a buckled shape under (b) free-hinge BC or (c) fixed-hinge BC. For the fixed-hinge BC, we simply fix the membrane angle \(\psi=0\) unless otherwise stated, as illustrated in Fig. 2(c) in the case of the circular membrane. ## III Results and discussion ### The buckling of a rectangular membrane The buckling of a rectangular membrane with a fixed-hinge BC has been studied in Refs. [15; 16] both analytically and via MD simulations. Here we numerically solve the shape equations, analyze the buckling process from an energetic point of view, and also take the free-hinge BC into consideration. The flat shape is always a trivial solution to the shape equations regardless of the stress \(f_{x}\) and the BCs, and the total energy \(E_{\rm f}\) of the flat shape increases linearly with the stress \(f_{x}\) due to the boundary energy (cyan line in Fig. 3). When the stress \(f_{x}\) exceeds a critical value, a new branch of solutions emerges due to membrane buckling (Fig. 3, black dotted line). The total energy of the buckled shape is lower than that of the flat shape, indicating that as the stress increases to the critical point, the rectangular membrane will experience a buckling transition, at which the membrane starts to bend with an increasing bending energy \(E_{\rm b}\) at the cost of a reduced boundary energy \(E_{\rm l}\). Near the critical stress, the buckled membrane remains relatively flat. As the stress builds up, the membrane becomes more and more bent (top panels in Fig. 3). In order to quantify the degree of bending, we define the strain \(\mu_{x}=(L_{0}-L_{x})/L_{0}\) and plot the stress-strain relationship in Fig. 4. Buckling is signalled by the nonzero value of the stress \(f_{x}\) at \(\mu_{x}=0\). We find that the stress required to bend the membrane to the same strain for the fixed-hinge BC is four times that for the free-hinge BC.
Furthermore, we compare our numerical results with the analytical result for the fixed-hinge BC derived in Ref. [16], \[f_{x}=\kappa\left(\frac{2\pi}{L_{0}}\right)^{2}\left[1+\frac{1}{2}\mu_{x}+ \frac{9}{32}\mu_{x}^{2}+\frac{21}{128}\mu_{x}^{3}+O(\mu_{x}^{4})\right], \tag{4}\] and find a good agreement between them (the red solid line and the red dashed line in Fig. 4). ### The buckling of a circular membrane In this section, we study the buckling process of a circular membrane under an isotropic compressive radial stress \(f_{r}\) with a unit of force per unit length. The membrane shape is assumed to remain axisymmetric upon buckling and thus can be depicted using its meridian profile. The effect of the Gaussian modulus is neglected for the moment, i.e., \(\bar{\kappa}=0\); it will be elaborated in the next section. It is found that the buckling behaviors are qualitatively different for the two types of BCs. For the sake of comparison, we introduce the area strain \(\mu_{r}=(A_{0}-A_{b})/A_{0}\) to reflect the buckling degree of the circular membrane, with \(A_{0}=\pi R_{0}^{2}\) the total surface area and \(A_{b}=\pi R_{b}^{2}\) the base area when the membrane buckles. For the free-hinge BC, the buckling process is quite similar to that of the rectangular membrane. Upon increase of the stress \(f_{r}\), a new branch of buckled solutions emerges in addition to the flat membrane (black dotted line and cyan line in Fig. 5(a)). The total energy of the buckled membrane, consisting of the bending energy \(F_{\rm b}\) and the boundary energy \(F_{\rm l}\), is lower than that of the flat membrane \(F_{\rm f}\), indicating the occurrence of a buckling transition. Figure 2: Illustration of a circular membrane deformed from (a) a flat circular shape, to a buckled shape under (b) free-hinge BC or (c) fixed-hinge BC. Figure 3: Effect of the stress on the free energy of the rectangular membrane based on (a) a free-hinge or (b) a fixed-hinge model. Here, \(S_{0}=L_{0}/2\). Figure 4: Stress-strain relationship of the rectangular membrane. The dashed red curve represents the analytical result Eq. (4) reported in Ref. [16]. The membrane remains almost flat near the transition point, and after that the buckling degree continuously increases with the stress \(f_{r}\) (top panels in Fig. 5(a)), which is manifested as a continuous stress-strain relationship in the red curve of Fig. 6. We analytically derive the critical stress \[f_{r}^{\text{crit}}=\kappa\frac{\left[x_{1}^{(0)}\right]^{2}}{R_{0}^{2}}, \tag{5}\] a result in good agreement with the numerical solution (the magenta pentagon in Fig. 6). For the fixed-hinge BC, the energy profiles become complicated and two buckled branches are found for a single stress \(f_{r}\). On one of the buckled branches, hereafter referred to as branch 1, the bending degree decreases with increasing stress \(f_{r}\) (the purple and the blue shapes in the top panels of Fig. 5(b)). On the other branch, hereafter referred to as branch 2, the base of the membrane is nearly closed (the green shape in the top panels of Fig. 5(b)). The total energy, consisting of the bending energy \(F_{\text{b}}\) and the boundary energy \(F_{\text{l}}\), is lower in branch 2 than in branch 1. The energy of the flat membrane intersects with branch 2 at a point where the membrane is nearly closed. All these results suggest that for the fixed-hinge BC there exists a first-order transition at the critical stress, beyond which a sudden and sharp membrane buckling occurs. In the stress-strain relationship, this is reflected in the sudden jump of the strain from zero to almost 1 when the stress goes beyond the critical point (the blue curve in Fig. 6). To further understand the origin of the first-order transition under the fixed-hinge BC, we calculate the buckling process of a circular membrane under different hinged-angles.
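Eq. (5) can be evaluated numerically. Here we assume, as the standard Bessel-zero notation suggests, that \(x_{1}^{(0)}\) denotes the first zero of the Bessel function \(J_{0}\) (\(\approx 2.405\)); this identification is our reading and is not stated explicitly in the text. The sketch below computes \(J_{0}\) from its integral representation and locates the zero by bisection:

```python
import math

def bessel_j0(x, n=2000):
    """J_0(x) = (1/pi) * int_0^pi cos(x sin t) dt, via the trapezoid rule."""
    s = 0.0
    for i in range(n + 1):
        t = math.pi * i / n
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.cos(x * math.sin(t))
    return s / n

def first_j0_zero(lo=2.0, hi=3.0, tol=1e-10):
    """Bisection for the first zero of J_0, approximately 2.4048."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def critical_stress(kappa, r0):
    """Eq. (5): f_r^crit = kappa * [x_1^(0)]^2 / R0^2."""
    return kappa * first_j0_zero() ** 2 / r0 ** 2
```

With this identification, \(f_{r}^{\text{crit}}\approx 5.78\,\kappa/R_{0}^{2}\).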
Note that for a nonzero hinged-angle, the membrane is already bent even at zero stress \(f_{r}\). Upon increasing the stress \(f_{r}\), if the hinged-angle is large, the total energy of the buckled membrane continuously increases with \(f_{r}\) (the orange line in Fig. 7). However, for small hinged-angles, a Gibbs triangle appears in the energy profile (the magenta, black and green lines in Fig. 7), which is the characteristic of a first-order transition. A further calculation tells us that the critical angle distinguishing between the first-order and the second-order transitions is around \(0.2571\pi\), as indicated by the red dotted line in Fig. 7. ### Determination of the Gaussian curvature modulus via circular membrane buckling with a free-hinge BC In this section, we study the effect of the Gaussian curvature modulus \(\bar{\kappa}\) on the buckling process of a circular membrane under the free-hinge BC condition Eq. (3), in which \(\bar{\kappa}\) explicitly appears. By virtue of the fact that the Gaussian curvature modulus \(\bar{\kappa}\) only contributes to the boundary bending moment, it makes no difference to the membrane shape equations, and therefore has no impact on the buckling process under the fixed-hinge BC, in which \(\psi=0\) has no dependence on \(\bar{\kappa}\). In Fig. 8, we show the total energy \(F=F_{\rm b}+F_{\rm l}\) of a buckled membrane as a function of the stress \(f_{r}\) with different Gaussian curvature moduli under the free-hinge BC. Figure 5: Dependence of the total energy of the circular membrane on the stress based on (a) free-hinge BC and (b) fixed-hinge BC, respectively. Figure 6: Stress-strain relationship of the buckled circular membrane. The pentagon indicates the analytical result for the critical buckling stress given by Eq. (5). Figure 7: Effect of the stress on the total energy of the circular membrane under the fixed-hinge BC for different hinged-angles.
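Such \(\bar{\kappa}\)-dependent stress-strain curves suggest estimating \(\bar{\kappa}\) by fitting simulated data against the family of numerical curves. A minimal grid-search sketch follows; the model curve is a hypothetical stand-in for the numerical stress-strain solutions, purely to illustrate the procedure:

```python
def model_stress(mu, kbar_over_k):
    """Hypothetical stand-in for the numerically computed curve
    f_r(mu; kbar/kappa) -- NOT the paper's solution, illustration only."""
    return 5.78 * (1.0 + 0.5 * mu) * (1.0 - 0.3 * kbar_over_k)

def fit_kbar(data, candidates):
    """Grid search over kbar/kappa minimizing squared stress residuals."""
    return min(candidates, key=lambda kk: sum((model_stress(mu, kk) - f) ** 2
                                              for mu, f in data))

# synthetic 'MD' data generated at kbar/kappa = -0.8
data = [(mu, model_stress(mu, -0.8)) for mu in (0.05, 0.1, 0.2, 0.4)]
candidates = [-1.0 + 0.1 * i for i in range(11)]   # -1.0, -0.9, ..., 0.0
best = fit_kbar(data, candidates)
```

Because the fit pools residuals from several stresses, it is less sensitive to noise in any single measurement than reading off the critical stress alone.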
a buckled membrane as a function of the stress \(f_{r}\) with different Gaussian curvature moduli under the free-hinge BC. It is found that the energy of the flat membrane intersects with the energy of the buckled membrane at two stresses (cyan lines in Fig. 8 (a)-(c)). After the first stress, the energy of the flat membrane remains lower than that of the buckled one until the second stress, indicating that the critical buckling actually occurs at the second one. It is found that this critical stress increases with the absolute value of the Gaussian curvature modulus \(|\bar{\kappa}/\kappa|\), as shown in Fig. 8(d). In principle, the Gaussian bending modulus can be determined from Fig. 8(d) if we can measure, in MD simulations, the critical stress needed to buckle a circular membrane under the free-hinge BC. However, the precision of this approach is limited because only a single quantity is measured. To overcome this issue, we note that the stress-strain relationships for different Gaussian curvature moduli are quite different (Fig. 9). Fitting the stress-strain relationship of the MD simulation with the numerical results at multiple stresses therefore provides a more robust way to estimate the Gaussian curvature modulus \(\bar{\kappa}\). Compared with measuring the fluctuation spectrum at the edge, obtaining the stress-strain relationship is more straightforward. We speculate that the method is also robust against the coarse-graining level of the lipid models and the treatment of the solvent, based on its performance in the measurement of the mean curvature modulus \(\kappa\) [16]. Testing the method with MD simulations will be part of our future work.

## IV Conclusion

In summary, we investigate the buckling of a rectangular membrane and a circular membrane under two BCs. It is found that for an initially flat rectangular membrane, as the stress increases, buckling occurs continuously under both free-hinge and fixed-hinge BCs. But for the initially flat circular membrane, the buckling behavior depends on its boundary condition.
For the free-hinge BC, with the increase of stress, buckling takes place continuously, a process the same as that of the rectangular membrane. But for the fixed-hinge BC, there exists a first-order buckling transition if the hinged-angle is small. Finally, we find an effective method to determine the Gaussian curvature modulus by using the stress-strain relationship of the circular membrane under the free-hinge BC.

Figure 8: Dependence of the free energy of the circular membrane on the stress under the free-hinge condition with three different Gaussian curvatures (a) \(\bar{\kappa}/\kappa=-0.4\), (b) \(\bar{\kappa}/\kappa=-0.8\) and (c) \(\bar{\kappa}/\kappa=-1.0\). (d) The critical stress \(f_{r}\) as a function of Gaussian curvature \(\bar{\kappa}/\kappa\) under the free-hinge condition.

Figure 9: The stress-strain relationship of the circular membrane with different Gaussian curvatures.

###### Acknowledgements.

We acknowledge financial support from National Natural Science Foundation of China under Grants No. 12147142, No. 11974292, No. 12174323, and No. 12004317, Fundamental Research Funds for Central Universities of China under Grant No. 20720200072 (RM), and 111 project No. B16029.

## Appendix A Theoretical details for rectangular membranes

For the rectangular membrane, one can obtain its principal curvature \(c_{1}=\dot{\psi}\). The elastic energy \(E\) in Eq. (1) can then be written as \(E=\kappa L_{y}\int_{0}^{L_{0}/2}\mathcal{L}\,\mathrm{d}s\), with the Lagrangian function \[\mathcal{L}=\frac{1}{2}\dot{\psi}^{2}+\bar{f}_{x}\cos\psi+\gamma\left(\dot{X}-\cos\psi\right)+\eta\left(\dot{Z}+\sin\psi\right), \tag{10}\] where \(\bar{f}_{x}=f_{x}/\kappa\), the overdots denote derivatives with respect to the arclength \(s\), and \(\gamma\) and \(\eta\) are the Lagrangian multipliers that enforce the geometric relations \[\dot{X}=\cos\psi,\qquad\dot{Z}=-\sin\psi.
\tag{11}\] A variation of the functional \(E\) gives \[\delta E=\int_{0}^{L_{0}/2}\mathrm{d}s\left\{\left[\frac{\partial\mathcal{L}}{\partial\psi}-\frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial\mathcal{L}}{\partial\dot{\psi}}\right]\delta\psi+\left[\frac{\partial\mathcal{L}}{\partial X}-\frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial\mathcal{L}}{\partial\dot{X}}\right]\delta X\right. \tag{12}\] \[\left.+\left[\frac{\partial\mathcal{L}}{\partial Z}-\frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial\mathcal{L}}{\partial\dot{Z}}\right]\delta Z+\frac{\partial\mathcal{L}}{\partial\gamma}\delta\gamma+\frac{\partial\mathcal{L}}{\partial\eta}\delta\eta\right\}\] \[-H\delta s\mid_{0}^{L_{0}/2}+\frac{\partial\mathcal{L}}{\partial\dot{\psi}}\delta\psi\mid_{0}^{L_{0}/2}+\frac{\partial\mathcal{L}}{\partial\dot{X}}\delta X\mid_{0}^{L_{0}/2}+\frac{\partial\mathcal{L}}{\partial\dot{Z}}\delta Z\mid_{0}^{L_{0}/2}, \tag{13}\] where \(H\equiv-\mathcal{L}+\dot{\psi}\partial\mathcal{L}/\partial\dot{\psi}+\dot{X}\partial\mathcal{L}/\partial\dot{X}+\dot{Z}\partial\mathcal{L}/\partial\dot{Z}\) is the Hamiltonian function, given by \[H=\frac{1}{2}\dot{\psi}^{2}-\bar{f}_{x}\cos\psi+\gamma\cos\psi-\eta\sin\psi. \tag{14}\] Having the bulk terms of Eq. (13) vanish leads to the shape equations \[\ddot{\psi}=\gamma\sin\psi+\eta\cos\psi-\bar{f}_{x}\sin\psi, \tag{15}\] \[\dot{\gamma}=0, \tag{16}\] \[\dot{\eta}=0. \tag{17}\] Having the boundary terms in Eq. (13) vanish, one can obtain the BCs. In particular, at the membrane tip \(s=0\), we have 4 BCs: \(\psi(0)=0\), \(X(0)=0\), \(\gamma(0)=\bar{f}_{x}-[\dot{\psi}(0)]^{2}/2\), and \(\eta(0)=0\). At the membrane base \(s=L_{0}/2\), we have 3 BCs: \(Z(L_{0}/2)=0\), \(X(L_{0}/2)=L_{x}/2\), and \(\psi(L_{0}/2)=0\) for the fixed-hinge BC or \(\dot{\psi}(L_{0}/2)=0\) for the free-hinge BC. In summary, Eqs. (11), (15), (16), and (17) constitute the full set of shape equations for rectangular membranes. They can be converted to six first-order ordinary differential equations.
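Since \(\gamma\) and \(\eta\) are constants by Eqs. (16) and (17), the \(\psi\) equation, \(\ddot{\psi}=\gamma\sin\psi+\eta\cos\psi-\bar{f}_{x}\sin\psi\), behaves like a pendulum equation whose first integral is the Hamiltonian (14); moreover, the tip condition \(\gamma(0)=\bar{f}_{x}-\dot{\psi}(0)^{2}/2\) makes \(H\) vanish identically along the profile. A short sketch that integrates this equation and checks both properties (the stress \(\bar{f}_{x}\) and tip slope \(\dot{\psi}(0)\) are arbitrary illustrative values, not the paper's data):

```python
import numpy as np
from scipy.integrate import solve_ivp

f_x = 1.0      # reduced stress \bar{f}_x (illustrative value)
dpsi0 = 0.6    # tip slope \dot{\psi}(0) (illustrative value)
gamma = f_x - 0.5 * dpsi0**2   # tip BC for gamma; gamma is constant by Eq. (16)
eta = 0.0                      # tip BC for eta; eta is constant by Eq. (17)

def rhs(s, y):
    # y = [psi, dpsi]; shape equation for psi with constant gamma, eta
    psi, dpsi = y
    return [dpsi, gamma * np.sin(psi) + eta * np.cos(psi) - f_x * np.sin(psi)]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, dpsi0],
                rtol=1e-10, atol=1e-12, dense_output=True)

s = np.linspace(0.0, 5.0, 201)
psi, dpsi = sol.sol(s)
# Hamiltonian, Eq. (14): conserved along s, and zero by the choice of gamma(0)
H = 0.5 * dpsi**2 - f_x * np.cos(psi) + gamma * np.cos(psi) - eta * np.sin(psi)
drift = float(np.max(np.abs(H - H[0])))
```

The conserved (and vanishing) Hamiltonian gives a convenient accuracy check for any numerical solution of the shape equations.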
Together with the unknown Lagrangian multiplier \(\bar{f}_{x}\) and the 7 BCs, we solve the problem with the bvp5c solver in MATLAB, which is designed for solving boundary value problems of ordinary differential equations.

## Appendix B Theoretical details for circular membranes

For the circular membrane, the meridian coordinates \(R\left(s\right)\) and \(Z\left(s\right)\) satisfy the geometric relations \[\dot{R}=\cos\psi,\qquad\dot{Z}=-\sin\psi. \tag{18}\] The elastic energy \(F\) in Eq. (2) can be expressed as \(F=2\pi\kappa\int_{0}^{S}\mathcal{L}\,\mathrm{d}s\), with the Lagrangian function \[\mathcal{L}=\frac{R}{2}\left(\dot{\psi}+\frac{\sin\psi}{R}\right)^{2}+\frac{\bar{\kappa}}{\kappa}\dot{\psi}\sin\psi+\bar{\sigma}R+\bar{f}_{r}R\cos\psi+\gamma\left(\dot{R}-\cos\psi\right)+\eta\left(\dot{Z}+\sin\psi\right), \tag{19}\] where \(\bar{\sigma}=\sigma/\kappa\), \(\bar{f}_{r}=f_{r}/\kappa\), and \(\gamma(s)\) and \(\eta(s)\) are Lagrangian multipliers that enforce the geometric relations in Eq. (18). The total arclength \(S\) is an unknown parameter to be solved for together with the shape equations. A variation of the functional \(F\) reads \[\delta F=\int_{0}^{S}\mathrm{d}s\left\{\left[\frac{\partial\mathcal{L}}{\partial\psi}-\frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial\mathcal{L}}{\partial\dot{\psi}}\right]\delta\psi+\left[\frac{\partial\mathcal{L}}{\partial R}-\frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial\mathcal{L}}{\partial\dot{R}}\right]\delta R\right.
\tag{20}\] \[\left.+\left[\frac{\partial\mathcal{L}}{\partial Z}-\frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial\mathcal{L}}{\partial\dot{Z}}\right]\delta Z+\frac{\partial\mathcal{L}}{\partial\gamma}\delta\gamma+\frac{\partial\mathcal{L}}{\partial\eta}\delta\eta\right\}\] \[-H\delta s\mid_{0}^{S}+\frac{\partial\mathcal{L}}{\partial\dot{\psi}}\delta\psi\mid_{0}^{S}+\frac{\partial\mathcal{L}}{\partial\dot{R}}\delta R\mid_{0}^{S}+\frac{\partial\mathcal{L}}{\partial\dot{Z}}\delta Z\mid_{0}^{S},\] where the Hamiltonian function \(H\equiv-\mathcal{L}+\dot{\psi}\partial\mathcal{L}/\partial\dot{\psi}+\dot{R}\partial\mathcal{L}/\partial\dot{R}+\dot{Z}\partial\mathcal{L}/\partial\dot{Z}\) can be expressed as \[H=\frac{R}{2}\left[\dot{\psi}^{2}-\left(\frac{\sin\psi}{R}\right)^{2}\right]-\bar{\sigma}R-\bar{f}_{r}R\cos\psi+\gamma\cos\psi-\eta\sin\psi=0. \tag{21}\] Setting the bulk terms of Eq. (20) to zero, we obtain the following shape equations \[\ddot{\psi}=\frac{\sin\psi\cos\psi}{R^{2}}-\frac{\dot{\psi}}{R}\cos\psi+\frac{\gamma}{R}\sin\psi+\frac{\eta}{R}\cos\psi-\bar{f}_{r}\sin\psi, \tag{22}\] \[\dot{\gamma}=\frac{1}{2}\dot{\psi}^{2}-\frac{\sin^{2}\psi}{2R^{2}}+\bar{\sigma}+\bar{f}_{r}\cos\psi, \tag{23}\] \[\dot{\eta}=0. \tag{24}\] The BCs can be obtained by setting the boundary terms in Eq. (20) to zero. In particular, at the membrane tip \(s=0\), we have 4 BCs: \(\psi(0)=0\), \(R(0)=0\), \(\gamma(0)=0\), and \(\eta(0)=0\). At the membrane base \(s=S\), we have 3 BCs: \(Z(S)=0\), \(R(S)=R_{b}\), and \(\psi(S)=0\) for the fixed-hinge BC or \(\kappa(\dot{\psi}+\sin\psi/R)+\bar{\kappa}\sin\psi/R=0\) for the free-hinge BC. In addition, we need to impose the incompressibility condition \[2\pi\int_{0}^{S}R\,\mathrm{d}s=\pi R_{0}^{2}. \tag{25}\] In summary, Eqs. (18), (22), (23), and (24) make up the full set of shape equations for circular membranes. They can be converted to six first-order ordinary differential equations.
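As a quick sanity check on the incompressibility condition (25): for the flat solution of the shape equations, \(\psi=0\), so \(R(s)=s\) and \(S=R_{0}\), and the constraint is satisfied identically. A minimal sketch (the radius is a placeholder value):

```python
import numpy as np
from scipy.integrate import quad

R0 = 1.0  # undeformed membrane radius (placeholder value)

# Flat solution: psi = 0, hence R(s) = s and total arclength S = R0.
# Left-hand side of the incompressibility condition, Eq. (25):
area, _ = quad(lambda s: 2.0 * np.pi * s, 0.0, R0)
# For the flat membrane this reproduces the fixed membrane area pi * R0**2.
```

The same integral, evaluated along a candidate buckled profile \(R(s)\), is what a BVP solver must hold equal to \(\pi R_{0}^{2}\) while adjusting the unknown arclength \(S\).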
Together with the two unknown parameters \(\bar{f}_{r}\) and \(S\), as well as the 7 BCs and the incompressibility constraint (25), we can solve the problem with the MATLAB solver bvp5c.

## Appendix C Analytical results for the critical buckling stress of a circular membrane under the free-hinge BC

For an almost flat membrane, the angle \(\psi\ll 1\). Under this approximation, we can get the linearized shape equation \[R^{2}\psi^{\prime\prime}+R\psi^{\prime}-\left(R^{2}\bar{\sigma}+1\right)\psi=0, \tag{10}\] where we have converted the function \(\psi(s)\) to \(\psi(R)\), with the prime denoting the derivative with respect to \(R\). The equation has a physically meaningful solution \[\psi\left(R\right)=C_{1}J_{1}\left(R\sqrt{-\bar{\sigma}}\right), \tag{11}\] only if \(\bar{\sigma}<0\). Here \(C_{1}\) is an arbitrary constant and \(J_{1}(x)\) denotes the Bessel function of the first kind. The free-hinge BC requires that the equation \[\psi^{\prime}+\frac{\psi}{R}=0 \tag{12}\] holds at \(R=R_{b}\). Substituting Eq. (11) into (12), we obtain \[J_{0}\left(R_{b}\sqrt{-\bar{\sigma}}\right)=0. \tag{13}\] In order to get the first buckling mode, we let \(R_{b}\sqrt{-\bar{\sigma}}=x_{1}^{(0)}\), the first zero of the Bessel function \(J_{0}(x)\). The resulting membrane tension reads \[\bar{\sigma}=-\frac{1}{R_{b}^{2}}\left[x_{1}^{(0)}\right]^{2}. \tag{14}\] The critical stress at which buckling occurs is the negative of the tension in Eq. (14), with the base radius \(R_{b}=R_{0}\). In this way, we obtain Eq. (5).
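The linearized analysis above can be checked numerically: with \(\bar{\sigma}=-\left[x_{1}^{(0)}/R_{b}\right]^{2}\), the profile \(\psi(R)=J_{1}\left(R\sqrt{-\bar{\sigma}}\right)\) should satisfy both the linearized shape equation (10) and the free-hinge BC (12). A sketch using SciPy's Bessel routines (the base radius is a placeholder value):

```python
import numpy as np
from scipy.special import j1, jvp, jn_zeros

Rb = 1.0                      # base radius (placeholder value)
a = jn_zeros(0, 1)[0] / Rb    # a = sqrt(-sigma_bar) at the buckling threshold
sigma_bar = -a**2             # membrane tension, Eq. (14)

R = np.linspace(0.05, Rb, 50)
psi = j1(a * R)                     # solution (11) with C_1 = 1
dpsi = a * jvp(1, a * R, 1)         # psi'(R)
d2psi = a**2 * jvp(1, a * R, 2)     # psi''(R)

# Residual of the linearized shape equation (10):
residual = R**2 * d2psi + R * dpsi - (sigma_bar * R**2 + 1.0) * psi

# Free-hinge BC (12) at R = Rb: psi' + psi/R reduces to a*J_0(a*Rb), which vanishes
bc = dpsi[-1] + psi[-1] / R[-1]
```

Both the equation residual and the boundary term vanish to numerical precision, confirming that the first zero of \(J_{0}\) indeed selects the first buckling mode.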